|
1. |
Back matter |
|
Analyst,
Volume 119,
Issue 10,
1994,
Page 038-039
Preview
|
PDF (2612KB)
|
|
ISSN:0003-2654
DOI:10.1039/AN99419BP038
Publisher: RSC
Year: 1994
Data source: RSC
|
2. |
Front cover |
|
Analyst,
Volume 119,
Issue 10,
1994,
Page 040-041
Preview
|
PDF (2646KB)
|
|
ISSN:0003-2654
DOI:10.1039/AN99419FX040
Publisher: RSC
Year: 1994
Data source: RSC
|
3. |
Contents pages |
|
Analyst,
Volume 119,
Issue 10,
1994,
Page 042-043
Preview
|
PDF (219KB)
|
|
ISSN:0003-2654
DOI:10.1039/AN99419BX042
Publisher: RSC
Year: 1994
Data source: RSC
|
4. |
Perspective. Statistics—the curse of the analytical classes |
|
Analyst,
Volume 119,
Issue 10,
1994,
Page 127-127
Michael Thompson,
Preview
|
PDF (166KB)
|
|
Abstract:
Analyst, October 1994, Vol. 119, 127N. The opinions expressed in the following article are entirely those of the author and do not necessarily represent the views of either The Royal Society of Chemistry or the Editor of The Analyst. Perspective: Statistics—The Curse of the Analytical Classes. Michael Thompson, Department of Chemistry, Birkbeck College, University of London, Gordon House, 29 Gordon Square, London WC1H 0PP. I am currently involved in a survey of statistical methods used by analytical chemists, undertaken for the VAM Programme. Recently, as part of this effort, I carefully read every paper in a current issue of The Analyst and a corresponding issue of the Journal of Analytical Atomic Spectrometry. Experience as a referee had taught me to expect some problems, but I was surprised to find flaws of a statistical or related nature in most of the papers where statistics was ancillary. Subsequently, I found a similar incidence in other analytical journals. Admittedly I was looking very carefully at the statistics, and many of the problems were minor. Even so, analytical chemists ought to be worried about the high incidence of problems, and try to find a remedy. It is more serious than simply avoiding an amateurish presentation: poor statistics sometimes masks poor science. Fortunately, despite their frequency, the problems encountered comprised a small range of types. Remedial action should therefore be simple to execute and effective. The following is a summary of my findings. (i) In 8% of the papers the statistics was simply incorrect: wrong conclusions were drawn as a result of faulty methodology. (ii) In 20% of the papers statistics should have been used to summarize data or support conclusions, but were not. (iii) Rather strangely, in 32% of the papers the authors provided ‘token statistics’, i.e., statistics that were provided unnecessarily and were never used or even referred to.
It was as if the authors had a vague idea that statistics was a good item to include in a paper, but did not know what to do with the results. (iv) In no less than 40% of the papers the authors had trouble with confidence limits: they were misunderstood, incorrectly defined, or simply not defined at all. (v) About 24% of the papers reported experiments that were weak or inappropriate in design, such that statistics could not be used legitimately to summarize the results. (vi) In many papers the authors used the correlation coefficient uncritically as a test for calibration linearity. (vii) In 16% of the papers the authors were confused about the distinction between regression and correlation. (viii) In 40% of papers, repeatability and reproducibility were confused with each other. This is a serious confusion as the two words have definite and distinct meanings. (ix) In 24% of papers the word ‘sample’ was used incorrectly. ‘Sample’ should not be used as a synonym for ‘test material’, ‘test portion’, ‘test solution’, ‘aliquot’, ‘analyte’ or ‘treated solution’. Analytical science probably needs a new catch-all word meaning ‘material subjected to analysis’ but ‘sample’ should not be used for this purpose in written work. (x) In 12% of papers there were graphs with ambiguous features. All lines, points and error bars on graphs must be defined or confusion will result. For example, a line could represent a theoretical relationship, a regression estimate, or some kind of unspecified ‘fit’ produced by an unknown graphics package, and readers need to be told which it is. What needs to be done about this state of affairs? The short-term aim should be to try to prevent statistical mistakes appearing in our journals. Authors should, therefore, consider the following suggestions.
(i) Design experiments that will answer the questions you are investigating, and where statistics can be applied with validity to the results if necessary. Don’t just collect some data together on an ad hoc basis and then see if statistics helps to make sense of it. (ii) Use graphical presentation of the results, whether or not statistics are to follow. Graphs and plots (histograms, dotplots, boxplots, letterplots and scatterplots) often say as much as statistics can but, more importantly, can demonstrate whether the data are suitable for statistical treatment (e.g., whether they conform to the assumptions underlying the statistical test). (iii) Avoid token statistics: do not provide statistics if you do not draw on them for your conclusions. (iv) Avoid the correlation coefficient unless you understand it. (v) Be careful about terminology. (vi) Find out what significance tests, confidence intervals and regression really do before you use them. (vii) If in doubt, ask a statistician. In the medium term it must be recognized that neither authors nor referees are sufficiently aware of the perils of inept statistics. This situation could be easily remedied, however. A short check list of points to look out for in papers could be prepared (perhaps by the Analytical Methods Committee) and published in the journal’s Instructions to Authors. A copy could be sent to referees with the submitted papers, and the referees could simply tick items that needed the authors’ attention. This measure could substantially reduce the incidence of problems. As a long-term issue, the problem can be remedied only by better education. There is a clear deficiency in the way that many analytical chemists have been trained in statistics. Without a working knowledge of statistics the analytical chemist simply cannot get to grips with the issues of data quality that are so prominent in current practice at a professional level. However, that is another story, which must be told from a different platform.
Reference: 1. The DTI Initiative on Valid Analytical Measurement (VAM), Laboratory of the Government Chemist, Queens Road, Teddington, Middlesex, UK, TW11 0LY. Letters for publication on other views on this subject are welcome and should be addressed to The Editor at the address given on the inside front cover.
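Point (vi) in the list above, the uncritical use of the correlation coefficient as a test for calibration linearity, is easy to demonstrate numerically. The sketch below uses hypothetical calibration data (not taken from the article): a visibly curved response still yields a Pearson r above 0.99.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical curved "calibration": response = conc + 0.05 * conc**2.
concs = list(range(11))
responses = [c + 0.05 * c ** 2 for c in concs]

r = pearson_r(concs, responses)
print(round(r, 4))  # about 0.9957, despite the obvious curvature
```

A residual plot or a lack-of-fit test would expose the curvature that r conceals, which is exactly the article's point.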
ISSN:0003-2654
DOI:10.1039/AN994190127N
Publisher: RSC
Year: 1994
Data source: RSC
|
5. |
Book reviews |
|
Analyst,
Volume 119,
Issue 10,
1994,
Page 129-132
R. C. Rooney,
Preview
|
PDF (673KB)
|
|
Abstract:
Analyst, October 1994, Vol. 119, 129N. Book Reviews. Basic Concepts of Analytical Chemistry. By S. M. Khopkar. Pp. xvi + 368. Wiley (Eastern). 1993. ISBN 0-85226-461-5. I am afraid that my first impression on picking up this book was of the poor quality of production; it brings home sharply how used we have become in the West to high quality paper and printing, good binding and so on. It is obvious that this book would have a very short life in the day-to-day life of the student or the laboratory. The back cover makes some optimistic claims in suggesting that a book of just over 350 pages can be the reference text for workers whose main concern is to keep pace with modern developments in such diverse fields as classical analytical methods, solvent extraction, GC, HPLC, ion exchange, UV-VIS, IR, AA and luminescence spectroscopy, not to mention NMR, ESR, mass spectrometry, electroanalytical chemistry, thermoanalytical methods, radiochemistry and on-line analysis; needless to say, it does not succeed. The contents are very varied in quality; the first ten chapters take up 108 pages and are in my view the most useful part of the whole book. Chapter 1 gives an introduction to chemical analysis, Chapter 2 is a very good short treatment of reliability of data and Chapter 3 a useful short treatment of sampling and its importance. The next five chapters on gravimetric analysis, volumetric methods, redox, precipitation and complexometric titrations maintain the standard, as do Chapters 9 and 10 on solvent extraction and ion exchange.
The next few chapters on chromatography in all its forms are less attractive; the introduction to the technique lays down the foundations well but should hardly be necessary for the postgraduate student at whom the book is said to be aimed, and the actual chapters on the individual techniques contain too little information to be of much use to him. There are 25 pages on chromatography in general followed by descriptions of adsorption and partition chromatography, but the author manages to dispose of gas chromatography in 6 pages and HPLC in just over 4. ‘I would like to be able to recommend this book purely for the sake of the first few chapters, but I am afraid that the other two thirds prevent me from doing so.’ This then sets the tone for the rest of the book, with skimpy and in some cases doubtful information on a wide range of analytical techniques. I would take issue with the statement that the introduction of the various spectroscopic methods made trace analysis a simple matter, and I was unhappy to see fluorescence described as owing to the scattering of the incident light in the introductory chapter on spectroscopic methods, although to be fair it is not then included in Chapter 20 on light scattering methods but in the next chapter on molecular luminescence spectroscopy. The chapters on atomic spectroscopy leave much to be desired, and give the impression of being based on manufacturer’s literature rather than the author’s own experience. No mention is made of emission spectroscopic detection systems other than the photographic plate, and plasma sources are mentioned only once and that in passing; almost as much space is devoted to flame photometry as to all other types of atomic emission techniques. The chapter on atomic absorption suffers from the same general criticisms; few analysts actually in the field would, for example, consider filter flame photometers comparable to an atomic absorption spectrophotometer in terms of general usefulness!
It would be an unfortunate student who took as gospel statements such as ‘generally a flame is not preferred as atomizer due to interferences and striking back effects’ and ‘non-flame methods have to be preferred to the use of a flame’. In electrothermal atomic absorption, the student is informed that ‘a sample of 1 to 2 µl is used’ and ‘as source of radiation a continuous source is required’. Perkin-Elmer would probably be unhappy with the statement that in atomic absorption a double beam instrument is uncommon, especially in a book written in the early 1980s. Most of the remaining chapters suffer from similar defects and I feel that if an author can only devote five pages to the entire gamut of thermoanalytical techniques and eight pages to mass spectrometry in all its forms, he would have been better advised to have mentioned them only in passing and referred the student to better works on the subjects. Overall, I would like to be able to recommend this book purely for the sake of the first few chapters, but I am afraid that the other two thirds prevent me from doing so. R. C. Rooney, Rooney Laboratories Ltd., Basingstoke, UK. Bioluminescence and Chemiluminescence. Edited by A. A. Szalay, L. J. Kricka and P. Stanley. Pp. xviii + 548. Wiley. 1993. Price £90.00. ISBN 0-471-94164-6. This volume contains the proceedings of the VIIth International Symposium on Bioluminescence and Chemiluminescence that was held in Banff, Canada, March 14-18, 1993. It followed previous symposia in the series (for which the proceedings were also published in book form) in Brussels (1978), San Diego (1980), Birmingham (1984), Freiburg (1986), Florence (1988) and Cambridge (1990).
The symposium covered a wide range of fundamental and applied aspects of the subject but the main themes of the meeting were the molecular biology of bioluminescence and low light level imaging techniques. The contents are divided into five sections and cover the following subject areas: instrumentation for light detection (10 papers); molecular biology and biochemistry (37 papers); chemiluminescent and bioluminescent assays (42 papers); cellular luminescence (12 papers); and general chemiluminescence and bioluminescence (8 papers). There is also a brief combined subject and author index. All of the papers are compiled from camera ready manuscripts (mostly of five pages length) and were not subjected to peer review prior to publication. ‘provides a series of ‘snap shots’ of the current status of bioluminescence and chemiluminescence.’ The two sections of most interest to analytical chemists are those on instrumentation and chemiluminescence/bioluminescence assays. The predominant trend in the instrumentation section is the discussion of CCD (charge coupled device) detection systems for imaging purposes, although portable luminometers are also considered. The assay section has a significant clinical bias and the methods are based predominantly on batch procedures. Applications include the determination of antioxidant activity, immunoassays (e.g., for progesterone, free thyroxine and testosterone), DNA-based assays and enzyme coupled bioluminescence assays.
There is only one paper on flow injection and high-performance liquid chromatography and this describes the electrochemiluminescent determination of biologically significant substances but contains no substantial results. There are two interesting articles that point the way to a widening of the applications base for chemiluminescence and bioluminescence assays, namely rapid and simple chemiluminescence assays for water quality monitoring and the bioluminescence ATP assay for microbiological monitoring in foods and beverages. Overall this is a comprehensive and well presented account of the proceedings of the symposium and as such provides a series of ‘snap shots’ of the current status of bioluminescence and chemiluminescence. The papers are too brief and the focus is too much on fundamental processes, however, for the book to be of major interest to the general analytical chemistry readership. Paul Worsfold, Department of Environmental Sciences, University of Plymouth, UK. DECHEMA Corrosion Handbook. Corrosive Agents and Their Interaction With Materials. Volume 12. Chlorinated Hydrocarbons-Chloroethanes, Phosphoric Acid. Edited by Gerhard Kreysa and Reiner Eckermann. Pp. ix + 352. VCH. 1993. Price DM775.00; £286.00. ISBN 3-527-26663-1 (VCH, Verlagsgesellschaft); 0-89573-633-0 (VCH, Publishers). All materials corrode to some extent, and in this materialistic world it is of great concern to us all as it poses a world-wide economic problem. To reduce, inhibit, or prevent corrosion requires knowledge of the chemical and mechanical properties of the material in passive and aggressive media, but finding this knowledge is proving more and more difficult as new materials are developed and old ones modified for new applications. Of the small number of corrosion sourcebooks or handbooks, one of the best is the Dechema Corrosion Handbook, a translation of the Dechema-Werkstoff-Tabelle which has been continually updated for the past 40 years.
Volume 12 deals with the corrosive agents, chlorinated hydrocarbons and phosphoric acid, their synthesis, properties, and interactions with all the familiar metallic materials, non-metallic materials, organic materials, and materials with special properties. ‘Of the small number of corrosion sourcebooks or handbooks, one of the best is the Dechema Corrosion Handbook, a translation of the Dechema-Werkstoff-Tabelle which has been continually updated for the past 40 years.’ This volume is a rich treasure of information and guidance distilled from thousands of official and unofficial sources. It is well written, presented, and indexed, and should be essential reading for production, material, corrosion, and process engineers and chemists, and for designers, constructors, and analysts who are involved in the fabrication and in the use of materials for such corrosive media. J. B. Craig, Department of Chemistry, University of Aberdeen, UK. Surface Characterization of Advanced Polymers. Edited by Luigia Sabbatini and Pier Giorgio Zambonin. Pp. xiv + 306. VCH. 1993. Price DM220.00; £73.00.
ISBN 3-527-28512-1 (VCH, Weinheim); ISBN 1-56081-270-2 (VCH, New York). This book, so the copy on the cover states, ‘provides a comprehensive approach to the surface analysis of polymers of technological interest by means of modern electron and ion spectroscopies (XPS, TOF-SIMS, ISS, HREELS)’. The four techniques considered in this book are at rather different stages of development; X-ray photoelectron spectroscopy (XPS) has been applied to polymer surfaces for over 20 years and can be considered a mature analytical technique; secondary ion mass spectrometry (SIMS) on the other hand is just over a decade away from its first application to polymers and is now widely used in polymer surface analysis; high resolution electron energy loss spectroscopy (HREELS) and ion scattering spectroscopy (ISS) are relative newcomers and their application in polymer research is still the preserve of a few groups worldwide. The coverage in the book reflects this level of seniority, with XPS receiving around 40%, SIMS around a quarter and the other two substantially less. The first chapter of the book, by Desimoni and Zambonin, reviews the basic principles of the four methods and provides a good introduction to the following chapters. Of particular use is the presentation of technique summaries in tabular form, enabling the newcomer to gain an impression of the role of each of the techniques in polymer analysis. This chapter concentrates, as perhaps it should, on the mechanics of the techniques, leaving the subtleties to the individual experts responsible for subsequent chapters. The contributions by Pireaux (HREELS) and Gardella and co-workers (ISS) are admirable reviews of the current state of the art of these emerging techniques.
Although yet to be routinely applied in polymer research, it is clear that both methods have important contributions to make and these authors are at the forefront of advances in HREELS and ISS. It is timely to provide such reviews as they indicate to all those in polymer surface characterization the future potential of the methods. ‘This chapter is without doubt the most thorough exposition of polymer SIMS published to date’. The chapter on SIMS of polymer surfaces [not merely time of flight (TOF)-SIMS as suggested on the cover] by Reed and Vickerman is a major tour-de-force, and accounts for almost a quarter of the book. The authors take the reader through all the important aspects of polymer SIMS; ion generation, static conditions for polymers, spectral interpretation and quantification, as well as mentioning important related fields such as tandem MS for the elucidation of particularly complex spectral components. The reference list is extensive (3 pages) and provides the reader with easy access to all the benchmark publications in polymer SIMS. This chapter is without doubt the most thorough exposition of polymer SIMS published to date and is, perhaps, a reason in itself for acquiring the volume. The coverage of XPS, in four chapters amounting to almost 40% of the book, is somewhat akin to the proverbial curate’s egg. The chapter by Sherwood on data analysis is excellent and is essentially an update of his 1990 review. However, this chapter is for the XPS expert rather than the novice. The contribution from Chilkoti and Ratner covers the use of derivatization methods, essentially for XPS use but also SIMS. This reviews a difficult area very well and sets out the possible reagents and procedures in a clear and concise manner. The chapter from the Bari Group (including the two editors) on conducting polymers is a comprehensive bibliography of the XPS of conducting polymers, but offers no assistance on some of the contradictory evidence in the literature. This is a shame as it detracts from the over-all appeal of the chapter, and the authors must be in an ideal position to redress some of the misconceptions that arose from early work on these materials. What is missing from this book is an intermediate level discourse on XPS. Where, for instance, are examples of valence band XPS, the use of different photon sources for depth profiling, angle resolved XPS used in a quantitative manner to produce compositional depth profiles, the use of the π → π* shake-up satellite to estimate the degree of aromaticity, imaging XPS, and so on? It appears that the authors have fallen into the all too easily accessible trap when dealing with a well developed analytical technique. They have allowed their chosen writers (all international experts in their own right) to concentrate on their own specialities, rather than aiming to give a thorough coverage of the technique. In spite of this fairly substantive criticism, the editors have gone a long way to achieving their stated aims with this book, which will be invaluable to those entering the field of polymer surface analysis. It certainly warrants purchase by scientific and technological libraries, and the review copy has received extensive use in the reviewer’s laboratory since it arrived several months ago! John F. Watts, Department of Materials Science and Engineering, University of Surrey, UK. Molecular Interactions in Bioseparations. Edited by That T. Ngo. Pp. xvii + 570. Plenum. 1993. Price US$95.00. ISBN 0-306-44435-6. It is unfortunate that the title of this book will mislead potential readers seeking their information through titles containing the now generic word ‘affinity’.
Although the title properly describes the underlying thesis of the affinity technique, the book is wholly concerned with affinity science. It really is a pity ‘affinity’ is not in its title; it is probable a wider audience would be drawn. The contents are an excellent up-to-date compilation of the broad spectrum of approaches now available and represent one of the most complete bibliographies of affinity technologies. The division into six sections is useful in directing readers into specific areas. In recent years indexes in many books have either been poorly prepared or so limited that it is difficult to search for specific information. This trap is to an extent avoided by this division although, again, indexing could have been more comprehensive. After the introductory chapter the remainder is neatly separated into applications involving biological ligands, immunoseparations, biomimetic ligands, novel concepts and affinity-related techniques. ‘The contents are an excellent up-to-date compilation of the broad spectrum of approaches now available and represents one of the most complete bibliographies of affinity technologies.’ The inclusion of novel and related techniques is particularly valuable and is a most useful demonstration of the very wide applicability of affinity technologies. Most users only meet the technique when separating proteins through chromatography columns. Those users may then be surprised to find affinity devices are commonly used for the extracorporeal extraction of recirculating blood plasma. Extracorporeal medical devices place the spotlight on the highest possible quality requirements; inert supports, bonding chemistries and potential leakage are all examined in some detail. The introduction of this very readable section will surely assist in laying the perennial ghost of leakage, perceived as the primary reason affinity matrices should not be used for the manufacture of protein pharmaceuticals.
The perception that all affinity media leak potentially toxic products is very widely held, particularly in the USA (much less so in Europe). The continuous recirculation of live patients’ blood is probably the most exacting test of all in terms of leakage and toxicity. Furthermore, the seal of approval has been given by regulatory authorities, including the FDA. The versatility of extracorporeal extraction is exemplified by citing the removal of toxic metals, toxic chemicals (e.g., Paraquat), antibodies and others. Although this section may be of peripheral interest to specialists in affinity chromatography, it will assist in defeating the erroneous perception that leakage is an inevitable feature of affinity systems. Ion exchange, hydrophobic interaction and gel filtration currently dominate protein separations but are recognized as outdated when compared with the inherent high efficiencies and attractive economics of affinity systems. Comparison between these and affinity technologies was not developed. Although not a major omission in this otherwise excellent review of broad-spread affinity applications, it is admitted the editor is working under difficult circumstances. The earliest publishers of applications data are usually academics but they have limited opportunity (or desire) to study manufacturing costs, the primary driving force favouring affinity systems. In contrast very few companies publish the economic advantages of their processes for competitive reasons. The inclusion of a chapter specifying technical and economic advantages would have added an extra dimension and broadened the appeal to pilot and process engineers. The editing is excellent; not a single typographic error was detected. The cover price is lower than most recent books covering affinity technologies and represents good value. K. Jones, Affinity Chromatography Ltd., Isle of Man. Bioaffinity Chromatography. Second Edition. By Jaroslava Turkova. Journal of Chromatography Library. Volume 55. Pp.
xviii + 800. Elsevier. 1993. Price Dfl. 495.00; US$282.75. ISBN 0-444-89030-0. This edition is a complete rewrite of the 1st edition, published in 1978, and reflects the explosion of interest in affinity chromatography as a specific method to isolate biological molecules. Since the last edition the use of antibodies, covalently linked to solid supports, as a means of purifying antigens, has become widespread owing to the availability of large amounts of monoclonal antibodies through hybridoma technology. The fact that antibodies can be obtained to almost any ligand, from small organic compounds to biological macromolecules, means that adsorption onto immobilized antibodies is now a favoured method for biospecific chromatography. There have also been advances in classical affinity chromatography over immobilized ligand, particularly with the variety of solid supports available and the use of dyes as ligands. These aspects are well covered in this book. As would be expected in a volume about affinity chromatography there are chapters on choices of ligands and supports, coupling procedures and elution conditions. There is a chapter on immunoassay which seems out of place in a book about chromatography. The strength of this book is the extensive (200 pages) tabulation of examples from the literature. Thus, this makes a suitable starting point for those wishing to develop a method for their own particular separation problem. The size of this table is also, in some ways, a weakness, making it difficult to locate entries of a particular type. For example, I wanted to find references on the use of aminopropyl-activated glass as a support. I had to wade through the whole table in order to pick out the required few references. Some sort of cross indexing would have been helpful.
‘a major reference work on affinity chromatography.’ A minor criticism of this book, and indeed of other books on affinity chromatography, concerns the lack of guidance on the best techniques for a researcher starting from scratch with a separation problem. What are the best solid supports, spacer arm lengths, elution conditions etc.? Where does the biochemist who wants to use an affinity step in the purification of his/her enzyme or receptor start? Maybe this is an unfair question that cannot be simply answered because of the large number of variables involved. In my experience, unless affinity chromatography works after a small number of attempts, the researcher usually gives up without trying further variations. Undoubtedly, this book is a major reference work on affinity chromatography that deserves a place in any biochemistry library. The author is to be congratulated for the enormous amount of work that must have gone into it. Michael H. Beale, Department of Agricultural Sciences, University of Bristol, UK. Physics, Chemistry, and Technology of Solid State Gas Sensor Devices. By Andreas Mandelis and Constantinos Christofides. Volume 125 in Chemical Analysis: A Series of Monographs on Analytical Chemistry and Its Applications. Pp. xxiii + 324. Wiley. 1993. Price £58.00. ISBN 0-471-55885-0. In recent years the heightened general awareness of safety and environmental issues has led to an upsurge in the volume of related research in areas such as solid-state gas sensors. As the authors of this book rightly point out, the pace of these developments has been such as to allow little opportunity for the publication of associated textbooks or monographs that provide a co-ordinated view of the field. In such a context, this monograph makes a timely and welcome addition to the small number of general texts that deal with the subject of solid-state gas sensors.
After brief introductory and general theory (gas-surface physico-chemical interactions) chapters, all of the major device types are treated according to their generalized mode of operation, with separate chapters on sensors that utilize semiconductive, photonic/photoacoustic, fibre-optic, piezoelectric, surface acoustic wave and thermal/pyroelectric operating principles. The typical format for each chapter includes a short historical perspective, a detailed treatment of operational theory, a review of relevant experimental results from the literature, and in some cases, fabrication technologies and device performance comparisons for the general sensor type in question. The book concludes with short chapters on future trends and on performance comparisons of the different sensor types for the detection of hydrogen. ‘this monograph makes a timely and welcome addition to the small number of general texts that deal with the subject of solid-state gas sensors.’ The authors attempt, with some success, to use, as a unifying theme throughout the book, the various ways in which the hydrogen-palladium interaction can be exploited in different sensor types. While this approach is certainly useful in serving to focus discussion on the many disparate types of solid-state gas sensors and in comparing their performances, it inevitably leads to some duplication when considering their theories of operation. It should also be said that the detection of gases other than hydrogen is dealt with, in general, only to a limited extent, as is the treatment of chemiresistive gas sensors (a mere 13 pages in the latter case). Both of these deficiencies are mitigated by the understandable need to realize a monograph of manageable size, and indeed it could also be argued that the subject of chemiresistors has been generously treated in the other available texts on chemical sensors.
Mandelis and Christofides have made the book attractive to the reader from a presentational viewpoint, by the extensive use of diagrams and graphs. The over-all effect would have been enhanced further, however, if all of the figures were of a uniformly high quality. Another positive feature of this monograph is the large number of useful references (over 650, approximately 20 of which are post-1990). As someone who is involved in teaching and research on solid-state gas sensors, I can say that this book will be invaluable as a guide for professional researchers and users of gas sensor technology. Whilst one can envisage that the monograph (in particular the theoretical sections) will find usage in undergraduate sensor courses, the cost may be somewhat prohibitive for widespread adoption as a course text. James McMonagle, Chemical and Life Sciences Department, University of Limerick, Ireland.
ISSN:0003-2654
DOI:10.1039/AN994190129N
Publisher: RSC
Year: 1994
Data source: RSC
|
6. |
Conference diary |
|
Analyst,
Volume 119,
Issue 10,
1994,
Page 133-137
Preview
|
PDF (449KB)
|
|
Abstract:
Analyst, October 1994, Vol. 119 133N Conference Diary
1994 November
2: Spectroscopy in Process Analysis, Hull, UK. Contact: Dr. J. S. Lancaster, BP Chemicals Ltd., Saltend, Hull, UK HU12 8DS. Tel: +44 (0) 482 894803. Fax: +44 (0) 482 892266
2-4: 14th International Symposium on the Preparation and Analysis of Proteins, Peptides and Polynucleotides (ISPPP '94), Heidelberg, Germany. Contact: Secretariat ISPPP '94, BO Conference Service, P.O. Box 100 78, S-77010 Uppsala, Sweden. Tel: +46 18 165 060. Fax: +46 18 304 074
6-12: Third Rio Symposium on Atomic Spectrometry, Caracas, Venezuela. Contact: Professor Jose Alvarado, Universidad Simon Bolivar, Departamento de Quimica, Laboratorio de Absorcion Atomica, Apartado Postal No. 89000, Caracas 1080-A, Venezuela. Fax: +58 2 938322
7-9: Impact of Nucleic Acid-based Technology: Revolution in Clinical Diagnosis, Applications and Research, Amsterdam, The Netherlands. Contact: Ben Keddy, Cambridge Healthtech Institute, Bay Colony Corporate Center, 1000 Winter Street, Suite 3700, Waltham, MA 02154, USA. Tel: +1 617 487 7989. Fax: +1 617 487 7937
9-11: 11th Montreux Symposium on Liquid Chromatography-Mass Spectrometry (LC/MS; SFC/MS; CE/MS; MS/MS), Montreux, Switzerland. Contact: M. Frei-Hausler, Postfach 46, CH-4123 Allschwil 2, Switzerland. Tel: +41 61 4812789. Fax: +41 61 4820805
10: Calorimetry and Thermal Methods Applied to Construction Materials, London, UK. Contact: SCI Conference Secretariat, 14/15 Belgrave Square, London, UK SW1X 8PS. Tel: +44 (0) 171 235 3681. Fax: +44 (0) 171 823 1698
10-11: 17th International Conference on Chemistry, Bio Sciences, and Environmental Pollution, New Delhi, India. Contact: Dr. V. M. Bhatnagar, Alena Chemicals of Canada, P.O. Box 1779, Cornwall, Ontario, Canada K6H 5V7. Tel: +1 613 932 7702
13-18: Annual Eastern Analytical Symposium, New Jersey, USA. Contact: EAS, P.O. Box 633, Montchanin, DE 19710-0633, USA. Tel: +1 302 453 6218. Fax: +1 302 738 5275
14: Fundamentals of Toxicology and Chemical Risk Assessment, Newcastle, UK. Contact: BICS International, City Headquarters, 1st Floor, Chandos House, 12-14 Berry Street, London, UK EC1V 0AQ. Tel: +44 (0) 171 490 2076. Fax: +44 (0) 171 490 2086
16: Measurement of Radioactivity, Weybridge, UK. Contact: Dr. P. Warwick, Department of Chemistry, Loughborough University, Leicester, UK LE11 3TU. Tel: +44 (0) 509 222585. Fax: +44 (0) 509 233163
16: Fundamentals of Toxicology and Chemical Risk Assessment, Aberdeen, UK. Contact: BICS International, City Headquarters, 1st Floor, Chandos House, 12-14 Berry Street, London, UK EC1V 0AQ. Tel: +44 (0) 171 490 2076. Fax: +44 (0) 171 490 2086
17: Impact of the New Biophysics on Structural Biochemistry, London, UK. Contact: Mr. A. J. Crooks, Honorary Secretary, Royal Society of Chemistry, Burlington House, Piccadilly, London, UK W1V 0BN. Tel: +44 (0) 722 334974
17: The Development of Chromatographic Methods, Greenford, UK. Contact: Dr. Diana Simpson, Analysis For Industry, Factories 2/3, Bosworth House, High Street, Thorpe-le-Soken, Essex, UK CO16 0EA. Tel: +44 (0) 255 861714
18-22: Joint Oil Analysis Program International Condition Monitoring Conference, Pensacola, USA. Contact: Technical Support Center, Joint Oil Analysis Program, Building 780, Naval Air Station, Pensacola, FL 32508, USA. Tel: +1 904 452 3191
24-26: 5th International Symposium on Advances in Electrochemical Science and Technology, Madras, India. Contact: The Secretary, Society for Advancement of Electrochemical Science and Technology, Karaikudi 623 006, India
30: Clinical Applications of Electroanalysis, Edgbaston, UK. Contact: A. E. Bottom, ABB Kent-Taylor Ltd., Oldends Lane, Stonehouse, UK GL10 3TA. Tel: +44 (0) 453 826661. Fax: +44 (0) 453 826358
December
5-6: Cell Adhesion Molecules and Inflammation: New CAMs, Applications and New Drug Development, San Francisco, USA. Contact: Ben Keddy, Cambridge Healthtech Institute, Bay Colony Corporate Center, 1000 Winter Street, Suite 3700, Waltham, MA 02154, USA. Tel: +1 617 487 7989. Fax: +1 617 487 7939
5-7: Analytical Quality Control and Reference Materials: Life Sciences, Rome, Italy. Contact: Roberto Morabito, ENEA Casaccia, Environmental Department, Via Anguillarese 301, I-00060 Rome, Italy. Tel: +39 6 304 84933. Fax: +39 6 304 86571
13-16: DNA-Fingerprinting: 3rd International Conference, Hyderabad, India. Contact: Dr. Lalji Singh, Centre for Cellular and Molecular Biology, Hyderabad 500 007, India. Fax: +91 40 85 1195
15: Recent Advances in Technologies for the Study of Drug Metabolism, London, UK. Contact: Mr. A. Crooks, "Cartref", 35 Queensberry Road, Salisbury, Wiltshire, UK SP1 3PH. Tel: +44 (0) 722 334974
20: Quantitative LC/MS, London, UK. Contact: Dr. C. Eckers, SmithKline Beecham Pharmaceuticals, The Frythe, Welwyn, Hertfordshire, UK AL6 9AR
1995 January
8-13: 1995 Winter Conference on Plasma Spectrochemistry, Cambridge, UK. Contact: Janice M. Gordon, Winter Conference on Plasma Spectrochemistry, Royal Society of Chemistry, Thomas Graham House, Science Park, Milton Road, Cambridge, UK CB4 4WF. Tel: +44 (0)223 420066. Fax: +44 (0)223 420247
18-20: Validation in Pharmaceutical Analysis, York, UK. Contact: Dr. J. A. Clements, Room 403, Royal Pharmaceutical Society of Great Britain, 1 Lambeth High Street, London, UK SE1 7JN
29-2/2: 7th International Symposium on High Performance Capillary Electrophoresis (HPCE '95), Würzburg, Germany. Contact: Shirley E. Schlessinger, HPCE '95, Suite 1015, 400 East Randolph Drive, Chicago, IL 60601, USA. Tel: +1 312 527 2011
February
6-8: International Conference on Arsenic in Ground Water: Cause, Effect and Remedy, Calcutta, India. Contact: D. Chakraborti, School of Environmental Studies, Jadavpur University, Calcutta 700 032, India. Tel: +91 33 473 5233. Fax: +91 33 473 4266
7-10: 4th International Conference on Automation, Robotics and Artificial Intelligence Applied to Analytical Chemistry and Laboratory Medicine, Montreux, Switzerland. Contact: SCITEC, Avenue de Provence 20, CH-1000 Lausanne 20, Switzerland. Tel: +41 21 624 1533. Fax: +41 21 624 1549
15: Alternatives to Chemical Solvents Restricted by the Montreal Protocol, London, UK. Contact: Ms. Paula Elliott, Secretary, Analytical Division, The Royal Society of Chemistry, Burlington House, Piccadilly, London, UK W1V 0BN. Tel: +44 (0)171 437 8656. Fax: +44 (0)171 734 1227
19-24: OFC '95: Optical Fibre Communication Conference, San Diego, USA. Contact: Meetings Department, Optical Society of America, 2010 Massachusetts Avenue, NW, Washington, DC 20036-1023, USA. Tel: +1 202 223 9034. Fax: +1 202 416 6100
March
6-10: PITTCON '95, Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy, New Orleans, USA. Contact: Pittsburgh Conference, Suite 332, 300 Penn Centre Boulevard, Pittsburgh, PA 15235-9962, USA
9-10: Advances in Genetic Screening and Diagnosis of Human Diseases, San Francisco, USA. Contact: Ben Keddy, Cambridge Healthtech Institute, Bay Colony Corporate Center, 1000 Winter Street, Suite 3700, Waltham, MA 02154, USA. Tel: +1 617 487 7989. Fax: +1 617 487 7937
13-16: Trace Elements, Free Radicals, Cytokines, Chromosomal Analysis and Tumour Markers in Clinical Medicine and Biochemistry, Kuwait City, Kuwait. Contact: Hussain Dashti, Department of Surgery, Faculty of Medicine, Kuwait University, P.O. Box 24923, Safat, Kuwait. Fax: +965 531 8454
28-31: Scanning 95, Seventh Annual International Microscopy Meeting, California, USA. Contact: Mary K. Sullivan, Foundation for Advances in Medicine and Science, P.O. Box 832, Mahwah, NJ 07430-0832, USA. Tel: +1 201 818 1010. Fax: +1 201 818 0086
28-30: Applications of Modern Mass Spectrometric Methods to Plant Science Research, Swansea, UK. Contact: Dr. R. P. Newton, Biochemistry Group, School of Biological Sciences, University College, Swansea, Wales, UK SA2 8PP. Tel: +44 (0) 792 295 377. Fax: +44 (0) 792 295 447
28-31: SCANNING 95, California, USA. Contact: Mary L. Gilmour, FAMS, P.O. Box 832, Mahwah, NJ 07430-0832, USA. Tel: +1 201 818 1010. Fax: +1 201 818 0086
29-30: Atomic Spectrometry Updates, Bristol, UK. Contact: J. R. Dean, Department of Chemical and Life Sciences, University of Northumbria at Newcastle, Ellison Building, Newcastle upon Tyne, UK NE1 8ST. Tel: +44 (0) 91 227 3517. Fax: +44 (0) 91 227 3519
April
3-6: 7th Instrumental Analysis Symposium, Madrid, Spain. Contact: 7as Jornadas de Analisis Instrumental (JAI), Expoanalitica & Biociencia, Avda. Reina Ma Cristina, Palacio no. 1, 08004 Barcelona, Spain. Tel: +34 3 423 3101. Fax: +34 3 423 6348
10-13: Annual Chemical Congress (with Analytical Session), Edinburgh, UK. Contact: Dr. J. F. Gibson, The Royal Society of Chemistry, Burlington House, Piccadilly, London, UK W1V 0BN. Tel: +44 (0)171 437 8656. Fax: +44 (0)171 734 1227
23-25: 6th International Symposium on Pharmaceutical and Biomedical Analysis, St. Louis, USA. Contact: Shirley Schlessinger, 400 East Randolph Street, Suite 1015, Chicago, Illinois 60601, USA. Tel: +1 312 527 2011
26-28: 6th International Symposium on Chiral Discrimination, St. Louis, USA. Contact: Shirley Schlessinger, 400 East Randolph Street, Suite 1015, Chicago, Illinois 60601, USA. Tel: +1 312 527 2011
May
3: New Techniques in Bioanalysis, Bradford, UK. Contact: A. J. Crooks, 'Cartref', 35 Queensbury Road, Salisbury, Wiltshire, UK SP1 3PH. Tel: +44 (0) 722 334974
7-11: 86th AOCS Annual Meeting & Expo, Texas, USA. Contact: AOCS Education/Meetings Department, P.O. Box 3489, Champaign, IL 61826-3489, USA. Tel: +1 217 359 2344. Fax: +1 217 351 8091
7-11: Seventeenth International Symposium on Capillary Chromatography and Electrophoresis, Virginia, USA. Contact: Dr. Milton L. Lee, Department of Chemistry, Brigham Young University, Provo, UT 84602-4672, USA. Tel: +1 801 378 2135. Fax: +1 801 378 5474
9-12: Metal Compounds in Environmental and Life Sciences: Analysis, Speciation and Specimen Banking, Jülich, Germany. Contact: H. W. Dürbeck, Institute of Applied Physical Chemistry, Research Center Jülich (KFA), P.O. Box 1913, D-5170 Jülich, Germany
16-18: Fourth International Conference on Progress in Analytical Chemistry in the Steel and Metals Industry, Luxembourg. Contact: R. Jowitt, British Steel plc, Technical, Teesside Laboratories, P.O. Box 11, Grangetown, Middlesbrough, Cleveland, UK TS6 6UB. Fax: +44 (0)642 460321
21-26: CLEO '95: Conference on Lasers and Electro-Optics, Baltimore, USA. Contact: Meetings Department, Optical Society of America, 2010 Massachusetts Avenue, NW, Washington, DC 20036-1023, USA. Tel: +1 202 223 9034. Fax: +1 202 416 6100
21-26: QELS '95: Quantum Electronics and Laser Science Conference, Baltimore, USA. Contact: Meetings Department, Optical Society of America, 2010 Massachusetts Avenue, NW, Washington, DC 20036-1023, USA. Tel: +1 202 223 9034. Fax: +1 202 416 6100
21-26: ASMS Conference on Mass Spectrometry, Atlanta, USA. Contact: ASMS, 815 Don Gaspar, Santa Fe, NM 87501, USA. Tel: +1 505 989 4517
28-2/6: 19th International Symposium on Column Liquid Chromatography, Innsbruck, Austria. Contact: HPLC '95 Secretariat, Tyrol Congress, Marktgraben 2, A-6020 Innsbruck, Austria. Tel: +43 512 575600. Fax: +43 512 575607
June
5-8: 5th Symposium on our Environment and 1st Asia-Pacific Workshop on Pesticides, Convention City, Singapore. Contact: The Secretariat, 5th Symposium on our Environment, c/o Department of Chemistry, National University of Singapore, Kent Ridge, Republic of Singapore 0511. Fax: +65 779 1691
July
9-15: SAC 95, Hull, UK. Contact: Analytical Division, The Royal Society of Chemistry, Burlington House, Piccadilly, London, UK W1V 0BN. Tel: +44 (0)71 437 8656. Fax: +44 (0)71 734 1227
10-13: Vth COMTOX Symposium on Toxicology and Clinical Chemistry of Metals, Vancouver, Canada. Contact: F. William Sunderman, Jr., M.D., Departments of Laboratory Medicine and Pharmacology, University of Connecticut Medical School, P.O. Box 1292, Farmington, CT 06034-1292, USA. Tel: +1 203 679 2328. Fax: +1 203 679 2154
August
20-25: 12th International Symposium on Plasma Chemistry, Minneapolis, USA. Contact: L. Graven, 315 Pillsbury Drive, SE, University of Minnesota, Minneapolis, MN 55455-0139, USA. Tel: +1 612 625 9023. Fax: +1 612 626 1623
27-2/9: CSI XXIX: Colloquium Spectroscopicum Internationale, Leipzig, Germany. Contact: GDCh-Geschäftsstelle, Abt. Tagungen, Varrentrappestr. 40-42, Postfach 90 04 40, D-6000 Frankfurt am Main 90, Germany. Tel: +49 69 791 7358. Fax: +49 69 791 7475
27-1/9: 46th Annual Meeting of the International Society of Electrochemistry (ISE46), Xiamen, China. Contact: Secretariat, XLVIth ISE Annual Meeting, P.O. Box 1995, Xiamen University, Xiamen 361005, China. Tel: +86 592 208 5349. Fax: +86 592 208 8054
27-30: EUROTOX, Prague, Czech Republic. Contact: Czech Medical Association J. E. Purkyně, EUROTOX '95, P.O. Box 88, Sokolská 31, 120 26 Prague 2, Czech Republic. Tel: +42 2 24 915195. Fax: +42 2 24 216836
September
10-14: Ion-Ex '95, The Fourth International Conference and Industrial Exhibition on Ion Exchange Processes, Wrexham, UK. Contact: Ion-Ex '95 Conference Secretariat, Faculty of Science, The North East Wales Institute, Connah's Quay, Deeside, Clwyd, UK CH5 4BR. Fax: +44 (0) 244 814305
25-28: 5th Symposium on 'Kinetics in Analytical Chemistry' (KAC '95), Moscow, Russia. Contact: Dr. I. F. Dolmanova, Analytical Chemistry Division, Chemical Department, Lomonosov Moscow State University, 119899 Moscow, Russia. Tel: +7 095 939 3346. Fax: +7 095 939 2579
October
1-5: 21st World Congress of the International Society for Fat Research (ISF), The Hague, The Netherlands. Contact: Mrs. J. Wills, ISF Secretariat, P.O. Box 3489, Champaign, IL 61826-3489, USA. Tel: +1 217 359 2344. Fax: +1 217 351 8091
November
5-10: OPTCON '95, San Jose, USA. Contact: Meetings Department, Optical Society of America, 2010 Massachusetts Avenue, NW, Washington, DC 20036-1023, USA. Tel: +1 202 223 9034. Fax: +1 202 416 6100
14-15: International Conference for Chemical Information Users, Manchester, UK. Contact: Dr. M. P. Coward, Chemistry Department, UMIST, P.O. Box 88, Manchester, UK M60 1QD. Tel: +44 (0) 61 200 4491. Fax: +44 (0) 61 228 1250
December
17-22: International Symposium on Environmental Biomonitoring and Specimen Banking, Hawaii, USA. Contact: K. S. Subramanian, Environmental Health Directorate, Health Canada, Tunney's Pasture, Ottawa, Ontario, Canada K1A 0L2. Tel: +1 613 957 1874. Fax: +1 613 941 4545
1996 January
8-13: 1996 Winter Conference on Plasma Spectrometry, Florida, USA. Contact: R. Barnes, Department of Chemistry, Lederle GRC Tower, University of Massachusetts, P.O. Box 34510, Amherst, MA 01003-4510, USA. Tel: +1 413 545 2294. Fax: +1 413 545 4490
February
6-9: Fourth International Symposium on Hyphenated Techniques in Chromatography (HTC 4); Hyphenated Chromatographic Analysers, Bruges, Belgium. Contact: Dr. R. Smits, Royal Flemish Chemical Society (KVCV), Working Party on Chromatography, BASF Antwerpen N.V., Central Laboratory, Haven 725, Scheldelaan 600, B-2040 Antwerp, Belgium. Tel: +32 3 561 2831. Fax: +32 3 561 3250
May
7-9: VIIth International Symposium on Luminescence Spectrometry in Biomedical Analysis: Detection Techniques and Applications in Chromatography and Capillary Electrophoresis, Monte-Carlo, Monaco. Contact: Prof. Dr. Willy R. G. Baeyens, University of Ghent, Pharmaceutical Institute, Department of Pharmaceutical Analysis, Harelbekestraat 72, B-9000 Ghent, Belgium. Tel: +32 9 221 8951. Fax: +32 9 221 4175
June
16-21: HPLC '96: 20th International Symposium on High Performance Liquid Chromatography, San Francisco, USA. Contact: Mrs. Janet Cunningham, Barr Enterprises, P.O. Box 279, Walkersville, MD 21793, USA. Tel: +1 301 898 3772. Fax: +1 301 898 5596
July
8-12: XVI International Congress of Clinical Chemistry, London, UK. Contact: Mrs. Pat Nielsen, XVIth International Congress of Clinical Chemistry, P.O. Box 227, Buckingham, UK MK18 5PN. Fax: +44 (0)280 6487
September
1-7: Euroanalysis IX, Bologna, Italy. Contact: Professor Luigia Sabbatini, Euroanalysis IX, Dipartimento di Chimica, Università di Bari, Via Orabona 4, 70126 Bari, Italy. Tel: +39 80 242020. Fax: +39 80 242026
15-20: 21st International Symposium on Chromatography, Stuttgart, Germany. Contact: GDCh-Geschäftsstelle, Abt. Tagungen, Varrentrappestr. 40-42, Postfach 90 04 40, D-6000 Frankfurt am Main 90, Germany. Tel: +49 69 791 7358. Fax: +49 69 791 7475
ISSN:0003-2654
DOI:10.1039/AN994190133N
Publisher: RSC
Year: 1994
Data source: RSC
|
7. |
Courses |
|
Analyst,
Volume 119,
Issue 10,
1994,
Page 138-138
Preview
|
PDF (73KB)
|
|
Abstract:
138N Analyst, October 1994, Vol. 119 Courses
1994 November
7-8: Short Course on LC/MS, SFC/MS and CE/MS, Montreux, Switzerland. Contact: M. Frei-Hausler, Workshop Office IAEAC, Postfach 46, CH-4123 Allschwil 2, Switzerland
21-22: Workshop on Micro Total Analysis (µTAS '94), Enschede, The Netherlands. Contact: Dr. A. van den Berg, University of Twente, MESA Research Institute, P.O. Box 217, 7500 AE Enschede, The Netherlands. Tel: +31 53 892 691. Fax: +31 53 309 547
December
15-17: Capillary Electrophoresis Short Course, Loughborough, UK. Contact: Mrs. S. J. Maddison, Department of Chemistry, Loughborough University of Technology, Loughborough, Leicestershire, UK LE11 3TU. Tel: +44 (0) 509 22575
1995 April
4-5: Workshop in Chemical Information Retrieval, Manchester, UK. Contact: Dr. M. P. Coward, Chemistry Department, UMIST, P.O. Box 88, Manchester, UK M60 1QD. Tel: +44 (0)61 200 4491. Fax: +44 (0)61 228 1250
4-7: Short Course on Chiral Resolution, Rome, Italy. Contact: Dr. S. Fanali, CNR, Istituto di Cromatografia, C.P. 10, I-00016 Monterotondo Scalo, Roma, Italy. Fax: +39 6 906 25 849
10: Education and Training of Chromatographers, London, UK. Contact: Dr. D. Simpson, Analysis for Industry, Factories 2/3, Bosworth House, High Street, Thorpe-le-Soken, Essex, UK CO16 0EA. Tel: +44 (0) 255 861714. Fax: +44 (0) 255 662111
July
17-19: Techniques Workshop (Chemometrics), Hull, UK. Contact: Dr. M. J. Adams, School of Applied Sciences, University of Wolverhampton, Wulfruna Street, Wolverhampton, UK WV1 1SB. Tel: +44 (0) 902 322141. Fax: +44 (0) 902 322680
September
6-8: 5th Workshop on Chemistry and Fate of Modern Pesticides, Paris, France. Contact: Professor M-C. Hennion, ESPCI, Labo. Chimie Analytique, 10 Rue Vauquelin, 75005 Paris, France
Entries in the above listing are included at the discretion of the Editor and are free of charge. If you wish to publicize a forthcoming meeting please send full details to: The Analyst Editorial Office, Thomas Graham House, Science Park, Milton Road, Cambridge, UK CB4 4WF. Tel: +44 (0)223 420066.
Fax: +44 (0)223 420247.
ISSN:0003-2654
DOI:10.1039/AN994190138N
Publisher: RSC
Year: 1994
Data source: RSC
|
8. |
Papers in future issues |
|
Analyst,
Volume 119,
Issue 10,
1994,
Page 139-140
Preview
|
PDF (249KB)
|
|
Abstract:
Analyst, October 1994, Vol. 119 139N Future Issues will Include:
Confirmatory Assay for the Simultaneous Detection of Five Penicillins in Muscle, Kidney and Milk using Electrospray Liquid Chromatography-Mass Spectrometry - W. John Blanchflower, S. Armstrong Hewitt and D. Glenn Kennedy
Automated Extraction of Acetylgestagens from Kidney Fat by Matrix Solid Phase Dispersion - Johan Rosen, Karl-Erik Hellenas, Paulina Tornqvist and Paula Shearan
Multi-residue Analysis for Beta-agonists in Urine and Liver Samples Using Mixed Phase Columns with Determination by Radioimmunoassay - Michael O'Keeffe, S. Collins and Malcolm R. Smyth
Effect of Potassium Iodide on Reducing the Adsorptive Interference of Surfactants and Organics in the Determination of Lead and Cadmium in Environmental Samples by Differential-pulse Anodic Stripping Voltammetry - R. S. Barratt and Y. Feng
Internal Quality Control of Analytical Data - Analytical Methods Committee
Spectrophotometric Determination of Uranium(IV) with Thorin and N-Hydroxy-N,N'-diphenylbenzamidine - R. K. Mishra and Neena Nashine
Ultraviolet Derivative Spectrophotometric Determination of Saccharin Artificial Sweeteners - Cristina D. Vianna-Soares and Jorge L. S. Martins
Determination of Volatile Phenols by a Flow Injection Chemiluminescent Quench Method - Hui-Sheng Zhuang, Fan Zhang and Qiong-E Wang
Determination of Trace Amounts of Acetic Acid Anhydride by Fluoride Isoconcentration Using a Cell Without a Liquid Junction Potential - M. R. O. Karim and N. Z. Ahmed
Design of a Primary Amine-selective Optode Membrane Based on a Lipophilic Hexaester of Calix[6]arene - Wing Hong Chan, Albert Wai Ming Lee and Kemin Wang
Optical Oxygen-sensing Materials Based on the Room-temperature Phosphorescence Intensity Quenching of Immobilised Erythrosin B - Marta Elena Diaz-Garcia, Nieves Velasco-Garcia and Rosario Pereiro-Garcia
Simultaneous Determination of Molybdenum and Tungsten Without Pre-separation by a Flow-injection System - Renmin Liu, Daojie Liu, Ai-Ling Sun and Guihua Liu
Determination of Aflatoxin B1 in Agricultural Commodities by Time-resolved Fluorimmunoassay and Immunoenzymometric Assay - M. A. Bacigalupo, A. Ius, G. Meroni, M. Dovis and E. Petruzzelli
Capillary Zone Electrophoresis After Complexation With Aminopolycarboxylic Acids - A. R. Timerbaev, Olga P. Semenova and Gunther K. Bonn
Comparison of Sample Preparation Methods for the Spectrophotometric Determination of Phosphorus in Soils and Coal Fly Ash - Johanna M. Smeller
Gas Chromatography-Negative-ion Chemical Ionization Mass Spectrometry of Hydrolysed Human Urine and Blood Plasma for the Biomonitoring of Occupational Exposure to 4,4'-Methylenedianiline - Per Brunmark, Marianne Dalene and Gunnar Skarping
Selenium Speciation: a Flow Injection Approach Employing On-line Microwave Reduction Followed by Hydride Generation-Quartz Furnace Atomic Absorption Spectrometry - Steve J. Hill, Les Pitts and Paul J. Worsfold
Comparison of Reflux and Microwave Oven Digestion for the Determination of Arsenic and Selenium in Sludge Reference Material Using Flow Injection Hydride Generation and Atomic Absorption Spectrometry - Rajananda Saraswati, Thomas W. Vetter and Robert L. Watters Jr.
Selective Piezoelectric Sensors Using Polymer Reagents - T. C. Hunter and Gareth J. Price
Determination of Cadmium and Lead in Vegetables After Activated-carbon Enrichment by Atomic Absorption Spectrometry - Şeref Güçer and Mehmet Yaman
Specification of Colour Changes of Complexometric Indicators in the Titration of Zirconium with Ethylenediaminetetraacetic Acid - K. M. M. Krishna Prasad, P. Vijayalakshmi and C. Kamala Sastri
Spectrophotometric Determination of Silver and Gold with 5-(2,4-Dihydroxybenzylidene)rhodanine and Cationic Surfactants - M. T. M. Zaki, F. M. El-Zawawy, M. F. El-Shahat and A. A. Mohamed
COPIES OF CITED ARTICLES
The Royal Society of Chemistry Library can usually supply copies of cited articles. For further details contact: The Library, Royal Society of Chemistry, Burlington House, Piccadilly, London W1V 0BN, UK. Tel: +44 (0)71-437 8656. Fax: +44 (0)71-287 9798. Telecom Gold 84: BUR210. Electronic Mailbox (Internet): LIBRARY@RSC.ORG. If the material is not available from the Society's Library, the staff will be pleased to advise on its availability from other sources. Please note that copies are not available from the RSC at Thomas Graham House, Cambridge.
ISSN:0003-2654
DOI:10.1039/AN994190139N
Publisher: RSC
Year: 1994
Data source: RSC
|
9. |
Tutorial review. Object oriented programming on personal computers |
|
Analyst,
Volume 119,
Issue 10,
1994,
Page 2149-2160
Richard G. Brereton,
Preview
|
PDF (1922KB)
|
|
摘要:
Analyst, October 1994, Vol. 119 2149 Tutorial Review
Object Oriented Programming on Personal Computers
Richard G. Brereton, School of Chemistry, University of Bristol, Cantock's Close, Bristol BS8 1TS, UK
A historical introduction to the development of laboratory computing and object oriented programming (OOP) is given, with an emphasis on the scientific user. General features of OOP, such as objects, classes, hierarchies, inheritance, encapsulation, messages and events, hiding, revealing and enabling objects, and dynamic creation of objects, are described and illustrated by screen layout and control. Two major programming environments, namely Toolbook and Visual BASIC, are described. The importance of attaching conventional, numerical code to objects is discussed, and the particular needs of numerically and graphically intensive laboratory scientific computing are emphasized. An example of the potential application to analytical chemistry, namely the display of diode-array HPLC data, is illustrated. Recommendations include balancing OOP with numerical programming.
Keywords: Laboratory computing; object oriented programming; software; personal computer
Introduction
Over the last few years, there has been a revolution in laboratory computing. Benchtop microcomputers are standard in all modern laboratories, and instrumental data are rarely captured directly, but generally via analogue-to-digital converters (ADCs). Coupled with increased acceptance of the computer interface, there has been a revolution in microcomputer architecture. A 486 computer, with 4 Mb RAM, a 200 Mb hard disk, an integrated maths coprocessor and a super VGA (SVGA) monitor, is routinely available and modestly priced, and has the power of a departmental computer of 5 years ago. Hand-in-hand with the phenomenal explosion in cheap computing power has come a revolution in programming methods.
Historic Computing
In the 1950s, computers were primarily regarded as large calculating machines.
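The OOP features the abstract enumerates (classes, hierarchies, inheritance, encapsulation and messages) can be sketched very briefly. Python is used here purely for brevity, not the Toolbook or Visual BASIC environments the review actually covers, and the class and method names are invented for illustration, not drawn from the article.

```python
# Minimal sketch of the OOP concepts listed in the abstract:
# classes, inheritance, encapsulation and messages.
# The names Instrument/DiodeArrayDetector are hypothetical examples.

class Instrument:
    """Base class: state is encapsulated behind methods."""

    def __init__(self, name):
        self._name = name        # conventionally 'hidden' attribute
        self._readings = []      # data the object owns

    def record(self, value):     # a 'message' the object responds to
        self._readings.append(value)

    def summary(self):
        return f"{self._name}: {len(self._readings)} readings"


class DiodeArrayDetector(Instrument):
    """Derived class: inherits record()/summary(), adds behaviour."""

    def __init__(self):
        super().__init__("diode-array HPLC detector")

    def spectrum_count(self):
        return len(self._readings)


det = DiodeArrayDetector()       # dynamic creation of an object
det.record([0.1, 0.4, 0.2])      # each 'reading' could be a spectrum
print(det.summary())
```

The hierarchy lets shared behaviour live in the base class while each sensor or display type specializes only what differs, which is the organizing idea behind the environments the review describes.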
Languages were oriented towards performing repetitive calculations, and users were primarily programmers, normally mathematicians. Major users came from defence establishments rather than analytical laboratories. Most computers were large mainframes. Issues of user-friendliness or of increasing the user base did not arise. Early programmers used machine code, assembler and languages such as FORTRAN IV, all of which required some appreciation of how the computer operates and an ability to formulate problems in a manner directly interpretable by the computer. Large multiple-user operating systems, controlled through languages such as JCL (Job Control Language), were also developed. Input in the form of punched cards and paper tape was a standard feature, and language specifications were often woven around these limitations. In the 1960s and 1970s, vast numerical subroutine libraries in areas such as crystallography, quantum mechanics and spectral simulations were developed. Despite the widespread change to microcomputers and the evolution of programming languages, users still wish to access these old libraries. So, although the earliest languages, FORTRAN, ALGOL and COBOL, were rapidly superseded by a vast number of new languages in the 1970s, some, although excellent theoretically, such as PL/1, have not survived because of limited compatibility with previous languages. Most often, what has happened is that existing languages and approaches have evolved almost beyond recognition but, nevertheless, keep compatibility with earlier versions. BASIC is a good example: originally developed as a language for instructing students in colleges, it is now one of the most widespread and well supported professional languages. Parallel to this evolution in programming languages is an evolution in hardware.
Probably the biggest first step was the replacement of punched card and paper tape input and lineprinter output with VDU (visual display unit) and teletype input/output, which took place in the late 1970s and early 1980s. This gave the user enormously enhanced power for interacting with the computer, and sparked off the use of large computers for word processing and for graphics. Early word processing packages ran not on microcomputers but on mainframes, and special text processing languages were developed, often involving the insertion of sophisticated commands into text, which were then interpreted by the computer as formatting statements.
Microprocessors
In the early 1980s another revolution, that of the microprocessor, was taking place. Interestingly, many of the present uses of micros were not predicted in the early days. The use of micros as machine interfaces, often linked to a more powerful processor, was common, as was the use of micros as terminals to larger machines. The early micros, such as the BBC or the SIRIUS, often had elegant operating systems and programming environments but were substantially limited in memory and power. Parallel to the introduction of micros was the introduction of word processors. Initially, companies such as Xerox marketed stand-alone word processors aimed primarily at secretaries, and there was a clear distinction between the administrative and scientific markets. The word processors aimed to be easy to use but were very limited in facilities. The merging of user-friendliness and scientific needs came about for several reasons. The first was the dramatic fall in the price of microprocessors. This permitted the development of the personal computer (PC), whereby an individual possessed a micro in his or her office, home or laboratory, rather than sharing a departmental or office one. The second was the launch of the IBM PC in the early 1980s.
This computer was not particularly powerful, and used an archaic operating system called DOS, originally based on an earlier system called CP/M. However, IBM's marketing strategy was very strong, and another major advantage was the rapid development of software packages for the operating system. Hence IBM PCs and their clones rapidly dominated the market, and by the mid-1980s were the standard personal computer. Despite this, IBM PCs and their clones were not very easy to use, and superior operating systems, such as that developed for Apple, allowed the user far more flexibility. Although Apple Macintoshes had operating systems based on utterly different principles, and a long pedigree, their penetration into the market was initially less intense. However, the use of Apples as word processors and for various other purposes such as desktop publishing and good graphics eventually became established and, when pricing policy changed towards the late 1980s, Apple Macintoshes were strong competitors to IBM PCs and their clones.

Mouse- and Icon-driven Software

Although mouse- and icon-driven software was first marketed by Apple, these ideas had a common origin in the early 1980s and, in parallel, a joint venture between Microsoft and IBM was set up to try to develop a similar approach for IBM PCs. In computing, ideas are often many years ahead of prototypes and open-market products. The success of the Apple provided further impetus for improvement of the IBM PC operating environment. Most users strongly favour visual, rather than keyboard-driven, software. Early computers were controlled by typewriter-like keyboards. Many big firms employed punched card operators, who were in practice good typists, to enter programs and data, but the programmers had limited direct contact with computers. As direct input became more popular, control was still strongly oriented around typewriter-like input, whether as part of a teletype or the keyboard to a microprocessor.
This approach appeals to people with analytical minds, who tend to make successful programmers and mathematicians, but not to the majority of the population, whose minds are more synthetic and who naturally pick up skills, such as driving a car or learning to use video games, without first understanding technical details. Secretaries, draughtsmen, graphic designers, video editors and the like outnumber mathematical scientists manyfold, and provide a much larger user base. Even among scientists, many struggle with numerical and computational concepts, but find other forms of control much easier. Simple visual control using mice, press buttons and the like is a simple and natural approach to computer control. A car, for example, has very few controls, but can perform extremely complex functions, navigating its way around complex objects and roads. Far more people can drive cars than can write programs in FORTRAN. The car drivers represent a larger market than the FORTRAN programmers. Hence, as IBM's market share flagged relative to Apple, more emphasis was placed on a simple window-type environment. The original collaboration between IBM and Microsoft broke down, and there were two separate developments. Windows, released by Microsoft, was originally a visual 'add-on' to DOS. As such, it had many limitations. The user had to have some understanding of keyboard-type DOS commands, so could not entirely escape this environment, in contrast to the Apple user. Major problems of memory management still remained: this is especially important for scientific applications, where large arrays of data (e.g., a diode-array chromatogram) have to be handled quickly, using available memory effectively. The untrained programmer would have access to only 640K of memory even if the computer had several megabytes available, although, of course, with a careful knowledge of the system, these problems can be overcome. IBM, in parallel, developed the OS/2 operating system.
This is technically a superior system, and overcomes many of the problems of DOS, but because of considerable bugs in the system, because it was released later and because far fewer applications were developed for this operating system, it has not gained the popularity of Windows. At the time of writing, Microsoft is probably the largest software house in the world and Windows-type applications are certainly much more widespread than those for OS/2. The 386 and 486 PCs now routinely come with the Windows operating system. In this paper, we shall restrict examples to software that runs under Windows.

Object Oriented Programming

A major advantage of Windows is that it is an 'object oriented' operating system. This approach is far more natural to the user. With the effective use of object oriented programming (OOP), it is possible to develop many ideas and applications more rapidly and more naturally than via conventional programming. It is also possible to provide the user with a far more user-friendly interface at minimal effort. The old concept of a computer program is a 'linear' routine, starting at the beginning and ending at the end. The user enters the first statement of the main routine and is drawn along the program, often making a number of choices, until the program finishes. This approach contrasts with that of a word processing package: the screen clears and the user is generally given a choice of options and a clear portion of the screen, which he/she can use in any combination and any order, and make a choice at any time to leave the program and save the text. He/she can also start up the program and enter text at any future stage: the 'program' has remembered all previous user input and commands, as reflected in the resultant file on disk. OOP does not fix the order in which the user performs operations.
In Windows, a user is confronted with a screen with various icons, menus and windows and he/she can decide to click any part of the screen at any time and in any order (if the 'object' is visible). In DOS, the user types a series of commands in a defined order, mimicking a conventional program. Hence the widespread use of Windows and related operating systems introduces users in an intuitive way to object oriented computing. The widespread acceptance of these systems over conventional approaches signals a revolution in how people interact with and use computers. Object orientation, though, is also a fundamental approach to programming developed over many years by computer scientists. It is intuitively easy to conceive of an object. Consider, for example, a square. This square could be of any size. It has many properties, such as the position of its centre on the screen, the length of its sides, its pictorial representation, its area and its perimeter. It is natural to think of a square, and rely on the computer to calculate the properties of the object automatically. All squares are members of a general class of objects called 'squares'. A specific square is a specific object. Contained within each class member is information about the nature of the specific object, such as, in the example of squares, the position on the screen, the area and perhaps information relating to its presentation (e.g., colour). OOP deals, primarily, with objects. Contrary to impressions given by some, OOP has been around for very many years. In computer science, it is often hard to define the birth of an idea. Is it the first paper that proposes an idea, or is it the first, small prototype, implemented by a graduate student? Or does a formal language have to be defined (this may be many years before the language is generally available)? Finally, does a public domain package have to appear in the marketplace?
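The square discussed above can be sketched as a small class. This is a minimal illustration in modern C++ (a later dialect than the compilers of this article's era); the class and member names are invented for illustration, not taken from any package:

```cpp
#include <cassert>

// Hypothetical 'Square' class: each object carries its own centre position
// and side length, and the computer calculates the derived properties
// (area, perimeter) automatically, as described in the text.
class Square {
public:
    Square(double centreX, double centreY, double side)
        : centreX_(centreX), centreY_(centreY), side_(side) {}

    double area() const { return side_ * side_; }     // derived automatically
    double perimeter() const { return 4.0 * side_; }  // from the side length
    double centreX() const { return centreX_; }
    double centreY() const { return centreY_; }

private:
    double centreX_, centreY_, side_;  // properties of this specific object
};
```

Each `Square` constructed from this class is one specific object; the class itself describes what all squares have in common.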
A very early approach to OOP was Simula 67, developed by the Norwegian Computer Centre in 1967 and used for simulations. The language Smalltalk, developed by Xerox, was also available in the 1970s, and many original ideas of OOP were first developed using this package. For many years OOP was of considerable scientific interest, but public acceptability only comes when there are generally available approaches that can be easily accessed and implemented. The public acceptance of OOP has depended very strongly on user-friendly implementations under Windows and on the Macintosh.

C++

In the mid-1980s, the object oriented language C++ was developed. This is an extension of the programming language C, which has been much used for developing computer systems over at least a decade, once closely linked to the Unix operating system. Because of the growth of interest in C, several companies took on the task of developing C++ compilers, with associated documentation, and over the last 3-4 years C++ has been of major interest to computer scientists, many of whom come across object orientation first through C++. It is a mistake, however, to equate OOP with C++, and OOP will probably develop long after C++ disappears. A problem with C and the associated C++ is that the languages are hard to learn, requiring a strong understanding of how a computer works. A key to C++ is the development of classes of objects. Large teams of programmers can develop 'class libraries'. These have some analogies to conventional subroutine libraries in languages such as FORTRAN, but a class is more than a subroutine. It could, for example, be a graph, a vector or a window. Classes are discussed in greater detail under Classes. Over the past few years, substantial class libraries have been established, allowing the definition of a large number of basic objects.
It would, of course, be possible to develop very powerful scientific software using C++, but much depends on the time and effort that can be put into any individual project. A team of ten programmers wishing to develop a major public domain software product such as a graphical/statistical package, with a market in the tens or hundreds of thousands of copies, may well wish to work in C++. Objects and classes can be highly customized, and so modular and very robust software can be constructed. However, constructing each class and giving it properties can take time. Consider constructing a class of vectors. In a conventional programming language, it takes a few lines to multiply two vectors together. If the size of the vectors is wrong, then the compiler gives an error message and the user or programmer has to change the code or operation. All this error checking can be performed internally in a C++ class of vectors. It is also possible to define operators such as vector multiplication in C++, so customizing specific operators between class members, yet setting up the class and documenting it may be several days' work or more. Setting up a hierarchy of 20 classes may be many months' work, with only a marginal effect on the efficiency of developing software subsequently, unless the frequency of use of the classes is large. If, for example, it takes 5 min to write a loop to multiply two vectors together (or develop a FORTRAN subroutine), but 2 min to multiply two classes together, then this saving of 3 min of programmer time per vector multiplication must be balanced against many hours defining the class of 'vectors'. If a team of programmers is constantly writing software involving the multiplication of vectors, then this approach may save time in the long run. For an individual programmer, it is unlikely that the time and effort spent on developing class libraries are worthwhile in most practical scientific situations.
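The vector class described above might be sketched as follows. This is a minimal modern-C++ illustration of the idea, not code from any actual library: the size checking is performed once, inside the class, and multiplication is exposed as a customized operator:

```cpp
#include <stdexcept>
#include <utility>
#include <vector>

// Hypothetical 'Vec' class of vectors: all error checking is encapsulated
// inside the class, so every caller of operator* gets it for free.
class Vec {
public:
    explicit Vec(std::vector<double> v) : data_(std::move(v)) {}
    std::size_t size() const { return data_.size(); }

    // Multiply two vectors (a dot product). The length check the text
    // describes happens here, internally, rather than at every call site.
    double operator*(const Vec& other) const {
        if (size() != other.size())
            throw std::invalid_argument("Vec: sizes do not match");
        double sum = 0.0;
        for (std::size_t i = 0; i < size(); ++i)
            sum += data_[i] * other.data_[i];
        return sum;
    }

private:
    std::vector<double> data_;  // internal representation, private to the class
};
```

Once the class exists, `a * b` is a one-line operation for every programmer on the team; the cost is the up-front effort of writing and documenting `Vec` itself.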
There are virtues in the use of C++ for system development, for large projects, and as an instructional language, and much recent literature on OOP does, indeed, refer to C++. There can be immense benefits in producing classes of, for example, graphs, menus, scrollbars, etc., that may be employed by millions of users, but most practical scientific developments are on a smaller scale. Software development must take into account the time required to produce working code, especially in a scientific environment. A recent development, Visual C++, has made C++ programming somewhat easier, but is likely to be superseded by Visual BASIC.

Windows Programming

Parallel to the spread of C++ has been the development of Windows, not only as an operating system but also as a powerful programming environment. Most Windows programs are written in C with specialized Windows commands added on. Windows has many important features. The key is that the screen is divided into regions, defined by a class of objects called windows. The hierarchy of objects is important: this is discussed in greater detail under Hierarchies and Inheritance; however, the main concept is that some objects are derived from and, therefore, relate to other objects. A 'parent' window normally has several 'child' windows; these are small windows often within a large window, which may, in turn, be another window with similar properties to the parent, but may also be a specialized object with various properties: dialogue boxes, scroll bars, lists and icons are all examples of child windows. Most events (see Messages and Events), such as moving a mouse or typing at a keyboard, are captured by these child windows. For example, a child window may display a list of files. Once the user has selected the required file, by moving the mouse down the box and selecting the appropriate filename, the child window often disappears, but the filename is passed on to the parent.
Child windows do not always have to be physically enclosed in a parent window, but may be moved to other portions of the screen, for clarity. Child windows do, though, contain information about their parent window, so control and/or information passes up the hierarchy if necessary. The main methods for controlling applications in Windows are menus, buttons and dialogue boxes. It is important to distinguish between an image or icon and an actual object. Consider, for example, a 'button'. In its simplest form, this is a rectangular, shaded region of the screen. When the mouse enters this region and is depressed, an event occurs, which then activates code buried within the button. One property of a button is a caption, which can be displayed on the button. A caption might be 'PCA', which could indicate that depressing the button results in a principal components analysis calculation. However, an alternative would be to associate the button with a picture. This could be produced in any graphics package; the Windows operating system comes with a 'Paintbrush' facility, but there are also a very large number of professional drafting packages. The picture is placed on top of the button. Next, imagine that the caption is made invisible and the colour of the button is the same as the colour of the parent window. All the user sees is the picture, or icon, and it appears to the user that pressing the icon (or the immediate area next to it) activates the PCA program. The picture, itself, is not an object, any more than the text on the button is, but is associated with the button (it is, of course, also possible to produce picture objects, but this is conceptually different from simply using a picture to label an object). In practice, most of the low-level Windows programming has already been done, and many modern packages such as Visual BASIC and Toolbook (see below) present the programmer with ready-defined objects.
Hence there is now very little need for programming in Windows unless a very specialized package is required. A few years ago, of course, such packages were not available, and Windows programming was more common.

The Analytical Chemist's Needs

Very few analytical chemists need to know how to program in C++ or Windows, because most of the interface has already been taken care of in packages such as Visual BASIC and Toolbook. Although C++ and Windows are immensely important environments for programmers developing new systems and have a continuing importance, the analytical chemist is working at a 'higher level' in the hierarchy. An important principle of OOP is that there is a hierarchy of objects and programming environments. At the bottom of the hierarchy comes the direct machine interface and at the top level highly packaged software. The programmer should not need to know details of programs at a lower level in the hierarchy. When designing software, it is important to determine at which level of the hierarchy the programmer is operating. There is no requirement for the analytical chemist to develop graphics programs from first principles, or to understand the underlying principles of memory management. He/she wishes to add an extra layer above Windows. Modern OOP is an exceptionally powerful approach that can revolutionize the way analytical chemists think about and program computers. The emphasis in this paper will be on implementations on PCs under Windows, although from the computer scientist's point of view there is no specific link between OOP and Windows. There will be emphasis on approaches more likely to appeal to the chemist rather than the computer scientist, hence the description of C++ is limited, despite the considerable interest in the specialized computing literature.
General Principles of OOP

Objects

One of the first difficulties is to define what objects are. In the theoretical computer science literature there is much discussion of what is the true definition of an object and what is true OOP. It is probably unnecessary, at first, to read this literature, and an understanding is best gained by hands-on programming. However, before designing object oriented software, it is essential to try to determine what objects should be used, in advance of developing any code. Two examples will suffice. The first is to design the screen layout for a simple program. We might want a command button which starts an action, a menu which selects various options and a graph on the screen. Pressing the button then opens up the menu, which allows the user to make a decision, which performs a calculation and plots the graph. Before writing the program, it is important to design the screen (Fig. 1). Three objects can be placed on the screen, each with its own properties. In conventional programming, the 'control' or input/output is often the last thing to be thought of, in contrast to OOP. A second example is performing principal components analysis (PCA) on a set of spectra. One guideline is to write out the desired procedure in words, and pick out keywords. A short description is as follows. PCA is performed on a set of spectra, which then yields eigenvalues, scores and loadings. Four objects can be set up, each with its own properties. Classes of objects can be characterized by their properties. Associated with each type of object is a property list. For example, we may wish to define the properties of the classes of objects defined above.
Each set of spectra may have the following properties: (1) the number of spectra in the set; (2) the wavelength resolution of the spectra; (3) the intensity units; and (4) the number of wavelengths in each spectrum. Eigenvalues can have certain properties: (1) whether the original data were mean-centred or not (variance or sum of squares); (2) whether the eigenvalues are a pure sum of squares, divided by the number of objects or divided by the number of objects - 1; and (3) which eigenvalue (each principal component has an eigenvalue attached to it). Similar property lists can be attached to loadings and scores. These properties are the key to the interpretation of each class of objects. An eigenvalue is a number, but it is of very little use telling the user of a package that the value of an eigenvalue is 1.76. This number must be interpreted in the context of the original problem, and the properties of the eigenvalue are essential for a meaningful understanding of the significance of each individual number. Hence, associating properties with objects strongly aids interpretation. OOP emphasizes program design and setting up structures that the user really wants. It is hard to write object oriented programs without first having an overview of why the program is being written, and so understanding the problem in general, first. It also emphasizes interaction with the outside world. The user of software wants to provide the software with information and to obtain information from the program. This 'interface' with the user is often regarded as of secondary importance by conventional programmers, yet in practice the acceptance of computers as essential tools depends almost entirely on this interface.

Classes

Classes are groups of objects with similar properties. Such concepts are easy to understand. A class may be a graph or a menu or a button or even a vector or a principal component. The specific occurrence of an individual class member is an object.
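The property list for a set of spectra can be sketched directly as a class. This is a hypothetical modern-C++ sketch (the name `SpectraSet` and its members are invented for illustration): the four properties listed above become data members, so every object carries the context needed to interpret its own numbers:

```cpp
#include <string>

// Hypothetical 'set of spectra' class: the four properties from the text
// travel with the object, so the data are never just anonymous numbers.
struct SpectraSet {
    int nSpectra;                // (1) number of spectra in the set
    double resolutionNm;         // (2) wavelength resolution
    std::string intensityUnits;  // (3) intensity units
    int nWavelengths;            // (4) number of wavelengths per spectrum

    // The class, not the caller, knows how large the underlying data block is.
    int totalDataPoints() const { return nSpectra * nWavelengths; }
};
```

An eigenvalue class could be sketched in the same way, carrying its mean-centring and scaling conventions alongside the bare number.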
In modern OOP, this is an immensely important feature. In languages such as Visual BASIC, the programmer only has to point to a class type using a mouse and an icon, drag it to the desired position on the screen and, if required, define properties such as colour, shape and reaction to the user.

Fig. 1 A typical layout of a screen, showing various objects. A, screen; B, button; C, menu; D, graph.

Hierarchies and Inheritance

It is possible to construct hierarchies both of classes and of objects. A typical screen may consist of the background, a number of windows and, within each window, a number of icons (Fig. 2). The background is a higher member of the hierarchy than the windows, and the lowest members are the icons. It is possible to rearrange the information in a tree structure (Fig. 3). When the user interacts with the computer, he/she may move a mouse around various areas of the screen and then depress the mouse. The actions taken will depend on where the mouse is on the screen. In a hierarchical environment, events lower down the hierarchy will only apply to objects immediately at and below the level of the event. A mouse centred on an icon will apply only to the icon, whereas events activated from the window will influence all objects within the window. Setting up such a hierarchy of objects is essential for control of the software. It is also possible to set up a hierarchy of classes. In Fig. 4, we illustrate a hierarchy where the top level is a matrix, the next a vector and the next scores and loadings. In this case properties are 'inherited'. A matrix has several general properties, such as dimensions and number of datapoints, that are shared by vectors. In addition, vectors have their own specialized properties, as do scores and loadings, which have further properties. Inheritance of properties is possible in hierarchies.
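The class hierarchy of Fig. 4 can be sketched in modern C++ as follows. The class names follow the figure, but the specific members are invented for illustration: general properties live in the matrix class and are inherited down the hierarchy, while each level adds its own specialized properties:

```cpp
#include <cstddef>

// Hypothetical hierarchy following Fig. 4: Matrix at the top, Vector below
// it, and Scores (for one principal component) below that.
class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols) : rows_(rows), cols_(cols) {}
    std::size_t dataPoints() const { return rows_ * cols_; }  // general property
protected:
    std::size_t rows_, cols_;
};

class Vector : public Matrix {  // a vector treated as a one-column matrix
public:
    explicit Vector(std::size_t length) : Matrix(length, 1) {}
    std::size_t length() const { return rows_; }  // specialized property
};

class Scores : public Vector {  // scores vector for one principal component
public:
    Scores(std::size_t nSamples, int component)
        : Vector(nSamples), component_(component) {}
    int component() const { return component_; }  // which component it belongs to
private:
    int component_;
};
```

A `Scores` object inherits `dataPoints()` from `Matrix` and `length()` from `Vector` without restating them; a `Loadings` class could be derived from `Vector` in exactly the same way.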
In practice, the ideas of hierarchies of objects and inheritance are most used in languages such as C++, but are less useful in Visual BASIC or Toolbook, as described below, where hierarchies are generally of objects as opposed to classes.

Fig. 2 A hierarchical organization of objects on the screen, with (A) the background at a higher level in the hierarchy than (B) the windows, which, in turn, are at a higher level than (C) the icons.

Fig. 3 Organization of Fig. 2 as a hierarchical tree diagram.

Encapsulation

This is another essential concept of OOP. Each object has information and properties 'private' to the outside world. If a button is displayed on the screen, the user is unlikely to be interested in the inner workings of the button. His/her main interest is in obtaining actions and information when he/she enters or depresses the button on the screen. Hence programs can be incorporated within an object. The conventional programmer is likely to find this concept hard to appreciate. Instead of a main program, code that is in some ways analogous to subroutines or procedures is attached to each object. For example, an icon can be placed on a screen. A desired action when clicking on the object using a mouse may be to perform a Fourier transform on a preselected file. The details of Fourier transformation are not of interest to the user. All that is required is to be able to activate the Fourier transform routines. The object, of course, can also produce new disk files, graphs, diagnostic information and so on. It might be that the user has to go to another object (for example, a graph-plotting object) to obtain this information, in which case messages or data need to be exchanged between objects. However, the code that does this is an integral part of the object.
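Encapsulation of this kind can be sketched as follows. This is a hypothetical modern-C++ illustration (for simplicity the internal routine is a mean-centring step standing in for the Fourier transform of the text): the object hides its working data, and the only public action is the handler activated by clicking:

```cpp
#include <utility>
#include <vector>

// Hypothetical button object: the processing routine is buried inside the
// object; the user only activates it via click() and reads the result.
class ProcessButton {
public:
    explicit ProcessButton(std::vector<double> data) : data_(std::move(data)) {}

    // The event handler attached to the object: the only public action.
    void click() {
        double mean = 0.0;
        for (double v : data_) mean += v;
        mean /= static_cast<double>(data_.size());
        for (double& v : data_) v -= mean;  // internal processing, private detail
        processed_ = true;
    }

    bool processed() const { return processed_; }
    const std::vector<double>& result() const { return data_; }

private:
    std::vector<double> data_;  // 'private' to the outside world
    bool processed_ = false;
};
```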
It is not the only property of the object, which will also have a position on the screen, a colour, a size, a pictorial representation and so on, but, for the scientific programmer, it is important to attach numerical code to objects, and to structure most of the programs around individual objects. This should result in highly modular software. The idea of a single large piece of code that starts at the beginning and finishes at the end has disappeared.

Messages and Events

Control of software is largely via events. An event may involve double clicking a mouse, or typing at a keyboard, or starting to acquire data from an instrument, or finishing a PCA calculation, or the passage of 0.1 s. As each event occurs, we want to pass information and take action. Events are often interpreted hierarchically. For example, consider the event of clicking a mouse. If there is a hierarchy of icon, window and screen, the event is interpreted according to the 'lowest level' of the hierarchy. If the mouse is centred in the region of an icon, it activates the icon. If it is in a window but not an icon, it takes actions according to code associated with the window. If the mouse is not in a window, the event of clicking a mouse is interpreted by the background. Each object has event handlers. There are several possible events, such as depressing a mouse, double clicking a mouse, pushing down the button on a mouse, entering the part of the screen with the mouse and typing from the keyboard. With pen entry computing and touch screen computing, the number of possible events can become large.

Fig. 4 Possible hierarchical organization of chemometric objects.
Some are user controlled, but others may be computer controlled; for example, one object may 'activate' another object: if a principal components calculation has finished, it may then result in a graph being drawn without user intervention. Some objects do not have associated event handlers. For example, some text may be displayed on the screen. The object simply provides information and does not react directly to mouse events. In such cases, actions are taken at the first level up the hierarchy that contains an event handler for the appropriate event. For example, in Fig. 5, action A is taken if an event occurs in object 1, action B for objects 2 and 4, but no action for objects 3, 5 and 6. Note that if there is no event handler for a given action at higher levels of the hierarchy, the event is ignored. This is an immensely powerful facility. For example, it can allow certain parts of the screen to accept keyboard text, whereas other parts of the screen are inactive to keyboard input. Messages can also be used to control numerical routines, for example by signalling that a given calculation is finished or that the result of a test is negative. Time interval events are particularly useful for numerical calculations. It is distracting to the user if the computer suddenly becomes 'inactive' when a large calculation is being performed in the background. A timer allows the computer to perform the calculation for a few milliseconds, then look for keyboard or mouse input and then, if there is no screen input, return to the calculation. Careful use of timers makes programs appear fast.

Hiding, Revealing and Enabling Objects

For screen-based OOP, this feature allows elaborate programs. A typical program may involve several hundred objects, all with different properties. A screen with all objects simultaneously displayed is hardly likely to appeal to the user. It is normally necessary for the majority of objects to be invisible most of the time.
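The hierarchical interpretation of events described above can be sketched in a few lines. This is a hypothetical modern-C++ illustration (the names `ScreenObject` and `dispatch` are invented): an event is offered to the object it occurred in; if that object has no handler, it is passed up the hierarchy, and ignored if no ancestor handles it either:

```cpp
#include <functional>
#include <string>

// Hypothetical screen object: a parent pointer builds the hierarchy, and an
// optional handler returns the name of the action taken.
struct ScreenObject {
    ScreenObject* parent = nullptr;
    std::function<std::string()> handler;  // empty = no event handler attached
};

// Dispatch an event occurring in 'obj': the lowest level with a handler
// takes the action; with no handler anywhere, the event is ignored.
std::string dispatch(const ScreenObject* obj) {
    for (; obj != nullptr; obj = obj->parent)
        if (obj->handler) return obj->handler();
    return "";  // no handler up the whole hierarchy: event ignored
}
```

In the Fig. 5 situation, an event in a handler-less child object would thus be answered by the handler of its ancestor, exactly as the text describes.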
Consider a program to perform simple multivariate analysis on a dataset. It might be desirable that the program starts up with a menu consisting of several choices, such as 'Open File', 'View Data', 'Preprocess' and 'PCA'. Activating each of these menu items (= objects) reveals further objects. 'Open File' might reveal a list of available files. This new file list object is made visible when certain events are activated via the 'Open File' object. When a file is selected, the file list can be made invisible again. Objects are normally revealed by events catalysed by other objects. If an object is invisible, it cannot be activated by the user. Some OOP packages distinguish between objects that are 'visible' and objects that are 'enabled'. If an object is not enabled, although it can be seen on the screen, it cannot be activated. Showing objects, but inactivating them, is a good way of keeping control. For example, there is no point in activating 'View Data' if 'Open File' has not been activated, but it may be a good idea to show the menu item at the top of the screen.

Fig. 5 Objects (1-6) hierarchically organized; event handlers A and B are associated with objects 1 and 4.

Scientific software differs from many common applications such as word processing in that there is a need for some events to be performed in sequence. A completely object oriented approach, allowing the user to perform an action in any order, would be impossible. An approach whereby hundreds of variables are stored according to the progress of the calculation is inelegant. An approach whereby various actions are simply turned on and off according to whether they are appropriate or not is much easier to control. In many practical situations it is not always desirable to return to the beginning when repeating a calculation, and very careful software construction is necessary to keep a sensible control mechanism.
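Turning actions on and off in this way can be sketched as follows. This is a hypothetical modern-C++ illustration of the multivariate-analysis menu above (the `Menu` structure and its message method are invented): each menu item carries an 'enabled' flag, and completing one action switches the flags of the others:

```cpp
#include <map>
#include <string>

// Hypothetical menu for the multivariate analysis program: items are shown
// but disabled until an appropriate prior action has been completed.
struct Menu {
    std::map<std::string, bool> enabled{
        {"Open File", true},  // the only sensible action at start-up
        {"View Data", false},
        {"Preprocess", false},
        {"PCA", false}};

    // Message received when an action completes; enables what is now allowed.
    void completed(const std::string& action) {
        if (action == "Open File") {
            enabled["View Data"] = true;
            enabled["Preprocess"] = true;
        } else if (action == "Preprocess") {
            enabled["PCA"] = true;
        }
    }
};
```

The control logic lives in the messages between objects rather than in one central program, so adding a fifth or fiftieth menu item does not require restructuring the whole application.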
Take the example above, involving the four actions of 'Open File' (action A), 'View Data' (B), 'Preprocess' (C) and 'PCA' (D). As the program is entered, the only sensible action is A. Once A has been performed, it is possible to take actions B or C. After choosing a preprocessing option (C), it might be desired to perform PCA (D). If the principal components are not as desired, it might be useful to change the method of data preprocessing (C), and then return to PCA. Action A must always be performed before actions B, C and D. Action B can be performed at any time after action A. Action D must be performed after action C. The easiest way to control this is simply to switch actions, or their related objects, on and off, either by hiding them or disabling them, according to messages received from other objects. Whereas software involving only four objects is relatively easy to control, software involving several hundred objects would soon become confusing without this facility.

Dynamic Creation of Objects

Sometimes it might be desirable that the software itself creates objects. For example, the user may request several graphs. In advance, it would not be possible to predict how many such graphs. A simple approach might be to define ten graph objects and allow the user to reveal each graph as and when he/she decides to plot a new graph. After the tenth, the user is either stopped or else cycles back to fill up the first graph. An indefinite number of graphs could be created by simply offsetting each successive graph from the previous one. If the screen fills up, this is the user's problem. Of course, extra properties could be added to these graphs, for example allowing the possibility of clearing or hiding any given graph as required. Dynamic creation of objects can have very practical applications in the processing of instrumental data.
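Dynamic creation of an indefinite number of graphs can be sketched as follows. This is a hypothetical modern-C++ illustration (the `Graph` and `GraphManager` names are invented): graph objects are created on demand, each offset from the previous one, rather than pre-defining a fixed number in advance:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical graph object: only the properties needed for the sketch.
struct Graph {
    double x;      // position on the screen
    double y;
    bool visible;
};

// Hypothetical manager that creates graph objects dynamically, as and when
// the user requests a new plot.
class GraphManager {
public:
    // Create a new graph, offset from the previous one so graphs do not
    // stack exactly on top of each other.
    Graph& newGraph() {
        double offset = 10.0 * static_cast<double>(graphs_.size());
        graphs_.push_back(std::make_unique<Graph>(Graph{offset, offset, true}));
        return *graphs_.back();
    }
    std::size_t count() const { return graphs_.size(); }

private:
    std::vector<std::unique_ptr<Graph>> graphs_;  // as many as requested
};
```

Extra properties (clearing or hiding a given graph) would be added to `Graph` in the same way as `visible` here.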
Consider, for example, multi-stage analysis of chromatographic data, where the later phase involves operations such as integration, factor analysis or PCA. An early step may involve smoothing the data, using Savitzky-Golay filters of various types. The resultant smoothed chromatograms (or sets of smoothed chromatograms) could become objects in their own right. In practice, each object could be collapsed into an icon. If the user wishes to explore an object, he/she opens it up, by an appropriate action, and then performs various operations on it. If desired, he/she could explore two or three such objects simultaneously, arranging the results of the calculations in windows around the screen.

Programming Environments

General

There are various levels of OOP, according to what facilities are required. Theoretically, most languages can support some features of OOP, even early languages such as FORTRAN (the new version, FORTRAN 90, is a fairly structured programming environment), so there is no such thing as a single ‘best’ environment. A great deal depends on the needs of the user. In this section, two widespread environments are explored. These are chosen as practical environments that are likely to appeal to the analytical chemist who wishes to use OOP but not to delve too deeply into how a computer works.

Toolbook5,6

Toolbook, developed by Asymetrix, is one of several ‘Windows construction sets’. Effectively, much of the groundwork of moving from Windows to useful objects has already been done by the developers. Toolbook is by no means unique in this way, but is discussed here as a good example of an OOP approach that can be picked up without much prior technical knowledge. Toolbook is primarily used for applications such as multimedia or CAL (computer-aided learning). It is not a very efficient scientific or numerical programming environment.
Nevertheless, it can easily be interfaced to numerical routines, as described below, and is an excellent approach for designing screens and interacting with the user.

Toolbook works at ‘Author’ and ‘Reader’ levels. Author level allows the user to construct screens and change programs, whereas Reader level simply allows use of the software. It is legally permitted to distribute Reader level software to users without access to the entire Toolbook package, meaning that wide distribution of material is possible.

The key to using Toolbook is to organize an application into a ‘book’. Each book consists of a number of pages, each in turn normally represented by a screen with various objects on it. Objects are arranged hierarchically (Fig. 6). At the top of the hierarchy is the book. Next comes the background. This is at the back of each page and may, for example, contain a title, an icon and so on. The background appears on each page and can, in its own right, contain a number of objects. Each page is the next stage in the hierarchy. On the page are objects such as buttons, message boxes, menus and lists. These objects can also be grouped, adding extra levels to the hierarchy: for example, a title, a list, an option box and a button could form part of the same group (Fig. 7). The title simply describes the box. The list may, for example, provide a list of files from which the user selects, the option box several options, and a button is pressed when the user has completed the selection. It is also possible to enclose groups within groups. This hierarchy organizes event handlers as discussed under General Principles of OOP, so it is possible to determine the effect of a mouse event or keyboard input anywhere on the screens.
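The way such a hierarchy routes events can be sketched in Python. This is illustrative only; Toolbook's actual message-passing is richer than this, and all names here are invented.

```python
class ScreenObject:
    """An object that handles an event itself or passes it up the hierarchy."""
    def __init__(self, name, parent=None, handlers=None):
        self.name = name
        self.parent = parent          # group -> page -> background -> book
        self.handlers = handlers or {}

    def handle(self, event):
        if event in self.handlers:
            return self.handlers[event]
        if self.parent is not None:
            return self.parent.handle(event)  # event 'bubbles' upwards
        return None                           # unhandled anywhere

book = ScreenObject("book", handlers={"keyDown": "book-level handler"})
page = ScreenObject("page 1", parent=book)
button = ScreenObject("OK button", parent=page,
                      handlers={"buttonUp": "run selection code"})

assert button.handle("buttonUp") == "run selection code"
assert button.handle("keyDown") == "book-level handler"  # found at book level
```

An event that no object on the page claims is eventually answered at book level, which is why a single handler can govern keyboard input for a whole application.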
For example, a small portion of the screen might be dedicated to typing text, such as the name of a file. If the user centres the mouse elsewhere on the screen, he/she does not wish to enter text, and so it is not sensible to enable the keyboard input event handler elsewhere. Only in this small area of the screen is the event handler enabled.

Various actions from the user can be made to change pages. There are several ‘special effects’ that move to new pages, such as clearing the screen from the middle, dissolving the screen and so on.

Fig. 6 A hierarchical organization of objects in Toolbook.

These events can also be used to provide what appears like animation and motion, with slightly different pictures on each page: the user does not know he/she is turning a page in this case. Movement around pages can be a very sophisticated operation. In a conventional book, pages are numbered from beginning to end, and the reader is aware of skipping from one page to another. In Toolbook, pages can have names as well as numbers, and it is possible to move to any other page as required. This is an immensely powerful feature, allowing the user to skip back and forth to HELP screens, or to browse through a contents list and so on. The human mind does not naturally obtain information in a sequential manner, but skips around from place to place. Computational packages such as Toolbook permit this, unlike conventional books.

There are many ways of organizing a book (Fig. 8). A hierarchy is a natural way of searching topics and subtopics. A decision tree could be very valuable when different approaches to data analysis are attempted: each option leads to a different page, and so on. A network allows the user to access related topics.

Being a high-level package designed for developing good multimedia applications on PCs, Toolbook contains a very wide range of options for changing the appearance of objects.

Fig. 7 Grouping of objects.

Fig. 8 Ways of organizing a book: (a) hierarchical; (b) decision tree; (c) network.

Especially useful is the grouping facility. It is possible to obtain libraries of ‘pictures’. These are often arranged into hierarchical groups. For example, a face might be organized into eyes, nose, hair and so on. Each feature, in turn, has its own properties. Ungrouping a face allows the user to work on each main feature rather than a mass of small details. Objects can be changed in size. Photographs can be imported, using, initially, a scanner to obtain a bitmap picture file. These can be organized around the screen as desired.

In addition to hierarchical organization of objects, each object has a level of visibility. When revealed on the screen, higher level objects cover lower level objects. So, for example, a graph can be drawn to cover a text box on the screen. This is useful for animation, because an object such as an arrow can appear from underneath another object. It also has important uses in scientific graphing, where some objects are hidden by others. Animation involves moving objects around the screen; examples of arrows pointing the user in the right direction are frequently encountered. Shrinking and expanding are a common form of animation, useful in scientific graphics.

Objects have scripts attached to them. This is code describing what to do when certain events occur. To see or change the code of any object in Author mode, it is only necessary to select the object, using the mouse, and then view its script, using a Toolbook menu command. Readers, of course, cannot view or change the script. A text box opens up, which allows editing, writing, cutting and pasting of script as required. Table 1 illustrates a simple script, which is attached to an object (perhaps a picture that says ‘HELP’ would be appropriate).
The script is activated if the mouse enters the object and is double clicked (to handle buttonDoubleClick). The request command puts up a box on the screen with the appropriate question (Do you wish to obtain further information?) and two buttons to press (Yes and No). If the Yes button is pressed, the current page is cleared and a page named ‘Help’ is displayed (go to page "Help"). The special variable It is used to indicate input in many circumstances. It does not take long to become acquainted with Toolbook’s scripting language. The language is powerful, and contains full control structures (e.g., if . . . then . . . else; do loops; increments), graphics control and arrays.

One major feature of Toolbook is the ability to ‘record’ actions and convert these into scripts. There are two ways for the Author to change the appearance of an object. The first is, of course, by writing code. The second is to use mouse control and other events to change its properties. For example, a small graph might be selected from a menu of icons. This graph is then moved into the middle of the screen, its size is expanded and the colours are changed. By recording these events, a new line of code is written each time any feature of the object changes. It is then only necessary to stop the recording. A small program is then automatically created which can be pasted into the code of the object. This code can be edited if required, and made to respond to any desired event. This facility vastly speeds up development of software.
The analytical chemistry user will wish to link the user interface provided by Toolbook to programs that perform scientific functions such as displaying graphs, performing PCA, filtering spectra and calculating statistical detection limits. It is here that intelligent choice of languages becomes important.

Table 1 A simple Toolbook script

to handle buttonDoubleClick
  request "Do you wish to obtain further information?" with "Yes" or "No"
  if It is "Yes" then
    go to page "Help"
  end if
end buttonDoubleClick

It is, indeed, possible to perform graphics and arithmetic in Toolbook, which has a fully functional language, able to handle arrays and arithmetic operations, read and write disk files and so on. However, the language is awkward to use, and there are two major problems. The first is that Toolbook takes up a great deal of memory, and there is limited extra memory available for handling very large arrays. More seriously, Toolbook is not a compiled language, so it performs intensive calculations extremely slowly. For simple applications, such as teaching students chemometrics on small example datasets or doing linear regression on small datasets, Toolbook is probably adequate. For more intensive operations it is best to write the code in another language.

The simplest method is to write and compile a program in another language running under Windows, such as FORTRAN, BASIC, C or Pascal. As will be discussed later, there is a balance between working in a completely object oriented environment and working with conventional programming languages. A routine to perform PCA in FORTRAN is likely to be fast to develop, easy to implement and efficient. Conventional programming was oriented towards mathematics, and the historic programming environment was acceptable for straightforward numerical applications. The program should be compiled under Windows, and it is normal to produce a file with a .EXE extension. This type of file is a straightforward runtime program.
In a conventional DOS-based environment we might type RUN PCA.EXE on the keyboard to run a PCA program. This can easily be done in Toolbook also; Table 2 is all the code that is required. When the mouse button is released (to handle buttonUp) the previously developed program is run (run pca.exe), and then control is passed back to Toolbook. The user does not need to be informed that he/she is running a previously written EXE program: all he/she notices is the Toolbook interface.

There are, however, some considerable limitations of this simple approach. The main one is that many packages have their own screen control. Microsoft FORTRAN, although a very powerful language, has its own method of controlling the screen, so if PCA.EXE is a FORTRAN program, what happens is that the Toolbook screen disappears, the FORTRAN screens take over, and only when the program is finished does the Toolbook screen reappear. This happens even if there is no disk input or output. Better interfaces are available for Microsoft C and Visual BASIC and, if there is no screen input or output, it is possible to run programs in these languages without any change to the Toolbook screen: the programs read files, manipulate data and create new files which are then accessed by Toolbook. Alternatively, graphs and other output generated from Visual BASIC can be placed appropriately in windows on the Toolbook screen that clear when the user requests it, without destroying the overall screen layout. Hence powerful calculations can be controlled by Toolbook, without needing to write complex numerical routines to manipulate spectra, chromatograms, etc., in Toolbook, so taking advantage of the best of two worlds.

Finally, it is also possible to use what are called Dynamic Link Libraries (DLLs). These are mainly oriented towards running C and Toolbook simultaneously.
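Before turning to DLLs, the run-an-external-EXE pattern described above can be sketched in Python: an interface layer shells out to a separately compiled numerical program and resumes when it finishes. The `subprocess` call is a modern stand-in for the Toolbook `run` command, and `pca.exe` is the hypothetical program name used in the text.

```python
import subprocess
import sys

def run_external(cmd):
    """Run a previously compiled numerical program; the interface layer
    simply waits for it to finish and then resumes control."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout

# Stand-in for 'run pca.exe': we invoke the Python interpreter itself
# so the example is runnable anywhere.
code, out = run_external([sys.executable, "-c", "print('PCA done')"])
assert code == 0
assert out.strip() == "PCA done"
```

As in the Toolbook case, the numerical program communicates through files (or here, standard output), and the user only ever sees the interface layer.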
It is possible to develop numerical routines in C that are directly accessed by Toolbook, analogous to extra Toolbook commands. Effectively, Toolbook simply calls the subroutines or procedures from C directly. This approach leads to efficient code, but is more complex to program, and it is probably not necessary for the analytical chemist to construct DLLs unless he/she has a very pressing need.

Table 2 A Toolbook script that runs a scientific program

to handle buttonUp
  run pca.exe
end buttonUp

Visual BASIC7-9

Visual programming languages are a clear next step towards the widespread acceptability of OOP. Microsoft has developed visual versions of both C++ and BASIC. As the latter is likely to be more widely accepted by the laboratory-based chemist, the discussion is restricted to Visual BASIC (VB).

The key to VB is to understand how objects and data are organized. There are various ‘units’ in VB. A package consists of several ‘forms’ (which have some analogy to pages in Toolbook) and related ‘modules’ which contain code. A package is defined by a ‘make file’, which contains the names of all forms and modules associated with the package, and some other information such as the start-up form: this corresponds to the screen first presented to the user when beginning the package. It is possible to produce compiled VB code and create .EXE files. These are valuable when transporting software, protecting code and linking to other packages.

A form consists of a number of common objects, such as text boxes, grids, buttons, menus and dialogue boxes. The Professional version contains an extensive set of extra object classes, such as the graph object, which produces most types of line graphs, pie charts, scattergrams and so on. It also contains three-dimensional controls that lend a nicer ‘look’ to the package. Some useful facilities, such as gauges, are valuable to the scientific user.
In this paper, it is assumed that the Professional version is available.

There are a large number of common events in VB, most of which are mouse activated. The Click event is very common: if the cursor is over an object and the mouse is depressed, this event is activated. DragDrop involves using the mouse to drag (move around) an object. MouseMove can permit ‘touching’ of objects to activate events: for example, if the mouse is within the region of an object, events continue to be activated until the mouse leaves the area of the object. Change is useful for objects such as scroll bars. A GotFocus event occurs when an object gets the focus; this is useful, for example, when moving from one form to another.

Each object has a certain number of allowed events, dependent on the type of the object. For example, a button, a menu, a graph and a form have different allowable events. Some types of events are common to more than one object; most objects, for example, accept the Click event. On activating an event (e.g., moving the mouse into the region of an object and clicking it), a subroutine is called of the form OBJ_EVENT, where ‘OBJ’ is the object name and ‘EVENT’ the event name. When developing software, the programmer is automatically presented with a list of allowed subroutines, corresponding to allowed events. All he/she has to do is select which subroutine is required and then write the code.

Consider entering a ‘button’ called ‘correl’ which, when clicked, calculates the covariance between two variables x and y and prints this on the screen. The code for this event/button is given in Table 3. For readers with a knowledge of general programming languages, this code is easy to read and can be directly understood scientifically. The main difference between OOP and conventional programming is that the code must be organized as a subroutine and activated by an event. Events do not just result in calculations.
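VB's OBJ_EVENT naming convention amounts to dispatching on a composed name. A rough Python analogue follows; the dispatch mechanism and all names here are invented for illustration, not VB's actual machinery.

```python
class Form:
    """Handlers are found by name, mimicking VB's OBJ_EVENT convention."""
    def correl_click(self):
        # Corresponds to VB's Sub Correl_Click()
        return "covariance calculated"

    def dispatch(self, obj, event):
        handler = getattr(self, f"{obj}_{event}".lower(), None)
        if handler is None:
            return None        # this object has no handler for this event
        return handler()

form = Form()
assert form.dispatch("Correl", "Click") == "covariance calculated"
assert form.dispatch("Correl", "DragDrop") is None
```

The point mirrors the text: the environment connects events to code purely by name, so the programmer only fills in the subroutine bodies.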
They can also result in revealing hidden objects (e.g., graphs, further buttons and menus), motion, presenting the user with text and (see below) changing forms.

A key feature is that each object also has ‘properties’. These properties are inherent to the object, and involve features such as the colour, font style and shape of the object. Some objects can be used to change the value of parameters; scroll bars are common examples. These values may also be defined as properties of objects. There are two ways in which properties can be changed. The first is by the developer of the software: he/she can set up these properties when defining an object. These are the ‘starting’ properties. The second method is while the program is running. An object’s property is referred to as Obj.Prop by other routines in the program. Table 4 illustrates such an example. As the form ‘Form’ is loaded (this happens when it is called by the overall package), it changes the properties of another object ‘Select’, which is a scroll bar, and sets the minimum and maximum values of Select to 1 and 15. This means that the extreme ends of the scroll bar correspond to these particular numbers. The routine Select_Change occurs when the user scrolls the scroll bar. Select.Value will be a number between 1 and 15. This property is used to define a variable x. Another object, ‘Info’, has a caption that provides the value of x. Note that the caption, in itself, is also a property, rather like the title of a graph. It is important to notice how objects can change the properties of other objects.

Scientific subroutines do not necessarily have to be attached to an object. VB allows for general subroutines. If these subroutines are common only to a form, then they are attached to that form. Sometimes it is necessary to share subroutines throughout an entire package. These can be stored in modules, which simply contain code.
It is possible to build up libraries of subroutines, e.g., graphical and statistical libraries, and include these modules in a new package as and when required.

To the scientific programmer, use and declaration of variables is an important topic. For example, a particular package might be used to analyse IR spectral data. Once the package is entered and the data read (e.g., from disk), the values should be common to a series of objects and subroutines. ‘Global’ variables can be declared in a module and are common throughout the package. Variables declared within a form are common only to the form; if there are several forms in a package, these variables are destroyed when leaving the form. Finally, some variables are local only to a specific object, and their values are not known to other objects. Sometimes this feature can be immensely useful. For example, a package may be written to analyse a set of spectra. Each spectrum can be an object within a form, and the ‘duplicate’ variables held in memory simultaneously. There are several other ways of storing and declaring variables, a few of which are not implemented in the common versions of C.

Table 3 Simple Visual BASIC program to calculate covariance

Sub Correl_Click ()
  Rem x, y and imax have been defined externally
  redim mean (1 to 2) as single
  dim cov as single
  mean (1) = 0
  mean (2) = 0
  cov = 0
  rem Calculate means
  for i = 1 to imax
    mean (1) = mean (1) + x(i)/imax
    mean (2) = mean (2) + y(i)/imax
  next i
  rem Calculate covariance (not the only way of doing this!)
  for i = 1 to imax
    cov = cov + (x(i) - mean (1))*(y(i) - mean (2))/imax
  next i
  rem Print covariance
  print "Covariance of x and y is" & cov
End Sub

Table 4 Two Visual BASIC subroutines to use a scroll bar to select a value between 1 and 15

Sub Form_Load
  Select.Min = 1
  Select.Max = 15
End Sub

Sub Select_Change
  x = Select.Value
  Info.Caption = "Spectrum number" & x & "has been selected"
End Sub
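The computation in Table 3 is the ordinary population covariance. A quick Python check of the same arithmetic (nothing here is VB-specific; the data are invented):

```python
def covariance(x, y):
    """Population covariance, mirroring the two-pass loop of Table 3:
    first the means, then the averaged cross-products of deviations."""
    imax = len(x)
    mean_x = sum(x) / imax
    mean_y = sum(y) / imax
    return sum((xi - mean_x) * (yi - mean_y)
               for xi, yi in zip(x, y)) / imax

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]
assert abs(covariance(x, y) - 2.5) < 1e-12
```

Note the divisor imax (population form); dividing by imax - 1 would give the sample covariance instead, a choice the Table 3 comment ("not the only way of doing this!") hints at.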
Control arrays, or arrays of objects, are important features in VB. A user may wish to analyse an indefinite number of spectra simultaneously. It is possible to create an object Spec(0) (which corresponds to the first spectrum: in VB, 0 is normally the first number in an array rather than 1, unless specified otherwise) and then create further objects Spec(1), Spec(2) and so on as the program is run. These objects can all have similar properties, yet different data. They do not have to be defined in advance of the program: the user can create them, and so control the appearance of the screen and the number of datasets he/she wishes to look at simultaneously, as the program runs.

One major way of controlling a program is to move between forms. For example, one form may be used to read in data and another to perform PCA. Consider setting up a button called PCA on the main form. By entering and clicking this button, the user leaves the main form and enters a new one which performs PCA. Table 5 provides an example of the code for this button. There are only certain allowed events for each type of object. The Click event is initiated when the button is clicked [Sub PCA_Click()]. The first action is to make the new form visible (New.Visible = True). A second window, containing the form for the PCA calculation, appears; at this stage, the new window will appear behind the old window. Then the focus of control is passed to the new window (New.SetFocus), and the new window appears in front of the old window. Finally, the old window disappears (Old.Visible = False). It is not, of course, necessary to clear the old window, but the screen could soon become cluttered with windows, and the user could move from window to window with a mouse, often causing confusion if certain events have to be organized in sequence. The events are illustrated in Fig. 9.

Sometimes ‘customized’ forms can be created to do certain jobs, e.g., manipulating spectra.
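Creating a control array of spectrum objects on demand can be sketched in Python. This is an invented analogue of VB's Spec(0), Spec(1), ... control array, with hypothetical names and data.

```python
class Spectrum:
    """One member of a control array: same properties, different data."""
    def __init__(self, index, absorbances):
        self.index = index
        self.absorbances = absorbances
        self.visible = True

spec = []  # grows as the user loads spectra; nothing is predefined

def load_spectrum(absorbances):
    # New members are created while the program runs, indexed from 0 as in VB.
    spec.append(Spectrum(len(spec), absorbances))
    return spec[-1]

load_spectrum([0.10, 0.25, 0.40])
load_spectrum([0.05, 0.30, 0.20])

assert [s.index for s in spec] == [0, 1]   # Spec(0), Spec(1), ...
assert spec[1].absorbances[2] == 0.20
```

Because every member shares the same class, one piece of handler code serves the whole array, however many spectra the user eventually loads.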
These forms can be made into small windows that appear on top of larger forms, and are an alternative to constantly skipping from one form to another.

To the scientific user, a very important feature of VB is the ability to link conventional numerical routines to objects.

Table 5 Visual BASIC code to leave a form

Sub PCA_Click ()
  New.Visible = True
  New.SetFocus
  Old.Visible = False
End Sub

Table 6 Visual BASIC code to call a routine PRINCOMP when entering form NEW

Sub New_GetFocus ()
  Call princomp
End Sub

Immense resources have gone into developing VB. Although certain aspects are undoubtedly inelegant, it is most unlikely that a rival ‘all-purpose’ object oriented programming environment will be commercially available in the near future. Hundreds or possibly thousands of man-years go into developing a new programming environment such as VB, with a potential market in the millions, and smaller, but potentially more elegant, approaches are unlikely to obtain the relevant commercial backing.

There are, however, some drawbacks to VB. Toolbook offers a much wider range of screen effects. Colours of buttons, methods of clearing screens, very fine effects when objects are overlapping, etc., are better treated in Toolbook, which has been developed as a presentation package. The future world of multimedia, where videos and spoken text will combine with the written word and the computer screen, may depend on these sophisticated effects. Hierarchy and grouping of objects are much less sophisticated than in Toolbook. Another problem with VB is that the compiler is slow, so scientific programs do not always run very rapidly: a typical PCA routine runs about 3-4 times more slowly in VB than in C on a 486 machine.
In a few years’ time, as machines become faster and the VB compiler is improved, this difference is less likely to matter, but a user who requires speed can use a dynamic link library to call, directly, a routine written in, for example, FORTRAN or C. In most sophisticated packages, input/output and the user interface occupy most programmers’ time, so in a professional environment it may only be necessary to call a very small number of numerically intensive routines from another language.

Example of Potential Application to Analytical Chemistry

The potential applications of OOP techniques to analytical chemistry are very large. A simple example will be discussed in this section. It is important to understand that the first step is to design a series of objects and screens. The language that is subsequently used (e.g., Visual BASIC, Toolbook) is less crucial, and we shall not provide technical details of the program statements.

HPLC with diode-array detection (HPLC-DAD) provides a very rich source of data for the analytical chemist. It is an excellent example of how OOP can help provide an informative display of information.

Fig. 9 Steps of leaving and revealing a new form, corresponding to Table 5.

It is desirable to start with a screen containing a few ‘control objects’. These objects might be in the form of a menu at the top of a screen, some icons or some buttons. A simple screen (Fig. 10) might consist of two menu items at the top, one which allows the user to ‘load a chromatogram’ and the other which ‘exits the program’. The ‘Load’ button has a property such that, when activated, it displays a list of files. Code is attached to this button. When the user activates it (e.g., by the mouse being depressed in this portion of the screen), another object (Fig. 11) becomes visible. The ‘Exit’ button simply exits the program, probably clearing the screen in the process. Fig. 11 illustrates a typical list of files.
The scroll bar on the left moves through this list if it is too large to be displayed completely on the screen. The user selects a file by double clicking the mouse over a filename. Attached to this ‘event’ is some code. The code first checks whether the file is of the correct format. If not, an error message is generated; if it is, the file is opened and read into the computer memory. The ‘list’ object is then made invisible again, and extra menu items appear (Fig. 12).

There are several possible ways of displaying an HPLC-DAD trace. Conventional software is often clumsy and hard to use, so users are often limited in what they try to do. Also, it is often very time consuming to develop conventional methods for graphical display and manipulation of spectroscopic and chromatographic data. Yet today’s analytical chemist can generate huge amounts of data in a very short time, and can benefit immensely from rapid graphical handling of these datasets. Even more useful is to be able to develop, rapidly, software for custom handling of instrumental data in the laboratory. The system described below might take a few days’ programming using a powerful object oriented environment such as VB, compared with months or even years in the near past.

Fig. 10 A simple screen with (A) two menu items.

Fig. 11 Screen displaying (A) list of files and (B) scroll bar.

Fig. 12 Screen displaying (A) three extra menu items.

Fig. 12 shows three extra menu items that become visible after a file is selected. ‘Spectra’ involves viewing spectral cross-sections at given times, ‘Chrom.’ chromatographic cross-sections at a single wavelength and ‘Overall’ summary graphics such as contour and stack plots. Other facilities, e.g., profiles and integrals, could easily be added. Selecting the ‘Spectra’ object has the effect of activating a ‘submenu’ whereby the user is asked whether he/she wishes to draw out a single spectrum or overlay several spectra.
We shall follow some possible options after the ‘single’ submenu object is selected (Fig. 13). As soon as the ‘single’ object is activated, a portion of the chromatogram is displayed on the screen. Note that the chromatogram has many properties, for example the vertical and horizontal scales, and whether the chromatogram is at a single wavelength, is a sum over several wavelengths or is a profile. These properties can be set by the program or changed by the user. Other properties, such as the position on the screen, the line type and the colour, can also be changed. Note that properties can have chemical significance.

In Fig. 14, it is assumed that only a portion of the chromatogram of interest is displayed. It would be possible to have various buttons that control this display. Alternatively, the chromatogram object could be changed using various controls, such as cursor, keystroke or mouse events (e.g., pressing a ‘right-hand cursor’ expands the right-hand scale). It is a simple matter to include an event handler associated with the chromatogram object.

The most significant event, however, is one that selects a retention time for the spectrum. This might be controlled by a mouse. For example, if the mouse enters the area of the chromatogram object, a vertical line is displayed at the position of the mouse (a ‘mouse enter’ event). If the mouse is double clicked in the area of the object, a spectral cross-section is then calculated at that particular time (Fig. 15).

Fig. 13 Screen displaying (A) submenu.

Fig. 14 Screen displaying (A) portion of chromatogram.

Fig. 15 Screen displaying (A) spectrum 1; (B) properties of spectrum 1; and (C) time of spectrum 1.
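Extracting the cross-sections discussed above from a DAD data matrix is straightforward once the trace is held as a time × wavelength array. A Python sketch (the data and function names are invented; a real trace would come from the loaded file):

```python
# A DAD trace as a time x wavelength matrix of absorbances (invented data).
trace = [
    [0.01, 0.02, 0.01],   # t = 0
    [0.10, 0.30, 0.20],   # t = 1
    [0.40, 0.90, 0.60],   # t = 2  (peak maximum)
    [0.12, 0.25, 0.18],   # t = 3
]

def spectrum_at(trace, time_index):
    """Spectral cross-section: one row of the matrix at the chosen time."""
    return trace[time_index]

def chromatogram_at(trace, wavelength_index):
    """Chromatographic cross-section: one column, a single wavelength."""
    return [row[wavelength_index] for row in trace]

# Double clicking at t = 2 in the chromatogram object would display:
assert spectrum_at(trace, 2) == [0.40, 0.90, 0.60]
assert chromatogram_at(trace, 1) == [0.02, 0.30, 0.90, 0.25]
```

The ‘Spectra’ and ‘Chrom.’ menu items of Fig. 12 correspond to the row and column views of the same matrix; the event handler merely translates the mouse position into a time (or wavelength) index.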
It would be possible to organize the screen further to include a portion where numerical information is stored, e.g., the time of each spectrum. Spectrum 1 is now displayed. Note that it also has properties of its own. For example, the scale of the ‘absorbance’ axis can be changed. Are the vertical and horizontal axes to be labelled? Buttons that allow properties of the spectrum to be changed can be incorporated as part of the display.

One major advantage of OOP is that it is possible to create arrays of objects. In conventional programs, arrays of numbers are very common. For example, the array x(100) might consist of a set of absorbances at 100 wavelengths; hence the value x(57) = 0.63 implies that at wavelength 57 there is an absorbance of 0.63 units. Equivalently, it is also possible to create much more complex arrays. Fig. 16 is the result of selecting three spectra at three different times. Each spectrum is a member of an array [spectrum (1), spectrum (2) and spectrum (3)]. However, these objects have far more properties than just simple numbers. Each spectrum consists of a set of absorbances, with appropriate scales on horizontal and vertical axes, line types, etc. Attached to the objects are properties such as the time at which the spectra were recorded. It is possible to attach other objects, such as buttons, to the spectra that allow the user to display this extra numerical information if required.

The spectra in Fig. 16 can also be rearranged on the screen. Each object can permit a drag event, allowing it to be moved to a different position using the mouse. A print button could permit a custom printout of a single spectrum. A spectrum can be ‘brought to the front’ or ‘sent behind’. In Fig. 16, spectrum 3 is in front and spectrum 1 at the back. Pressing a ‘bring to front’ button on spectrum 2 will place this on top of the other spectra.
It should also be possible to make a spectrum invisible; this would dismiss the spectrum from the screen. Note that events activated using one object can change the properties of other objects: making a spectrum invisible could also remove the corresponding vertical line in the chromatogram object.

Conclusion

The ideas of OOP have developed over 25 years from an esoteric idea of computer scientists to a very practical tool for all computer users. The widespread introduction of Windows-type operating systems has provided enormous commercial impetus for the development of object oriented programming tools. OOP is now available to the scientific user without the need to master complex languages such as C++; Visual BASIC and Toolbook are two such examples. OOP helps modular program development and emphasizes the user interface. Another major advantage of an object oriented package is that the scientific user can readily explore different datasets and different approaches to data analysis interactively, whereas in the past it might have taken several minutes or even hours to analyse a given problem via conventional software. It is important, however, that the scientific user keeps the advantages of OOP in perspective. The original computer programming languages were, and still are, excellent tools for numerical programming, and in many instances it would be wasted effort to develop object oriented tools for these tasks. Routines to perform PCA or Fourier transformation are still best developed using conventional programming techniques. Using modern approaches, such as Visual BASIC, it is possible to attach conventional code to objects or modules, using the objects to interact with the user. We are likely to hear much more about object oriented programming in the future, and the analytical chemist should be aware of the principles and advantages of such an approach.

Fig. 16 Screen displaying (A) spectrum 2 and (B) spectrum 3.

Paper 3/05235D
Received September 1, 1993
Accepted April 18, 1994
ISSN:0003-2654
DOI:10.1039/AN9941902149
Publisher: RSC
Year: 1994
Data source: RSC
|
10. |
Tutorial review. Calculating standard deviations and confidence intervals with a universally applicable spreadsheet technique |
|
Analyst,
Volume 119,
Issue 10,
1994,
Page 2161-2165
J. Kragten,
Preview
|
PDF (793KB)
|
|
Abstract:
Analyst, October 1994, Vol. 119, 2161

Tutorial Review

Calculating Standard Deviations and Confidence Intervals with a Universally Applicable Spreadsheet Technique

J. Kragten
Laboratory of Analytical Chemistry, University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV Amsterdam, The Netherlands

A quick and universally applicable spreadsheet method is outlined for the calculation of standard deviations based on the general formula for error propagation:

s_R² = Σᵢ (∂R/∂xᵢ)² sᵢ² + 2 Σᵢ<ⱼ (∂R/∂xᵢ)(∂R/∂xⱼ) ρᵢⱼ sᵢ sⱼ

With this method, standard deviations are calculated numerically without violating the condition of mutual independence, with a substantial time gain and with no risk of calculating errors. Satterthwaite's approximation of the degrees of freedom is a logical extension of the technique, with which confidence intervals can be easily established. Direct insight is obtained about the separate contributions of the different error sources.

Keywords: Numerical method; error propagation; estimated degrees of freedom; standard deviation; confidence interval

Introduction

Quality improvement of laboratory results forces us to look critically, more than ever before, at our statistical manipulations. The availability of computers avoids time-consuming calculations and permits calculations free from errors. However, there is still no guarantee that the final results are correct. For instance, the estimation of standard deviations in error propagation is a field where 'rules' are often applied wrongly. It is well known that assumptions have been made in the derivation of these rules and that the underlying conditions have to be satisfied. Too often, however, the conditions are violated unintentionally and mistakes are made. This can be explained as follows. The simple rules

s_R² = s_x² + s_y²  when R = x ± y
(s_R/R)² = (s_x/x)² + (s_y/y)²  when R = xy or R = x/y  (1)

are well known. They hold only for mutually independent variables.
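These additive rules are easy to verify numerically. The following quick Monte Carlo check (ours, not part of the paper) simulates independent x and y and confirms that variances add for a sum and relative variances add for a product:

```python
import random
import statistics

# Monte Carlo check of the simple rules in eqn. (1) for independent x, y.
random.seed(1)
sx, sy = 0.3, 0.4
xs = [10 + random.gauss(0, sx) for _ in range(100_000)]
ys = [20 + random.gauss(0, sy) for _ in range(100_000)]

# R = x + y: variances add, so s_R should be sqrt(0.3**2 + 0.4**2) = 0.5
s_sum = statistics.stdev(x + y for x, y in zip(xs, ys))

# R = x*y: relative variances add, so s_R/R should be close to
# sqrt((0.3/10)**2 + (0.4/20)**2), about 0.036
s_prod_rel = statistics.stdev(x * y for x, y in zip(xs, ys)) / (10 * 20)

print(round(s_sum, 2), round(s_prod_rel, 3))
```

The same simulation run with correlated x and y would show both estimates drifting away from the rule, which is the failure mode the rest of the review addresses.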
In practice, many observations are indeed mutually independent, which makes most people think that the condition of mutual independence is satisfied automatically when the rules are applied, but this is not necessarily true. [For the quantities x, y, etc., the best estimates (the mean values) will generally be substituted in the equation for R. It depends on the inquiry whether for s_x, s_y, etc., the values of the population or of the mean will be used in eqns. (1) and (2).] The rules in eqn. (1) may only be applied when we are dealing with either an exclusive mix of additions and subtractions or with a combination of just divisions and multiplications. As soon as R is a more complicated function of x, y, . . ., the simple rules lead to erroneous results. This will be shown with the calculation of the surface of a block: R = 2(lb + bh + hl). Most workers will split R into the parts lb, bh and hl. The rules are applied to these separate parts and the standard deviations of these separate parts are obtained. Eventually the separate parts are summed to obtain R and the simple error propagation rules are applied again to find s_R. At this point the error is made: commonly the separate parts of R have some variables in common and hence are mutually dependent. (Use of the word correlation is restricted to covariance between measured quantities. Terms containing the same variable in a mathematical relationship will be called dependent.) The block surface R = 2(lb + bh + hl) is a good example, with the product terms lb, bh and hl sharing b, h and l. Application of the simple rules generates a standard deviation s_R which is a factor √2 too low. In more complicated cases this factor may even be as large as 10. The only way to get rid of these errors is by using the general eqn. (2), in which R is the calculated Result, and x, y, z, . . .
are the measured values whose standard deviations and covariances (usually of the mean) are known from the experiments. The correlation coefficients ρ_xy, etc., are defined as

ρ_xy = covariance(x, y) / √[var(x) var(y)]

In practice, most cross-product terms vanish because ρ = 0. There is an aversion to applying eqn. (2) because of the complexity of its calculations. When eqn. (2) is used in the usual analytical way, we deal with n independent measured values, n differentiations, n + 1 equations and n² + n substitutions. All these manipulations are time consuming, sensitive to calculation errors and cannot be generalized and automated, as they are connected with the specific function R(x, y, z). This can all be prevented by making use of numerically operating spreadsheet programs. In the following, a standardized scheme of manipulations is presented. The procedure is universally applicable, takes into account dependences within the equations, gives much less risk of calculation errors and is executed in a fraction of the time. Although the method can be performed fruitfully on a sheet of paper with the aid of a programmable pocket calculator, it is preferably done with spreadsheet programs such as LOTUS 123, QUATTRO and EXCEL to lower the risk of personal errors.

Spreadsheet Method

Calculation of basic R(x, y, z, . . .)

The calculation scheme is presented schematically in Table 1 for the specific case of four measured quantities x, y, z and u. This is not a restriction: when we deal with n quantities, the number of columns is extended to n + 1 and the number of rows is extended correspondingly. First the manipulations are outlined; later the logic will be explained. In the scheme, numerical differentiation is achieved by changing the diagonal elements by the absolute standard deviations s_x, s_y, s_z, s_u. The procedure is as follows:
(a) Arrange the n measured quantities x, y, z, u, . . ., in the left-most column, from the top down, as a column vector.
(b) Calculate the value of R and put the result underneath the vector x, y, z, u, . . .. When using a spreadsheet program, the equation for R is entered, but the value of R is displayed.
(c) Enter the value of R into the cell underneath by referring to the previous cell and making the cell address absolute (or by transferring its value by hand).
(d) Put the equation of the difference of the last two cells in the following cell in the column (at first, the difference ΔR will be zero in the first column).
(e) Calculate the squared difference in the next cell, entering the equation referring to the cell above (again zero at first).
(f) Copy all filled cells of the first column n times into the columns to the right (n = 4 here, as we adopted four variables in this specific case). An n × n square matrix of measured values appears (inside the double-lined area). The calculated R, the initial value of R, the difference and its square appear underneath again. So far all columns are still identical, but this is modified during the following steps.
(g) Put in the row above the double-lined matrix the standard deviations in column sequence (s_x, s_y, s_z, s_u, . . .).
(h) Add the standard deviations to the diagonal elements of the matrix as shown in Table 1.

In the copying step, equations have been copied as equations and values as values. When the standard deviations are added to the diagonal elements in the double-lined matrix, new values of R will appear in the first row under the matrix. One row lower, all cells contain the (constant) initial value of R. In the next row the differences of the last two cells are displayed. The change of R can be estimated with a Maclaurin series development. Generally, the changes of R are small, which implies that the higher order terms in the series development may be neglected. As only one variable is changed in a column of the matrix, only the first partial derivative appears in each cell. The sequential changes are equal to (∂R/∂x)s_x, (∂R/∂y)s_y, (∂R/∂z)s_z and (∂R/∂u)s_u.
By taking the square of the differences we obtain the terms of eqn. (2). Summing all terms gives the variance of R required. When the measured values are correlated with ρ known, the row with squares can be extended with the corresponding cross-product terms. In the rest of this paper, covariance will not be considered any longer; it is not essential for the message.

Table 1 Schematic representation of the spreadsheet method. Note: for establishing the confidence interval of the true value of R, the best known values of x, y, . . . (their mean values) are taken in the first column together with the standard deviations of the mean in the top row. Generally, the inquiries of the investigation determine whether standard deviations of the mean or of the individual values are substituted in the top row. The bars (in x̄, s_x̄, . . .) have been left out for ease of readability.

Discussion

The schematic calculation offers a number of advantages, as follows. The spreadsheet method is universally applicable. Going from one calculation to another, only the equation for R in the corresponding cell has to be adapted. The rest of the manipulations (repeating R in value, taking the difference, squaring the differences and summing the squares) remain the same. The spreadsheet method is applicable in a standard manner to all error propagation calculations. Direct insight is obtained about dominating contributions to s_R; hence, if the precision has to be improved, the related error source is immediately known. The influence of an improvement can simply be forecast by changing s_x. A low risk of calculation error exists: in the first column the equation for R(x, y, z, u) is introduced, and as soon as the displayed value of R is correct, the copying process will no longer introduce errors. There is no violation of the condition of mutual independence.
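The whole scheme (arrange, perturb the diagonal, take differences, square and sum) can be sketched as a short function. This is our own minimal rendering, applied to the block-surface example from the Introduction:

```python
# A minimal sketch of the calculation scheme: shift each measured quantity
# by its standard deviation, as on the diagonal of the spreadsheet matrix,
# record the change in R, then square and sum.
def kragten(R, values, sds):
    R0 = R(*values)                      # value of R in the first column
    deltas = []
    for i, s in enumerate(sds):
        shifted = list(values)
        shifted[i] += s                  # diagonal element x_i -> x_i + s_i
        deltas.append(R(*shifted) - R0)  # ~ (dR/dx_i) s_i to first order
    sR = sum(d * d for d in deltas) ** 0.5
    return R0, sR, deltas

# Block surface R = 2(lb + bh + hl) with l = b = h = 10 and all s = 0.1:
surface = lambda l, b, h: 2 * (l * b + b * h + h * l)
R0, sR, deltas = kragten(surface, [10.0, 10.0, 10.0], [0.1, 0.1, 0.1])
# sR is the correct sqrt(48) = 6.93; the naive split into lb, bh and hl
# would give sqrt(24) = 4.90, i.e. a factor sqrt(2) too low.
```

Subtracting instead of adding the s values in `shifted[i]` reproduces the linearity check described below.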
The calculated standard deviation will be correct. There is a substantial gain in time. The calculation of R always has to be performed; if its value has been found correctly in the corresponding cell, the rest of the calculation scheme can be performed in minutes, as the only manipulations still to be performed are copying the column n times and adding the standard deviations to the elements on the diagonal. It can be noted that for the calculations a programmable pocket calculator and a piece of paper will serve as well, but this takes some more time in moderately complex calculations and carries a larger risk of errors. Intermediate results can be introduced in the row above R (equation) if they are entered in the cell as equations. (See example 1 in Table 2; the logic will then be clear.) Different functions R_i related to the same measured x, y, . . . can be added from the top down in the left-most column; their s_Ri can be calculated in the same run (see example 2 in Table 3). A linearity check is quickly performed. Eqn. (2) was derived with the assumption that R changes linearly within the ranges s_x, s_y, . . . (i.e., higher order terms in the Maclaurin series are negligible). When eqn. (2) is applied in the usual way, linearity should be checked. In numerical calculations, however, the higher order terms do not vanish. Therefore, linearity is easily controlled by subtracting the s values from the diagonal elements instead of adding them. Note that changes of a few per cent in s_R are not important regarding the uncertainties that standard deviations usually have. The degrees of freedom of s_R can be estimated simply in the last row of the spreadsheet by using the equation of Satterthwaite.1,2 The equation (Table 1) holds for non-correlated variables and is elucidated in the next section.

Degrees of Freedom of s_R

Neglecting the correlation between the measured values, eqn. (2) changes into

s_R² = Σᵢ (∂R/∂xᵢ)² sᵢ²  (3)

Means of four values (or more) follow a normal distribution to a good approximation because of the limit theorem.3 Consequently, their variances will follow a χ²-distribution:

s² = σ² χᵥ²/v  (4)

where v is the number of degrees of freedom of s. If s is the standard deviation of the mean, v = N − 1; in the case of regression, v = N − k (k is the number of constants in the polynomial). The degrees of freedom of s_x, s_y, . . . are exactly known from the experiments. In contrast to s_x², etc., s_R² follows its own distribution, which deviates slightly from a χ² distribution. Satterthwaite4 investigated the distribution of s_R² for a number of cases and concluded that for all cases of practical interest the χ² distribution may be adopted for s_R². He found that when v_x, v_y, v_z, . . . are small, the approximation is worse, but when calculating the confidence interval of s_R the error always remains negligible. For larger v the error still decreases, as the distributions of both s_x², etc., and s_R² approach a normal distribution (limit theorem). The conclusion is that all s² in eqn. (3) can be replaced by substituting eqn. (4):

s_R² = Σᵢ (∂R/∂xᵢ)² σᵢ² χᵥᵢ²/vᵢ  (5)

In eqn. (5) all χ² are statistically variant quantities; the other quantities are constants. The left-hand side of eqn. (5) may be considered as a calculation result, which depends on the statistically fluctuating quantities (the χ²s) on the right-hand side. We can apply again the rules for error propagation and calculate the standard deviations on both sides of eqn. (5). As the variance of χᵥ² itself is equal to 2v, we obtain

σ_R⁴/v_R = Σᵢ [(∂R/∂xᵢ)² σᵢ²]²/vᵢ  (6)

If finally all σs are replaced by their estimators s, Satterthwaite's relationship for estimating v_R is obtained:

1/v_R = Σᵢ (1/vᵢ) [(∂R/∂xᵢ)² sᵢ²/s_R²]²  (7)

This means that the reciprocal of v_R is equal to the weighted mean of the reciprocal vs, in which the weighting factors are formed by the square of the relative contributions of all error sources to the variance of R. v_R is rounded off to the nearest lower integer.
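The extra spreadsheet rows that implement eqn. (7) can be sketched as follows; this is our own rendering, fed with the squared differences Δ² and degrees of freedom from the potentiometric example in Table 2:

```python
import math

# Sketch of Satterthwaite's estimate, eqn. (7): 1/v_R is the weighted mean
# of the 1/v_i, the weights being the squared relative contributions
# Delta_i^2 / s_R^2 of each error source.
def satterthwaite(contributions, dofs):
    sR2 = sum(contributions)             # s_R^2 = sum of squared differences
    inv_vR = sum((c / sR2) ** 2 / v for c, v in zip(contributions, dofs))
    return math.floor(1.0 / inv_vR)      # round to the nearest lower integer

# Squared differences and degrees of freedom from Table 2:
# v_E = 6, v_E0 = 40, v_t = 1.
vR = satterthwaite([1.70e-7, 1.57e-7, 6.38e-9], [6, 40, 1])
print(vR)   # -> 20, so t(95%, two-tailed, 20 df) = 2.09
```

As a sanity check, two sources with equal contributions and 10 degrees of freedom each give v_R = 20, the sum of the individual degrees of freedom, as the formula requires in that symmetric case.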
The relative contributions are easily calculated in some extra rows at the bottom of the scheme from the contributions already present in the calculation scheme. Finally, v_R is used to find the limiting t (95%, two-tailed) for establishing the confidence interval of R.

Some Examples From Practice

(1) A potentiometric measurement is performed seven times for the determination of Ag+ in solution. During the experiment the room temperature changed from 22 to 23 °C. The value of E0 is known from 41 previous experiments. The results are given in Table 2 together with their standard deviations. The equations used are

[Ag+] = 10^[(E − E0)/factor]
factor = 58.1 + (t − 20)/5

From Table 2, it follows that the direct determination of [Ag+] from potential measurements is not precise. Improvement is hardly possible because the determination of E0 from solution is not precise. If E is measured in order to follow concentration changes, a corresponding scheme will show that the precision of changes is much better, as the influence of E0 is then eliminated. As a consequence, potentiometric titrations can be performed with high precision even if the potential jumps are not sharp.
We use the reversed form x(y) in the spreadsheet scheme: The experiment was undertaken to establish a possible difference between the two contents x1 and x2.Prospectively x1 and x2 will be compared routinely by applying eqn. (1). sxl and sx2 are known (Table 3) and so the standard deviation sAX = V(s,l2 + ~ ~ 2 ~ ) = 0.040. Compared with Ax = 0.062 from the significance test it then follows that there is no reason to reject the null hypothesis (Ho: p1 = p2). However, this conclusion is wrong. It is overlooked that although yl and y2 are not covariant, x1 and x2 are not mutually independent as they are found with the same calibration line. Hence application of the simple rules [eqn. (l)] is not permitted. It should be emphasized that all results following a specific calibration are related to the same calibration line and hence are covariant! The correct answer is found by calculating both xl, x2 and Ax = x2 - x1 directly from the independent quantities a , b , yl and y2 in one spreadsheet.The standard deviation of Ax now found is smaller, 0.026, and the correct conclusion from the significance test is that Ho has to be rejected. From the separate contributions of a , b , y l and y2 to the standard deviation sAX, it will be obvious why sAX is smaller: neither s, nor sb contributes substantially. Looking at a graphical representation, it will be clear that x1 and x2 are dependent. 
Both x1 and x2 will change with a and b, but a shift of the line does not change their difference Δx, and a slight rotation will change Δx only marginally. From this example, it is obvious that if all quantities R_i are properly related by their equations to the measured values at the top left, the spreadsheet method automatically takes into account the mathematical dependence between the variables.

Table 2 Potentiometric [Ag+] determination

                        s_E = 1.0 mV   s_E0 = 1.0 mV   s_t = 0.5 °C
E  = 663.0 mV           664.0          663.0           663.0
E0 = 779.5 mV           779.5          780.5           779.5
t  = 22.5 °C            22.5           22.5            23.0
factor = 58.6 mV
[Ag+]      0.010279     0.010691       0.009883        0.010359
Δ                       0.000412       −0.000396       0.000080
Δ²                      1.70 × 10⁻⁷    1.57 × 10⁻⁷     6.38 × 10⁻⁹
v                       6 df           40 df           1 df
Weight                  0.260          0.222           0.0004
Weight/v                0.0433         0.0055          0.0004

s_Ag² = 3.33 × 10⁻⁷, hence s_Ag = 5.78 × 10⁻⁴
1/v_Ag = 0.0492, v_Ag = 20, t(95%, 20 df) = 2.09
Confidence interval: [Ag+] = 0.0103 ± 0.0012 (95%, 20 df)

(3) In ISO 6976-1984(E),5 the calculation of the calorific value, density and relative density of natural gas is described. In the document, numerous pages are dedicated to the analytical procedure for calculating the standard deviation of the calorific value s_c. Following this procedure, it is difficult to trace the main sources of error in s_c. For this reason, an investigation was undertaken at Gasunie Research in which the numerical spreadsheet method was compared with the ISO 6976 procedure.6 About 125 error sources were involved. Some were forecast to be important, but turned out to be unimportant. Others were overlooked. The analytical approach took months to complete. The numerical method took a few days and gave much more insight. The error sources were grouped for convenience. The main source was the literature values of the physical constants. Second, the error in the determination of nitrogen was important, as the calorific value of this compound is zero. It is now proposed to introduce the spreadsheet technique in the ISO specifications.
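The dependence effect in example (2) can be reproduced numerically with the same perturbation scheme. In this sketch the slope b and the centring mean are assumptions on our part (b = 2.419 and x̄ = 5.0, chosen so that the tabulated x1, x2 and Δx are reproduced; neither value is legible in the source):

```python
# Numerical illustration of example (2); b and x_mean are assumed values.
a, b, y1, y2 = 12.63, 2.419, 13.10, 13.25
sds = [0.051, 0.014, 0.045, 0.045]     # s_a, s_b, s_y1, s_y2
x_mean = 5.0

def x_of(a, b, y):                     # reversed calibration line x(y)
    return x_mean + (y - a) / b

def propagate(R, values, sds):
    # the same perturbation scheme as the spreadsheet method
    R0 = R(*values)
    var = 0.0
    for i, sd in enumerate(sds):
        shifted = list(values)
        shifted[i] += sd
        var += (R(*shifted) - R0) ** 2
    return var ** 0.5

vals = [a, b, y1, y2]
s_x1 = propagate(lambda a, b, y1, y2: x_of(a, b, y1), vals, sds)
s_x2 = propagate(lambda a, b, y1, y2: x_of(a, b, y2), vals, sds)
naive = (s_x1 ** 2 + s_x2 ** 2) ** 0.5   # ~0.040: treats x1, x2 as independent

direct = propagate(lambda a, b, y1, y2: x_of(a, b, y2) - x_of(a, b, y1),
                   vals, sds)            # ~0.026: a cancels in the difference
print(round(naive, 3), round(direct, 3))
```

Because a appears in both x1 and x2, its perturbation cancels when Δx is propagated as one function, which is exactly why the direct standard deviation comes out smaller than the naive combination.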
Conclusions

The numerical spreadsheet technique has five main advantages. First, if all quantities to be calculated are properly related to the measured values in the top of the left-most column, there will be no accidental violation of the condition of mutual independence during the calculation phase. Second, compared with the analytical method there is no risk of making errors in calculating and differentiating. If R has been found correctly in the left-most column, all other manipulations consist of copying columns and adding standard deviations to the diagonal elements with standard spreadsheet commands. Generally, these manipulations are performed with a much smaller risk. Third, the standard manipulations of the technique are identical from case to case. Once familiar with these manipulations, the calculations can be performed in a fraction of the time required by the classical technique. Fourth, the spreadsheet table gives direct insight into the magnitude of each error contribution. Finally, estimating the degrees of freedom of the calculated result with Satterthwaite's equation simply involves adding an additional row to the spreadsheet table.

Table 3 Comparison of results after a single calibration (a = 12.63, y1 = 13.10, y2 = 13.25, x̄ = 5; s_a = 0.051, s_b = 0.014, s_y = 0.045)

            Base      a + s_a      b + s_b      y1 + s_y     y2 + s_y
x1          5.193     5.173        5.192        5.212        5.193
Δ                     −0.021       −0.001       0.019        0
Δ²                    4.3 × 10⁻⁴   1.3 × 10⁻⁶   3.5 × 10⁻⁴   0
                      s_x1² = 7.8 × 10⁻⁴, s_x1 = 0.028
x2          5.255     5.234        5.254        5.255        5.274
Δ                     −0.021       −0.001       0            0.019
Δ²                    4.3 × 10⁻⁴   1.3 × 10⁻⁶   0            3.5 × 10⁻⁴
                      s_x2² = 7.8 × 10⁻⁴, s_x2 = 0.028
x2 − x1     0.062     0.06200      0.061        0.043        0.080
Δ                     0.00000      −3.6 × 10⁻⁴  −0.019       0.019
Δ²                    0.00000      1.3 × 10⁻⁷   3.5 × 10⁻⁴   3.5 × 10⁻⁴
                      s_Δx² = 6.9 × 10⁻⁴, s_Δx = 0.026

References
1 Satterthwaite, F. E., Biom. Bull., 1946, 2, 110.
2 Burdick, R. K., and Graybill, F. A., Confidence Intervals on Variance Components, Marcel Dekker, New York, 1992, p. 30.
3 Guide to the Expression of Uncertainty in Measurement, ISO/TAG 4/WG 3, 1st edn., ISO, Geneva, 1993.
4 Satterthwaite, F. E., Psychometrika, 1941, 6, 309.
5 Natural Gas - Calculation of Calorific Value, Density and Relative Density, ISO 6976-1984(E), International Organization for Standardization, Geneva, 1984.
6 Kenter, R., Struis, M., and Smit, A. L. C., Process Control Qual., 1991, 1, 127.

Paper 3/06337B
Received October 25, 1993
Accepted February 2, 1994
ISSN:0003-2654
DOI:10.1039/AN9941902161
Publisher: RSC
Year: 1994
Data source: RSC
|
|