1. Front cover
Analytical Proceedings,
Volume 25,
Issue 12,
1988,
Pages 45-46
ISSN: 0144-557X
DOI: 10.1039/AP98825FX045
Publisher: RSC
Year: 1988
Data source: RSC
2. Contents pages
Analytical Proceedings,
Volume 25,
Issue 12,
1988,
Pages 47-48
Abstract:
ANPRDI 25(12) 377-404 (1988), December 1988. Analytical Proceedings: Proceedings of the Analytical Division of The Royal Society of Chemistry.

CONTENTS
377 The Automatic Methods Group: 21 Years of Progress
378 Esso Energy Award
378 Hilger Spectroscopy Prize
379 Honorary Publicity Secretary's Column
380 Analytical Viewpoint: 'Why Analysts Need Chemometrics' by M. Thompson
381 Summaries of Papers
381 Recent Advances in Atomic and Molecular Spectroscopy: 'Multi-component Luminescence Analysis of Some Nitrogen Heterocycles' by R. Jones, T. J. Coomber, J. P. McCormick, A. F. Fell and B. J. Clark (381); 'The Application of Computer-aided Tomography to Analytical Atomic Emission Spectrometry' by Steve Hill and Les Ebdon (383); 'High Dissolved Solids and ICP-MS: Are They Compatible?' by J. G. Williams and A. L. Gray (385)
389 Computer Interfacing for Instrumental Control and Data Acquisition: 'The Software Interface' by M. J. Adams
391 Training for Industry: 'Statistical Process Control - a Problem Solving Methodology' by R. M. Wynne; 'Training and Career Developments for Scientists' by Terry Gough
393 New Fields for Drug and Antigen Targeting: 'Targeting of Protein Drugs' by E. Tomlinson
396 Equipment News
399 Japanese Government Laboratories Open up to UK Scientists
400 Analytical Chemistry Trust Fund (Rules)
400 Publications Received
401 Conferences and Meetings
403 Courses
404 Analytical Division Diary
Typeset and printed by Black Bear Press Limited, Cambridge, England
ISSN: 0144-557X
DOI: 10.1039/AP98825BX047
Publisher: RSC
Year: 1988
Data source: RSC
3. The Automatic Methods Group: 21 years of progress
Analytical Proceedings,
Volume 25,
Issue 12,
1988,
Pages 377-378
D. C. M. Squirrell,
Abstract:
The Automatic Methods Group: 21 Years of Progress

At its last Annual General Meeting the Automatic Methods Group celebrated its first full 21 years of service to the Analytical Division. It was early in 1965 when, as a result of requests from some of those active in the field, the Group Liaison and Policy Committee considered a proposal for the formation of an Automatic Methods Group of the Division and canvassed the views of the membership. It is of interest to note the opening paragraph of this communication, which read as follows: "The proposal should be viewed against the general background of current development in analytical laboratories. The insistent demand for more and more analytical information in the whole field of science and the increasing cost of gathering this information and the scarcity of suitable personnel are potent factors leading to the adoption of streamlined, semi- or fully automatic analytical procedures whenever circumstances permit. These problems are common to many laboratories and it seems certain that present trends towards mechanisation and automation are likely to continue in analytical laboratories as elsewhere."

In consequence of the enthusiastic response, Council approved the formation of the Group, which held its inaugural meeting in November, 1965. Four meetings were held in the first year, at the end of which the membership had reached 239. Growth has been steady ever since and currently stands at about 750. From the start the Group Committees have been forward looking, seeking to present a balance of new developments, applications, user experience and forecasts for the future. To this end, the meeting on "Newer Systems in Automatic Analysis" (1968) was followed by a demonstration of the items of equipment described, and in 1970 a special meeting for members to demonstrate their own innovations was organised in London; some 20 systems were demonstrated.

Industrial developments and applications have formed a large part of the programmes, being particularly evident in joint meetings with the Division's Regions. Business areas covered include steel (1967, 1968), pharmaceutical - clinical - biochemical (1967, 1968, 1971, 1978, 1982, 1984), food (1980), automotive (1982), chemical (1968) and cosmetics (1984). Such general topics as effluent control (1968), solvent extraction systems (1968), corrosion (1979) and the determination of water (1983) were also included.

It was realised right from the start that sampling was of overriding importance in automatic analysis and, in consequence, meetings devoted entirely to this subject have been a regular feature in the Group programmes. Sampling techniques used in cast iron manufacture, the fertiliser industry and in mineral processing were described in 1966. Automated sampling and weighing again received attention in 1968, covering the biscuit industry, automated dispensing and sample handling for gas chromatographic analysis for process control.

[Photograph: 21st Anniversary of the Automatic Methods Group. Celebratory Meeting and Dinner held at The Brewery, Chiswell Street, London E.C.1 on January 28th, 1988. L-R: Mr. K. J. Leiper (Chairman of the Group), Mrs. Leiper, Professor D. Betteridge, Professor H. Wolff (Guest Speaker), Mrs. Wolff, Mrs. E. Betteridge, Mr. D. C. M. Squirrell (President of the Analytical Division) and Mrs. M. Squirrell]
A special meeting on "Sampling Problems and Solutions" followed in 1975 and this meeting also included a contribution on sampling for on-line analysis. A two day meeting in 1982 was devoted entirely to "Automated Sampling, Sample Preparation and Presentation" and this included updates on robotics, and organic, pharmaceutical, and inorganic applications of sampling systems. It should be noted that organising committees have all found it quite difficult to obtain presentations on sampling for on-line process analysis, mainly because of the understandable problems with confidentiality of the industrial processes themselves. In spite of this, useful meetings have been held dealing with the subject. Problems in the area were highlighted at a meeting in 1983, followed by a full day meeting in the next year on "On-line Analysis." "Safety and Automation" has received due attention and there have been regular meetings from 1970 onwards (1970, 1972, 1975, 1979, 1981 and 1983) concerned with the use of automated systems for environmental analysis.

Joint meetings with other Groups, Regions and Associations have led to a wide coverage of automated methods. For example, a meeting was held in 1966 on "Fully Automated Apparatus for the Determination of Carbon, Hydrogen and Nitrogen in Organic Compounds." Continuous flow analysers were discussed in 1967 and in the same year a meeting was held on "The Direct Measurement of Substances Separated on Paper and Thin-layer Chromatograms or by Electrophoresis. The Resolution of Analytical Curves." On-line electrochemical sensors were reviewed in 1970 with further regular reviews of sensors of all types arranged in subsequent years. Automated techniques in spectroscopy have been covered by several meetings devoted to this area of application. Early consideration was given to "The Place of Sophisticated Techniques in Consultancy Research Work" (1973) and automated turbidimetry received attention in the following year.

The prime motivation for those concerned with the development and application of automated methods has always been improvement of the quality and value of the information produced, whether for use in research, investigation, process control or product quality control. Analytical quality assurance has thus been duly reflected in several important meetings, including those of a general nature and those devoted to special areas such as pharmaceuticals, wine, foods, packaging and natural products. Three recent meetings were entitled "Quality Assurance in Automated Analytical Chemistry" (1986), "Quality Assurance - Information and its Impact on Quality" (1987) and "Analysis, Information and Quality" (1988).

The potential for computers in analysis was recognised by the Group early in 1967 by a meeting on "Computer Applications in Analytical Chemistry" and since then, at regular intervals, the Group Committees have endeavoured to keep members well informed of applications and likely developments.
Examples of the many meetings include "User Experience on Data Recovery Equipment in Automatic Analysis" (1969), "Enhancement of Chemical Measurement Techniques Using On-line Digital Computers" (1972), "Application of Computers, Particularly Microprocessors, to Automatic Analytical Instrumentation" (1975), "Robotics in Analytical Chemistry and Process Control" (1983) and "Expert Systems for the Analyst - The Application of Knowledge Based Computer Systems in the Chemical Laboratory" (1985). In addition to looking at "Computer Aided Chromatography," the overall position was reviewed in 1986 at a meeting under the general heading "Computers in Analytical Laboratories - Where Have We Got To and Where Are We Going?" These are questions which, at the present rapid rate of development, must be asked quite frequently.

Throughout its existence the Group has tried to emphasise the multi-disciplinary nature of modern analytical chemistry. As early as 1972 a meeting was organised on "Analytical Chemistry - The Need for Interdisciplinary Training." In 1975 this message was reinforced at a meeting on "Training for Instrumentation and Automated Equipment" and again in 1985 with "Training Requirements for New Technologies in Analytical Chemistry."

To look to the future has always been a regular feature of Automatic Methods Group programmes: "Future Trends in Automated Analysis," reviewed in 1973, "Market Research in Instrumentation" in 1976 and "Analytical Automation - The Way Ahead" in 1977 are examples. "High Speed Automated Analysis" was reviewed in 1980 and "Automated Analysis - The Expanding Choice" was discussed in 1981. Subjects have included the increasing number of combination techniques being used in analysis, with ideas for "The Automated Analytical Laboratory of the Future" being explored in 1984. By this time it was clear that the speed of developments and the greatly increased power of modern technology must, in many laboratories, mean radical changes in equipment, organisation and management philosophy. Thus, the important meeting on "The Integrated Approach to Laboratory Automation" was organised in 1985 and directed to the attention of senior management. The direction of development was again signalled in 1987 at a similar meeting under the title "Analytical Chemistry - A Time for Change?"

Thus, early in the third decade of its existence the Group enters another exciting phase as it seeks to share experience and provide a forum for discussion on such topical subjects as "Automated Sample Preparation - Developing an Effective Sample Interface," "COSHH - The Analytical Implications," "Process Analysis, Robotics and LIMS" and "The Dispersed Automated Laboratory," all of which are subjects for future meetings.

D. C. M. SQUIRRELL
ISSN: 0144-557X
DOI: 10.1039/AP9882500377
Publisher: RSC
Year: 1988
Data source: RSC
4. Esso Energy Award
Analytical Proceedings,
Volume 25,
Issue 12,
1988,
Page 378
Abstract:
Esso Energy Award

The Council of the Royal Society has made the Royal Society 1988 Esso Energy Award to Mr. A. T. S. Cunningham, Dr. B. J. Gliddon, Dr. A. R. Jones, Dr. C. J. Lawn, Mr. M. Sarjeant, Dr. R. T. Squires, Dr. P. J. Street and Mr. P. J. Jackson of the Central Electricity Generating Board for their work on improving the combustion of heavy fuel oil, which has led to improved use and mobilisation of scarce resources coupled with a decrease in air pollution from large furnaces.

The changing pattern of demand for oil products has required that refining practices be adjusted to maximise yields of premium products from crudes. There has been a concomitant deterioration in the quality of the "residual" or "heavy" fuel oil used in power generation. A major constraint on the burning of such heavy fuel is a restriction on particulate emissions. These emissions largely comprise carbon particles (coke), which form from the individual oil spray droplets and remain unburnt. Poorer quality oils have an increased propensity to form coke and can give rise to unacceptable emissions. One way of countering these increases is to make the fuel spray finer and hence improve burnout. Research has been aimed firstly at quantifying the effects of those oil properties that directly influence coke formation and combustion, and then at developing improved atomisers and water - oil emulsions to reduce droplet sizes.

Dr. C. J. Lawn delivered the Esso Energy Award Lecture at the Royal Society on November 17th, 1988. This lecture is given by the recipient of the Royal Society Esso Energy Award on the work for which the Award has been made. The Award, which was presented at the lecture, consists of a gold medal and a prize of £2000 donated by Esso UK plc, and is normally made annually for outstanding contributions to the advancement of science or engineering or technology leading to the more efficient mobilisation, use or conservation of energy resources.

Hilger Spectroscopy Prize

The Committee of the Atomic Spectroscopy Group of the Analytical Division of the Royal Society of Chemistry has awarded the 1988 Hilger Spectroscopy Prize to John G. Williams of the ICP-MS Laboratory, Department of Chemistry, University of Surrey, Guildford, Surrey.
ISSN: 0144-557X
DOI: 10.1039/AP9882500378
Publisher: RSC
Year: 1988
Data source: RSC
5. Honorary Publicity Secretary's column
Analytical Proceedings,
Volume 25,
Issue 12,
1988,
Page 379
J. D. Green,
Abstract:
Honorary Publicity Secretary's Column

The Analytical Division's Programme of Meetings for the 1988-1989 session is now underway. In fact, by the time these words are read, some of the arranged events will have already taken place. It is, however, still appropriate to remind those of you who have yet to study your programme to do so and, indeed, to commend to you the variety of meetings detailed within. Whilst some meetings are arranged directly by the Analytical Division through the Programmes Committee, others are arranged by local committees in the eight regional areas of the UK and by the twelve Subject Groups. As this year's programme proceeds the future is not being neglected, because the various organising bodies are looking forward to the 1989/1990 session and beyond, a continuous forward looking effort to provide the opportunities for contacts and to improve the communications within and from the analytical community.

Meetings provide a forum for the exchange of ideas and an opportunity to meet those with similar interests, responsibilities and problems. Often it is the unexpected presentation or discussion that proves to be most useful at such meetings. Ideas may often be developed in discussions with others who have similar problems or with those who may be using different techniques and procedures for related applications. With instrumental analysis playing such an important role in today's laboratories it can be valuable to exchange experiences regarding the capabilities and limitations of various systems; at the appropriate meeting such an exchange of views is possible.

A brief look through the Programme of Meetings booklet will show how broad the range of meetings is. Topics range from aspects of laboratory organisation, through sample handling, separation techniques, spectroscopic studies, data handling/processing, to applications for specific areas of interest. Venues as far apart as Aberdeen and Southampton or Belfast and Hull feature in the programme. While there is no intention to be exclusive, it is worth mentioning a number of meetings with a wide appeal, for it is at such meetings that interdisciplinary discussions can be most fruitful. The Research and Development Topics in Analytical Chemistry meeting is to be held next year in Dublin (March 21st-22nd). This is intended as a forum for "younger" research workers to present their results to a wide audience. Interest, however, is by no means restricted to the "younger" element. Whilst the principal participants are from universities and polytechnics, the meeting could provide an excellent occasion for those in industry to discuss the development and application of various analytical techniques to the solution of particular problems.

The Annual Chemical Congress is to be held in Hull, at the University, in 1989 and as usual there is a related AD Symposium. The theme is to be "Process Analysis and Information Management," chosen partly because of growing interest in both the academic and industrial sectors for such a topic. The highlight of the meeting will be the Theophilus Redwood Lecture to be given by Professor B. R. Kowalski, whose topic is, appropriately enough, "Process Analysis." Towards the end of the 1988/1989 session "SAC 89" is to take place in Cambridge (July 30th-August 5th). This international meeting covers most aspects of analytical science and attracts speakers from all over the world.
As the preparations proceed all AD members will receive details in the usual way, so that the relevance of the presentations to particular requirements and interests may be assessed. There is a meeting for you or one of your colleagues in this year's programme, and making the most of the opportunities starts with the programme details. It could be in your hands now!

J. D. Green
BP Chemicals Ltd., Research and Development Department, Hull Laboratory, Saltend, Hull HU12 8DS
ISSN: 0144-557X
DOI: 10.1039/AP9882500379
Publisher: RSC
Year: 1988
Data source: RSC
6. Analytical viewpoint. Why analysts need chemometrics
Analytical Proceedings,
Volume 25,
Issue 12,
1988,
Page 380
M. Thompson,
Abstract:
Analytical Viewpoint

The following is a member of a continuing series of articles providing either a personal view of part of one discipline in analytical chemistry (its present state, where it may be leading, etc.), or a philosophical look at a topic of relevance to chemists in general or analytical chemists in particular. These contributions need not have been the subject of papers at Analytical Division Meetings. Persons wishing to provide an article for publication in this series are invited to contact the editor of Analytical Proceedings, who will be pleased to receive manuscripts or to discuss outline ideas with prospective authors.

Why Analysts Need Chemometrics

M. Thompson
Department of Chemistry, Birkbeck College, Gordon House, 29 Gordon Square, London WC1H 0PP

Why do we analysts need chemometrics when we have managed perfectly well without it (without "them"?) for a hundred years or so? The brief answer is that times are changing, and we must change with them or perish. To see why this change involves chemometrics requires us to appreciate what the subject consists of. The immediate impression for the newcomer is that chemometrics is not "user-friendly," and that it consists mainly of forbidding material like matrix algebra, double summations and unfamiliar mathematical transformations. This is an unfortunate impression, which stems from the mathematician's regard for generality and rigour and is perpetuated by chemometricians. In fact, most chemometrics is quite straightforward in essence, and its intentions correspond to what analysts do as a matter of course by experience and intuition, namely "turning data into information." Chemometrics differs from intuition, however, in that it attempts to do this in the most effective possible manner, usually with the help of a computer.

Most aspects of chemometrics that are of interest to the analyst fall into one of four overlapping categories, namely statistics, design and optimisation, signal processing and pattern recognition. The first topic, statistics, is so fundamental to us that it constitutes the conceptual language of all analytical science. We can hardly speak of topics such as precision, bias, calibration, detection limit, validation, interference, reference materials, data quality control, etc., without explicitly invoking statistical concepts. The next aspect, experimental design, deals with the methods of getting the quality and quantity of information that we need with the minimum use of resources. This too is of self-evident importance. The third aspect, signal processing, is the business of converting the raw output of an instrument into the most accurate, most precise or the most cost-effective information regarding what we need to know, namely analyte concentrations. Finally, pattern recognition is the business of relating the analyte data to external and often less easily measurable features of reality. Examples of this are: identifying oil spills from their fluorescence spectra, African "killer bees" from their volatile hydrocarbons or the nature and severity of a disease from the concentrations of liver enzymes in the blood.

If analysts are doing these things already, why do we need to do more? The answer is, unfortunately, that we are not doing them very well, especially in the UK. This is exemplified in statistics and experimental design.
My impression is that about 20% of analytical papers are marred by inept or incorrect use of basic methods, or by a failure to apply them to suspect data. This deficiency gives the papers affected an amateurish flavour comparable with spelling mistakes, to the obvious detriment of the status of analysts. In contrast with this, we ought to be able to use basic statistics with the flourish of experts! Moreover, we should have at least a nodding acquaintance with newer techniques that are appropriate to analytical chemistry, methods such as non-linear fitting, robust methods, computer intensive methods and numerical methods.

The problem goes well beyond simple statistics. In the area of signal processing, we seem too content to leave things in the hands of instrument manufacturers. Often the quality of data from a piece of equipment can be improved, not by expensive improvements in engineering, but by a cheap bit of data manipulation in the ubiquitous microprocessor. An implementation of this approach will be an increasing trend over the next decade or two. But if we do not understand how the data processing is done, how can we stand by our results, for instance in a court of law? Should we allow the instrumentation engineers to monopolise this expertise, and risk our status being reduced to that of instrument minders?

Pattern recognition is far too often perceived as falling outside the realm of the analyst. Too often we allow someone else to choose the sample and take away and interpret the results. In the past we concentrated almost exclusively on the technical aspects of analysis, which are admittedly very difficult, to the exclusion of its applications. This preoccupation has, in my opinion, made a substantial contribution to the current low status of analysts. We should become much more involved in every aspect of the use of analytical data, including interpretation and pattern recognition where appropriate. An unwillingness to accept this will keep analysts chained up in the laboratory as mere producers of numbers, until the day comes (not too distant, now) when we can be replaced by intelligent machines.

So what is to be done? Firstly, those responsible for teaching analytical science must mend their ways. They must devote a bigger proportion of already crowded curricula to chemometrics, at both undergraduate and postgraduate levels. In-service post-experience courses must also be provided. Moreover, individuals must be willing to learn new skills or enhance old ones. In addition, those who write papers about chemometrics must forego the temptation to use erudite but obscure methods of expression if simple explanations will suffice, in an attempt to make the subject more approachable. The Analytical Division recognised the need for action by instituting the Chemometrics Group in 1986. The group organises lively meetings with a very practical slant, often in collaboration with other Groups and Regions, which form an extremely useful introduction to the subject.
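Thompson's remark that data quality can often be improved by "a cheap bit of data manipulation in the ubiquitous microprocessor" is easy to illustrate. The sketch below is an editorial illustration, not anything from the article: it smooths a noisy simulated spectrum with a Savitzky-Golay filter, a standard chemometric signal-processing step. The band shape and noise level are invented for the demonstration.

# Illustrative sketch (not from the article): improving signal-to-noise by
# cheap data manipulation rather than better instrument engineering.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
wavelength = np.linspace(300, 500, 401)                        # nm, hypothetical scan
true_signal = np.exp(-0.5 * ((wavelength - 400) / 15) ** 2)    # single Gaussian band
noisy = true_signal + rng.normal(0.0, 0.05, wavelength.size)   # simulated detector noise

# Fit a quadratic in a sliding 21-point window (Savitzky-Golay smoothing).
smoothed = savgol_filter(noisy, window_length=21, polyorder=2)

print(f"noise RMS before: {np.std(noisy - true_signal):.3f}")
print(f"noise RMS after:  {np.std(smoothed - true_signal):.3f}")

The window length and polynomial order trade noise rejection against band distortion, which is exactly the kind of choice the article argues analysts should understand rather than delegate to the instrument.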
ISSN: 0144-557X
DOI: 10.1039/AP9882500380
Publisher: RSC
Year: 1988
Data source: RSC
7. Recent advances in atomic and molecular spectroscopy
Analytical Proceedings,
Volume 25,
Issue 12,
1988,
Pages 381-388
R. Jones,
Abstract:
Recent Advances in Atomic and Molecular Spectroscopy

The following are summaries of three of the papers presented at a Joint Meeting of the North East Region and Atomic Spectroscopy and Molecular Spectroscopy Groups held on March 29th-30th, 1988, in the University of Hull.

Multi-component Luminescence Analysis of Some Nitrogen Heterocycles

R. Jones and T. J. Coomber
Analytical Development Laboratories, The Wellcome Foundation Limited, Dartford, Kent DA1 5AH
J. P. McCormick, A. F. Fell and B. J. Clark
University of Bradford, Bradford, Yorkshire BD7 1DP

Luminescence is now well established as a sensitive and selective technique for trace analysis of environmental samples.1 In pharmaceutical analysis, however, its frequency of use is reduced. This may be a result of the current ascendancy of high-performance liquid chromatography (HPLC) and the broad band width, featureless luminescence spectra of drug components. In addition, for formulated pharmaceutical products the sensitivity is not usually essential and HPLC with ultraviolet detection will often be sufficient for most analytical problems.

The major advantage of luminescence methods over chromatographic methods is, however, the reduced analysis time. A typical example is the determination of two or more closely related compounds in a pharmaceutical formulation. These could consist of the parent drug and its major degradation product, which would normally be determined in a stability study. A suitable model two-component system is the antiviral agent acyclovir [9-(2-hydroxyethoxymethyl)guanine] and its major degradation product guanine, which may form by acid hydrolysis (Fig. 1). Considerable overlap occurs in the emission spectra of the components (Fig. 2). This problem is regularly found in the luminescence of drug compounds and precludes the use of single-wavelength measurement. However, with the rapid development of the microcomputer and relevant software packages it is now possible to apply a number of deconvolution algorithms to the overlapping system to determine the individual components in a mixture.

[Fig. 1. Acid hydrolysis of acyclovir to guanine]

Experimental

Fluorescence spectra were recorded by using a Perkin-Elmer LS-5 spectrofluorimeter. Solutions were prepared in 0.1 M sulphuric acid at concentrations 0-1.0 µg ml-1 for acyclovir and 0-0.83 µg ml-1 for guanine. Derivative and least squares calculations were carried out on a Perkin-Elmer 3600 data station using the QUEST software. In addition a 7700 data station was loaned by Perkin-Elmer Limited, together with the CIRCOM software for target factor analysis.

[Fig. 2. Fluorescence emission spectra of acyclovir 0.184 µg ml-1, guanine 0.172 µg ml-1, and acyclovir 0.184 µg ml-1 + guanine 0.172 µg ml-1. Solvent, 0.1 M sulphuric acid; excitation wavelength, 270 nm]

Results and Discussion

Although both acyclovir and guanine exhibit measurable natural fluorescence in acid, the emission spectra overlap considerably over the full wavelength range.

Data Presentation Methods

The overlap between spectra can best be examined by collecting a total luminescence spectrum in the form of an emission - excitation matrix (EEM). The EEMs were collected by scanning the emission spectrum at increments of excitation wavelength and displayed both as isometric and contour plots. Although there are slight differences in the emission spectra (Fig. 2), the EEMs for both compounds overlap over the full range of wavelengths.
Instrumental Methods

These are essentially data processing methods, which are usually applied during acquisition of the spectra. Two methods were investigated, synchronous excitation and derivative spectroscopy. Synchronous excitation spectra are obtained instrumentally by scanning simultaneously the excitation and emission monochromators with a constant wavelength separation between them (Δλ) to give a 45° trajectory through the EEM. Synchronous spectra are optimised by varying the Δλ. The effect is to narrow the emission band and possibly improve the resolution of overlapping spectra.2 On application of the method to acyclovir and guanine in a 1 + 1 m/m mixture it was not possible to resolve the overlapping spectra sufficiently to allow quantification. In a dosage form the level of guanine is much less and the possibility of demonstrating the presence of guanine is remote.

In derivative spectroscopy, spectra are produced by differentiating the zero order spectrum with respect to wavelength. The even-order derivatives are the most useful and are characterised by a narrowing of the spectral band and an enhancement of narrow features against broad spectral bands.3 The technique can be applied to excitation, emission or synchronous excitation spectra. In this work the best results were obtained by the second derivative (d2/dλ2), which provides the most satisfactory compromise between enhanced resolution and increased noise, with the optimum discrimination resulting from a combination of the synchronous excitation spectra and second derivative (Fig. 3). However, the overall effect of this combination was to produce a further improvement in discrimination without fully resolving the overlapping spectra.

[Fig. 3. Second derivative synchronous excitation spectra of acyclovir 1.01 µg ml-1 and guanine 0.83 µg ml-1. Solvent, 0.1 M sulphuric acid; excitation wavelength separation, 120 nm]

In this instance a commercial package was utilised (QUEST from Perkin-Elmer Limited) which can be applied to any spectral format. In carrying out the method on the samples of interest the variables which were considered in calculating the best fit included: spectral wavelength range; number of standard solutions used; concentration of the standards (which can contain both analytes). These must be chosen depending on the expected concentration and required precision for each analyte. The results of data manipulation from each experiment were transferred to a concentration grid with the positions of the standards marked and those results within 5% of the expected concentration highlighted to map out an area of acceptable responses (Fig. 4). From this the optimum spectral format was selected by comparing the number of results within the acceptance criterion. Three mixed standards plus a solvent blank gave the best results in most cases.

[Fig. 4. A concentration grid for results estimated within 5% error limits, marking the standards used and the samples estimated within the 5% error limits]
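The two instrumental methods lend themselves to a short numerical sketch. The code below is an editorial illustration rather than anything from the paper (which used Perkin-Elmer data stations): it samples a synthetic EEM along the 45° line em = ex + Δλ to give a synchronous spectrum, then takes its second derivative with a Savitzky-Golay filter. All band positions and widths are invented for the demonstration.

# Illustrative sketch (not from the paper): synchronous excitation scan of a
# synthetic EEM, followed by a second derivative to narrow the band.
import numpy as np
from scipy.signal import savgol_filter

ex = np.arange(250, 331)          # excitation wavelengths / nm (hypothetical)
em = np.arange(300, 481)          # emission wavelengths / nm (hypothetical)
EX, EM = np.meshgrid(ex, em, indexing="ij")
eem = np.exp(-((EX - 270) / 12) ** 2 - ((EM - 380) / 25) ** 2)  # invented band

def synchronous_spectrum(eem, ex, em, delta):
    """Sample the EEM along the 45-degree trajectory em = ex + delta."""
    spec = []
    for i, x in enumerate(ex):
        j = np.argmin(np.abs(em - (x + delta)))   # nearest emission point
        spec.append(eem[i, j])
    return np.asarray(spec)

sync = synchronous_spectrum(eem, ex, em, delta=120.0)  # Δλ = 120 nm, as in Fig. 3
# Second derivative (d2/dλ2): enhances narrow features at the cost of noise.
second_deriv = savgol_filter(sync, window_length=11, polyorder=3, deriv=2)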
Computer-aided Methods

Digital methods were applied to the spectra stored by the microcomputer. Two methods of approach were examined: least squares and factor analysis. Both methods depend on mathematically fitting standard spectra to a spectrum of a suitable combination of components. In order to operate these methods successfully a broad understanding of the principles is required in order to select the most suitable set of conditions for valid results.

Least squares

This method fits a set of standard spectra to a test spectrum in order to minimise the sum of the squared residuals.4 A residual is the difference between the fitted standard spectra and the test spectrum at each wavelength. These residuals are then squared and summed, with the contribution of each standard spectrum adjusted until the sum is minimised. The calculations use standard matrix algebra and can easily be carried out on a microcomputer. The underlying assumption with the method is that the test mixture has a spectrum which can be adequately represented as a sum of the standard spectra. The optimum spectral formats are shown in Table 1, where acyclovir and guanine require differing conditions. In general, the results are acceptable for determination of the major component acyclovir but poor for the minor component guanine at the trace level.

Table 1. Ranking of spectral formats for the least squares method
Acyclovir: 1. Second derivative - synchronous excitation; 2. Second derivative - synchronous excitation; 3. Excitation spectrum; 4. Emission spectrum
Guanine: 1. Synchronous excitation; 2. Excitation or emission spectrum; 3. Synchronous excitation

Factor analysis

This method has previously been used to analyse data from a number of analytical techniques, including luminescence.5 It is based on principal component analysis (abstract factor analysis) of a calibration set of samples. The first principal component isolates the majority of the variation in the calibration set, the second, most of the residual variation, and so on. For an ideal two-component mixture there should be two significant principal components, the others representing noise in the data.

Factor analysis is a broad term for a number of techniques and includes rank annihilation factor analysis, which has previously been used in fluorescence assays by Warner and co-workers.6 The disadvantage is the need for a full EEM for the test mixture and standards, which nullifies the time advantage of spectrometric methods unless a rapid scanning instrument is used. As an alternative to this approach target transformation factor analysis7 can be used. A commercial system (the CIRCOM principal components regression model from Perkin-Elmer Limited) was employed. Again any spectral format can be used, although it is usual to calibrate with a larger number of standards than used in the least-squares method. In the CIRCOM program a non-ideal response is compensated for by use of more than two principal components, with the number selected by empirical indicator functions.8
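Both computer-aided methods can be sketched compactly. The following is an editorial illustration, not the QUEST or CIRCOM code (both proprietary Perkin-Elmer packages): ordinary least squares solves directly for the contribution of each standard spectrum, while a schematic principal components regression in the spirit of CIRCOM regresses concentrations on the leading principal-component scores of a calibration set. The spectra and concentrations are synthetic stand-ins.

# Illustrative sketch of the two computer-aided methods described above;
# not the QUEST/CIRCOM implementations. Spectra are synthetic Gaussians.
import numpy as np

wl = np.linspace(300, 500, 201)                        # wavelength grid / nm
std_a = np.exp(-0.5 * ((wl - 380) / 20) ** 2)          # standard spectrum A
std_g = np.exp(-0.5 * ((wl - 400) / 22) ** 2)          # overlapping standard B

rng = np.random.default_rng(1)
test = 0.8 * std_a + 0.2 * std_g + rng.normal(0, 0.01, wl.size)  # invented mixture

# --- Least squares: minimise the sum of squared residuals over contributions.
S = np.column_stack([std_a, std_g])                    # standards matrix
coeffs, *_ = np.linalg.lstsq(S, test, rcond=None)
print("least-squares contributions:", coeffs)          # approx. [0.8, 0.2]

# --- Principal components regression (CIRCOM-style, schematically):
# calibration spectra with known concentrations -> PCA scores -> regression.
conc = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.8, 0.2], [0.2, 0.8]])
calib = conc @ np.vstack([std_a, std_g])
calib += rng.normal(0, 0.01, calib.shape)

mean = calib.mean(axis=0)
U, s, Vt = np.linalg.svd(calib - mean, full_matrices=False)
k = 2                                                  # significant components
scores = (calib - mean) @ Vt[:k].T
B, *_ = np.linalg.lstsq(scores, conc, rcond=None)      # scores -> concentrations

test_scores = (test - mean) @ Vt[:k].T
print("PCR prediction:", test_scores @ B)              # approx. [0.8, 0.2]

In practice more than two components may be retained to absorb non-ideal response, which is the compensation the CIRCOM program makes via its empirical indicator functions.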
The initial work on the acyclovir - guanine spectra with CIRCOM was carried out by using synchronous excitation spectra, which were considered to give the largest discrimination between the overlapping components. Using this spectral format four principal components were required, which indicated some interaction between acyclovir and guanine. From the results obtained the predicted concentrations for acyclovir were all within 5% of the expected values. There was also a considerable improvement in the results for guanine; however, determination of this component at trace levels is still not completely reproducible.

Conclusions

This experimental programme illustrates the wide range of methods available to the analyst for resolving overlapping luminescence spectra. The two component acyclovir - guanine mixture chosen as an example presents a particularly difficult problem because of the marked similarities between the spectra. Examination of the EEM and of data from the instrumental methods of synchronous or derivative spectroscopy showed insufficient discrimination between the spectra to allow quantification. The matrix mathematical methods of least squares or factor analysis applied to the data indicated that the synchronous excitation spectrum was the most suitable for the determination of acyclovir, whilst guanine could be best measured from the second derivative synchronous excitation spectrum. Using least squares acyclovir could be measured with some certainty but the method was less successful for guanine. Target transformation factor analysis gave accurate values for acyclovir and guanine at higher concentrations. At the low concentrations of guanine expected in a degraded pharmaceutical preparation the results were less accurate. Despite these limitations the model has allowed an investigation of these techniques for broader applications in drug analysis as a rapid alternative to chromatography.

References
1. Futoma, D. J., Smith, S. R., and Tanaka, J., Crit. Rev. Anal. Chem., 1982, 13, 117.
2. Clark, B. J., and Fell, A. F., J. Pharm. Pharmacol., 1983, 35, 22P.
3. Miller, J. N., Ahmad, T. A., and Fell, A. F., Anal. Proc., 1982, 19, 37.
4. Sternberg, J. C., Stillo, H. S., and Schwendeman, R. H., Anal. Chem., 1960, 32, 84.
5. Malinowski, E. R., and Howery, D. G., "Factor Analysis in Chemistry," Wiley Interscience, New York, 1980.
6. Ho, C. N., Christian, G. D., Davidson, E. R., and Callis, J. B., Anal. Chem., 1978, 50, 1108.
7. Fredericks, P. M., Lee, J. B., Osborn, P. R., and Swinkels, D. A. J., Appl. Spectrosc., 1985, 39, 303.
8. Fredericks, P. M., Lee, J. B., Osborn, P. R., and Swinkels, D. A. J., Appl. Spectrosc., 1985, 39, 311.
The Application of Computer-aided Tomography to Analytical Atomic Emission Spectrometry

Steve Hill and Les Ebdon
Department of Environmental Sciences, Plymouth Polytechnic, Drake Circus, Plymouth PL4 8AA

Commonly used sources in analytical atomic emission spectrometry, such as flames, the inductively coupled plasma (ICP) and the direct current plasma (DCP), all exhibit spatial heterogeneity. In order to understand and ultimately control the processes involved in converting the original sample into atomic species, and to locate the optimum viewing position for each of these species to make measurements, we require some form of spatial map. Various approaches have been used to obtain this information using optical probes. Photomultipliers may be used but the measurement process is very slow. Olesik and Hieftje1 have quoted the example of producing a 500 x 50 array with a 1 s per point integration taking 6.9 h.

The traditional way of capturing a two-dimensional image of the source at a single moment in time is on a photographic medium. Various techniques have now become available, with the introduction of powerful but inexpensive microcomputers, which allow us to re-evaluate the use of photographic media using digitisation and image enhancement techniques. Fig. 1 shows the DCP viewed in this way, with and without sample introduction. These images were produced from photographic negatives by interfacing a scanning microdensitometer to a BBC microcomputer.2 This produces a matrix where the row and column indices identify a point in the image, and the corresponding matrix element a value which identifies the grey level at that point. Although the human eye can distinguish between 20 or so monochrome grey levels, it is also possible to use a special case of pseudo-colour processing termed density slicing to bring out hidden detail in low contrast areas of the image. However, although such image enhancement techniques are very useful, the acquisition of quantitative data is complicated by the non-linear response from photographic emulsions, and the fact that since we are viewing from a single direction, the data obtained is the sum of the signal produced in that direction.

[Fig. 1. Digital images of a DCP produced by interfacing a scanning microdensitometer with a microcomputer: (a), without sample; (b), with sample introduction]

Today, new optoelectronic devices allow the collection of data in a number of different ways. One such device, the photodiode array, may be considered as a sort of electronic photographic plate, which allows us to obtain quickly a set of data points corresponding to a one-dimensional slice of the source. Basically, a self scanning photodiode array is a large scale integrated circuit fabricated on a single monolithic crystal. It contains a row of sensors, typically on 25 µm centres, along with scanning circuitry for sequential readout. The main development in recent years, however, with regard to the use of these devices for atomic spectrometry, is the introduction of arrays with a spectral response down towards 200 nm (Fig. 2). The use of such arrays as detectors for ICP spectra has been reported3 and various spectroscopic applications evaluated.4 However, for spatial studies,5,6 an Abel transform is required to convert the lateral projection data into their radially resolved equivalent. Unfortunately, the inverse Abel transform has several limitations, notably the requirement for symmetry and a marked sensitivity to noise.

[Fig. 2. Spectral response of photodiode array]
In order to overcome these problems the technique of computerised tomography can be used. This approach requires no special symmetry and is much more tolerant to noise. The basic principle behind the technique is to use multi-angular integrated measurements (known as projections) in order to produce a two-dimensional map. The tomogram so produced is then literally a picture of a slice through the source at a given orientation. The theory of such image reconstruction can be traced to Radon in 1917.7 Radon's initial formula allowed an estimation of the spatial features of an object to be calculated, if all the projection angles were known. It is, of course, not always possible to collect projections from all angles and so various algorithms have developed based on line-integral formulae (now known as the Radon transformation) to allow an estimation of the object under investigation from a discrete number of projections. Perhaps the simplest of these reconstruction methods is that of back projection, the theory of which has been discussed in detail by Brooks and Di Chiro.8 Here the projected profiles are collected at a number of positions around the source and the reconstruction obtained by reversing the projection process. The density at each point in the reconstruction is estimated by adding the sums of the rays that pass through that point. The more projections used, the better the reconstruction. However, because the background signal is averaged along its entire optical path, artefacts are observed, resulting in a blurring of the image. For this reason, a filter is used in a technique called filtered back projection. Ideally, the reconstruction should then be exact because the modification process performed on the profiles exactly counterbalances the blurring artefacts of back projection.

The system developed for this work is shown schematically in Fig. 3. The source, plasma or flame, is mounted on a turntable rotated by a stepper motor. An image of the source is focused on to the entrance slit of a Czerny - Turner monochromator with the photodiode array mounted at the exit slit. All data acquisition is controlled via an intelligent interface, which also controls movement of the stepper motor. The complete system is controlled by software comprised of a number of subroutines allowing all experimental parameters, such as the number of diodes to be selected and the number of projection angles, to be pre-set. Two filtered back projection algorithms are used, the first based on the RECLBL Library package9 and the second on the algorithm given by Brooks and Di Chiro.8 Output can either be directly to a printer or displayed on a monitor following visual enhancement by using a colour look-up table to assign a "false" colour to each of the grey levels in the image.

[Fig. 3. Schematic diagram of system used for tomography]
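Filtered back projection as described above is compact enough to sketch. The code below is an illustrative reconstruction on synthetic data, not the authors' BBC microcomputer or RECLBL software: it forms parallel projections of a small test image, applies a ramp filter to each profile in Fourier space, and smears the filtered profiles back across the grid.

# Minimal filtered back projection on synthetic data; an illustration of the
# method described above, not the instrument software used in the paper.
import numpy as np
from scipy.ndimage import rotate

# Synthetic "flame cross-section": two bright discs on a 64 x 64 grid.
n = 64
y, x = np.mgrid[:n, :n]
image = ((x - 22) ** 2 + (y - 32) ** 2 < 36).astype(float) \
      + ((x - 42) ** 2 + (y - 32) ** 2 < 36).astype(float)

angles = np.arange(0, 180, 3)                        # projection angles / degrees

# Forward step: each projection is the line integral (column sum) of the
# rotated image -- what the detector records at one orientation.
sinogram = np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# Ramp filter applied to each profile in Fourier space; this is the
# "modification process" that counterbalances back-projection blurring.
freqs = np.fft.fftfreq(n)
filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))

# Back projection: smear each filtered profile across the grid at its angle.
recon = np.zeros((n, n))
for a, profile in zip(angles, filtered):
    recon += rotate(np.tile(profile, (n, 1)), -a, reshape=False, order=1)
recon *= np.pi / (2 * len(angles))                   # conventional FBP scaling

print("brightest reconstructed point:", np.unravel_index(recon.argmax(), recon.shape))

With only a few projection angles the discs blur into each other; increasing the number of angles sharpens them, which is the "more projections, better reconstruction" behaviour noted in the text. Library routines such as scikit-image's radon/iradon provide the same operations off the shelf.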
In order to demonstrate the results obtained with this system a laminar flow air - acetylene flame can be used. The burner head consists of seven holes aligned as in Fig. 4. The burner is mounted on the turntable and various cross sections of the flame reconstructed tomographically. From Fig. 5(a) it can be seen that at a height of 1 cm above the burner each of the jets from the holes in the burner head can be clearly distinguished. However, higher in the flame, Fig. 5(b), the individual jets condense to form a structured laminar flow. Once again it is also possible to aid visual enhancement by employing a colour look-up table.

[Fig. 4. Configuration of holes in the laminar flow burner used for the evaluation of the tomographic system]
[Fig. 5. Tomograms produced of cross sections of the laminar flow flame: (a), 1 cm above the burner; (b), 5 cm above the burner]

The reconstructions obtained to date, both from computerised images of the ICP and DCP, and real data from flames, demonstrate the potential of computerised tomography for spatial studies of sources employed in analytical emission spectrometry. The system is presently being applied to the ICP in order to aid a better understanding of the processes involved during sample introduction, which will directly aid our on-going work on solid sample introduction as slurries. Although a number of areas need further investigation, such as structure instability and possible aliasing of features, the technique promises much information which is difficult, or impossible, to obtain by using other techniques.

The authors would like to thank Professor D. Betteridge (BP Research Centre, Sunbury) for stimulating our initial interest in tomography, and Dr. R. Belchamber and Mr. D. Roberts (BP Research Centre, Sunbury) and Mr. H. Drummond (Imperial College, London) for help in developing the software used in this work.

References
1. Olesik, J. W., and Hieftje, G. M., Anal. Chem., 1985, 57, 2049.
2. Glynn, P. J., and Larbalestier, G., J. Microsc., 1985, 138, 69.
3. McGeorge, S. W., and Salin, E. D., Spectrochim. Acta, Part B, 1985, 40, 435.
4. Talmi, Y., Editor, ACS Symposium Series 236, "Multichannel Image Detectors," Volume 2, American Chemical Society, Washington, DC, 1982.
5. Blades, M. W., Appl. Spectrosc., 1983, 37, 371.
6. Blades, M. W., Caughlin, B. L., Walker, Z. H., and Burton, L. L., Prog. Anal. Spectrosc., 1987, 10, 57.
7. Radon, J., Ber. Verh. Saechs. Akad. Wiss. Lpz., Math. Phys., 1917, 69, 262.
8. Brooks, R. A., and Di Chiro, G., Radiology, 1975, 117, 561.
9. Huesman, R. H., Gullberg, G. T., Greenberg, W. L., and Budinger, T. F., "Donner Algorithms for Reconstruction Tomography," RECLBL Library, Lawrence Berkeley Laboratory, California, Publication 214, 1977.

High Dissolved Solids and ICP-MS: Are They Compatible?

J. G. Williams and A. L. Gray
ICP-MS Unit, Department of Chemistry, University of Surrey, Guildford, Surrey GU2 5XH

Inductively coupled plasma mass spectrometry (ICP-MS) has attracted widespread interest with its capability of very low limits of detection and its spectral simplicity. It does, however, suffer from some problems that are unique to the technique itself, resulting from the method of ion extraction from an external ICP source. Samples that contain high levels of dissolved or suspended solids can cause blockage of the sampling cone orifice of the interface. This problem is particularly acute with geological matrices, where some elements have to be determined at ultra-trace levels in a matrix of, for example, aluminium, iron and manganese at several thousand µg ml-1. Sample dilution can be used to reduce the concentration of the matrix, but dilution of the trace elements to levels below the limits of detection may then occur. The nebulisation of undiluted samples inevitably leads to rapid cone blockage. Samples can be pre-treated to remove the matrix material; however, this is often time consuming and can lead to sample contamination. In the work reported here, the extent of cone blockage is assessed, together with other matrix related problems, under a variety of instrument operating conditions, using both artificial matrices and a real geological sample.

Experimental

Instrumentation

The instrument used in this work was a PlasmaQuad (Version 1) (VG Elemental, Winsford, Cheshire). Sample introduction was carried out by using a de Galan V-groove nebuliser (Van Der Plas Products, Voorschoten, The Netherlands), operated in conjunction with a single-pass water cooled spray chamber maintained at 13 °C with tap water. The demountable torch was of the Fassel type, with injector tubes of 1.5 or 3.0 mm diameter as required.

Table 1. PlasmaQuad operating parameters
Plasma power: 1300 W
Reflected power: <10 W
Coolant argon flow: 14 l min-1
Auxiliary flow: 0
Carrier flow: 0.73 l min-1
Pumped sample uptake rate: 1.1 ml min-1
Sampling cone diameter: 1.0 mm
Skimmer cone diameter: 0.7 mm
Load coil - sampling cone spacing: 10 mm
Reagents and Sample

The synthetic matrix was composed of a 1000 µg ml-1 aluminium solution with 1 µg ml-1 of Bi, Co, Mg, Pr and Y. In addition, a solution containing only the trace elements at 1 µg ml-1 was prepared. All standards were of Specpure grade (Johnson Matthey Chemicals, Royston, Hertfordshire). The geological matrix was an igneous silicate rock, a diorite (GC-191), from the Glen Doll region of the Grampian Mountains, and was digested using a hydrofluoric - perchloric acid mixture.1

Procedure

The instrument was operated under the conditions shown in Table 1. Ion lenses were re-optimised for each new experiment unless otherwise stated. Each of the experiments described below was continued until the signal response from any one of the analyte elements fell to below half of the starting signal response. When the experiment was complete, the integrals for the analyte elements for the complete experiment were plotted against time. Between each individual experiment the sampling and skimmer cones were cleaned and residue matrix material removed. In some experiments a solution containing only the analyte elements was introduced in addition to the solution containing the Al matrix. The results of this are dealt with where relevant.

Table 2. Matrix components of diorite GC-191, 1 g in 100 ml final dilution
Element: % m/m (as oxide), µg ml-1 (as element)
SiO2: 62.75, 2934*
TiO2: 0.85, 51
Al2O3: 13.36, 707
FeO (total Fe): 6.20, 487
MgO: 5.32, 321
MnO: 0.10, 7
CaO: 5.75, 411
Na2O: 2.08, 154
K2O: 2.26, 188
P2O5: 0.22, 10
* Si lost in digestion as SiF4.
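The "as element" column of Table 2 follows from the oxide percentages by simple stoichiometry: 1 g of rock in 100 ml gives 10 000 µg ml-1 of dissolved material, and each oxide concentration is scaled by the element's mass fraction in the oxide. A brief check of the arithmetic, written for this summary rather than taken from the paper:

# Reproduce Table 2's oxide -> element conversion (illustrative arithmetic).
# 1 g of rock in 100 ml  =>  10 000 ug/ml total dissolved rock.
TOTAL_UG_PER_ML = 10_000.0

# (oxide % m/m, element mass fraction in the oxide) -- atomic masses rounded.
oxides = {
    "CaO":  (5.75, 40.08 / 56.08),
    "Na2O": (2.08, 2 * 22.99 / 61.98),
    "K2O":  (2.26, 2 * 39.10 / 94.20),
    "MgO":  (5.32, 24.31 / 40.31),
}

for oxide, (percent, fraction) in oxides.items():
    ug_ml = TOTAL_UG_PER_ML * percent / 100 * fraction
    print(f"{oxide}: {ug_ml:.0f} ug/ml as element")
# CaO -> ~411, Na2O -> ~154, K2O -> ~188, MgO -> ~321, matching Table 2.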
At this point the signal responses for all elements returned to their original level, or in some instances slightly higher. From the points marked “b” to “c” the rate of signal loss was far greater than in the period of signal loss from time zero to the point marked “a”.At the point marked “c”, which approximates to the signal response that would have occurred if water had not been aspirated for a period of time, signal response losses for all elements, except Bi, are far less rapid, although Bi continues to decline rapidly. 360 5 320 2 280 7 240 v) U v) 3 w - v) v) 5 ’200 $ 120 8 F 160 1 C 80 40 a CT 0 100 200 300 400 Ti me/m i n Fig. 3. Aluminium, 1000 p.p.m., run as a sample using a 3.0-mm injector The return of the elemental signal response to its original level at the point marked “b” is probably the result of the partial removal of the blockage material while water was being aspirated in place of the matrix solution.Subsequent (rapid) cone blockage was most likely a result of the remainingANALYTICAL PROCEEDINGS, DECEMBER 1988, VOL 25 387 deposited material in the sampling orifice assisting the blocking process. Fig. 2 shows the results of experiment 2. Initially, in this experiment, the trace element only solution was aspirated; the corresponding signal response is shown at point “a”. This was directly followed by the introduction of the A1 matrix solution, shown at the point marked “b”. The difference in signal between the two analyses, of about 30% for all elements, was the result of signal suppression caused by the A1 matrix, not cone blockage. Signal loss due to cone blockage, which begins at the point marked “b”, was rapid and within 40 min a plateau for signal response had been reached for all elements, with signal response losses corresponding to between 40 and 50%.At the point marked “c”, at the end of the experiment, the matrix solution was replaced with the non-matrix solution, which resulted in an increase in signal of about 30% for all elements. The percentage signal recovery, caused by the lack of the A1 suppressant, corresponded to the initial percentage loss of signal due to suppression at the start of the experiment, despite actual signal response reductions caused by cone blockage between the two analyses. At the point marked “d” the ion lens system was retuned for maximum elemental signal, which is shown as the final point on Fig. 2. In most instances the final elemental signal responses are the same or very similar to those found at the start of the experiment, shown at point “a”.Signal loss is not, therefore, simply a result of a reduction in the number of ions entering the ICP-MS system, but more likely a modification of the extraction process as a result of the reduction in the orifice diameter. - 400 , A Pr U 1 I 1 0 40 80 120 160 200 240 Timeimi n Fig. 4. injector Aluminium, 1000 p.p.m., run continuously using a 3.0-mm The results of the third experiment are shown Fig. 3. The point marked “a” is the response from the solution that contains no Al, only trace elements, and point “b” the response with the solution containing the A1 matrix. The level of signal suppression is similar to that obtained with the 1.5 mm injector tube, about 30%.The pattern of signal loss for all elements as a result of cone blockage using the 3.0 mm injector, is not, however, the same as that obtained using the 1.5 mm injector. 
Rapid signal loss only occurs for about the first 30 min, up to the point marked "c", where a plateau begins for most elements, with no further signal loss and some elements actually exhibiting a small increase in signal response. This plateau persists up to the point marked "d", about 250 min, after which signal loss begins to occur again and, after about 400 min, over half the initial signal has been lost, up to point "e". At this point the Al matrix solution was replaced with the trace element only solution, resulting in an increase in elemental response (no suppression from Al) of about the same percentage as the initial signal loss owing to suppression. This behaviour with respect to analyte signal suppression caused by the presence of the Al matrix is very similar to that seen in experiment 2.

Fig. 4 shows the results from experiment 4. The pattern of signal loss is very similar to that shown in Fig. 3 except that it only takes about 230 min to achieve approximately 50% signal loss. The suppressive effect of the addition of the Al matrix is reflected in signal loss between point "a" and point "b", where the initial solution containing no Al matrix was replaced with the one containing the Al matrix.

Although the use of a 3.0-mm injector tube does not solve the problem of cone blockage, it does go some way towards alleviating it. The pattern of signal loss due to cone blockage is different for each of the injectors studied. Continuous nebulisation of the Al matrix creates the exaggerated cone blockage. If the 1.5-mm and 3.0-mm injectors are compared in this situation (Figs. 2 and 4), it can be seen that using a 1.5-mm injector causes a rapid signal loss of up to 50% in the first 30 min, followed by the signal reaching a plateau. With the 3.0-mm injector, initial signal loss is far less severe and a signal plateau is achieved very rapidly (10-15 min), which is a more desirable situation for routine analysis.

Fig. 5. GC-191 run as a sample using a 3.0-mm injector

The results shown in Fig. 2 lead us to conclude that allowing partial cone blockage to occur, followed by re-tuning of the ion lenses, would provide a means of carrying out analysis in a given matrix for extended periods of time, with minimal subsequent signal losses. Lichte et al.3 have used a similar approach for the analysis of geological materials, and Douglas and Kerr4 concluded from their study of cone blockage that this pseudo-steady state could be achieved and maintained for use in real sample analysis.

Fig. 6. Effect of cobalt as an internal standard on analysis of GC-191

Internal Standards
An internal standard is often used as a method of compensation for signal drift of the type described here. The feasibility of using an internal standard for signal loss compensation was assessed by using a real rock sample. Table 2 shows the elemental contents of the rock sample GC-191, both as percentage oxide in the rock and (as the element) in solution at 1 g in 100 ml dilution. In practice the SiO2 is lost during the digestion process and would not be present in solution at this concentration.
However, there is still a high level of dissolved solids material from elements such as Al, Fe, Mg and Ca. Fig. 5 shows the results of experiment 5. Of the six trace elements studied, Co, Cr and Ba show the most serious loss of signal over time, with Co and Ba showing a loss of about 50% in 180 min. However, phosphorus exhibits little signal loss over the time period studied.

Fig. 6 shows the effect on these data of using Co as an internal standard. Most elements respond well to internal standardisation except P, which exhibits an increasing signal trend with time. For internal standardisation to be useful in multi-element ICP-MS it appears that several internal standard elements would be desirable, particularly if the analytes differ in mass and/or ionisation energy. It has been suggested2 that selection of multiple internal standard elements to compensate for suppression in ICP-MS is necessary, which is also indicated by this work.

The help of Marina Totland (University of Victoria, Canada) and Dr. Kym Jarvis in the preparation of this work is greatly appreciated. The ICP-MS facility is funded by the NERC. JGW acknowledges financial support from MOD.

References
1. Jarvis, K. E., PhD Thesis, CNAA, 1987.
2. Thompson, J. J., and Houk, R. S., Appl. Spectrosc., 1987, 41, 801.
3. Lichte, F. E., Meir, A. L., and Crock, J. G., Anal. Chem., 1987, 59, 1154.
4. Douglas, D. J., and Kerr, L. A., J. Anal. At. Spectrom., 1988, 3, in the press.
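The correction shown in Fig. 6 is, in effect, ratio scaling against the drift of the chosen internal-standard element. The following is a minimal sketch of that arithmetic in Python; the function name and the example numbers are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch (not from the paper) of internal standardisation:
# each analyte signal is scaled by the drift of the internal-standard
# element relative to its initial response.
from typing import Dict, List

def internal_standard_correct(
    signals: Dict[str, List[float]],  # element -> signal at each time point
    standard: str = "Co",             # internal-standard element, as in Fig. 6
) -> Dict[str, List[float]]:
    """Scale every analyte time series by the internal standard's drift."""
    is_series = signals[standard]
    is_initial = is_series[0]
    corrected = {}
    for element, series in signals.items():
        corrected[element] = [
            raw * (is_initial / is_t)  # compensate for blockage/suppression drift
            for raw, is_t in zip(series, is_series)
        ]
    return corrected

# Illustrative numbers only: roughly 50% loss in Co over the run, as in Fig. 5.
example = {"Co": [100.0, 75.0, 50.0], "Ba": [200.0, 150.0, 100.0], "P": [40.0, 39.0, 38.0]}
print(internal_standard_correct(example))
```

With these toy numbers the P series, which drifts far less than Co, is over-corrected upwards, mirroring the increasing trend noted for P above.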
ISSN:0144-557X
DOI:10.1039/AP9882500381
Publisher: RSC
Year: 1988
Data source: RSC
|
8. |
Computer interfacing for instrumental control and data acquisition. The software interface |
|
Analytical Proceedings,
Volume 25,
Issue 12,
1988,
Page 389-390
M. J. Adams,
Preview
|
PDF (281KB)
|
|
Abstract:
Computer Interfacing for Instrumental Control and Data Acquisition

The following is a summary of one of the papers presented at a Meeting of the Chemometrics Group held on December 16th, 1987, at Imperial College, London SW7.

The Software Interface

M. J. Adams
School of Applied Sciences, The Polytechnic, Wolverhampton, Wulfruna Street, Wolverhampton WV1 1SB

Chemometrics is concerned with the transformation of raw analytical data into useful information for decision making. To this end, chemometrics is intimately linked with the allied fields of instrumentation science and analytical chemistry. According to Finkelstein, scientific instruments can be regarded as information machines, with a requirement to maintain some prescribed functional relationship between their input and output.1 Considered in this manner, the design, description and application of analytical instrumentation forms the basis for a systematic instrument science.

An instrument can be classed as a system with a hierarchical structure. It is composed of interconnected simpler sub-systems organised to perform specific functions, and in a computerised instrument the software can serve to link these functional units. Of the many classes and forms of model that can describe such an analytical system, the information-flow model is of particular relevance to the design of computerised units and on-line data processing. This model describes the logical transmission of data throughout its measurement and processing. It can be represented by the familiar flow chart, as used in program design, and can provide the basis for an integrated, automatic measurement system.

Functional Analysis
A functional analysis of analytical instruments indicates only a small variety of basic schemes.1 The principal functions are the acquisition of data, its processing and its presentation to other information users, human or machine. The detailed tasks to be undertaken within a computerised system in the analytical laboratory should be identified, and the relative priorities of these tasks in using the computer's processing unit must be appreciated before any data processing scheme can be developed. The computerised functions and typical priority classifications are as follows.

Data collection. This can be manual, via a keyboard, or automatic, via analogue or digital I/O ports. This would usually be classed as a high-priority task, as no data should be missed because the system is busy elsewhere.

Data manipulation and analysis. This covers activities that can range from simple scaling and formatting of data to complex numerical operations involving, say, Fourier transformation, classification, etc. The priority assigned to such data processing depends on what needs to be done on-line in a closed-loop arrangement (high priority) and what can be assigned to low-priority, off-line processing.

Output and reporting. The simple video display or printed copy has a low priority, whereas control output for a fully automated closed-loop system has a high priority.

Data storage. This differs from other output functions in using block transfer techniques and, usually, dedicated software within an operating system. During most of the analytical and measurement process, data storage operations are not active, but when required, and during the actual data transfer process, they assume a very high priority.

These computerised tasks are dependent on the quality and quantity of the input raw data.
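As a minimal sketch (the names and levels are illustrative assumptions, not from the paper), these priority classifications might be encoded for a task dispatcher as follows:

```python
# Illustrative encoding (not from the paper) of the computerised
# functions and their typical priorities, as described above.
from enum import IntEnum

class Priority(IntEnum):
    LOW = 0
    HIGH = 1
    VERY_HIGH = 2   # e.g., an active block transfer to storage

TASK_PRIORITIES = {
    "data_collection":    Priority.HIGH,      # no datum may be missed
    "online_processing":  Priority.HIGH,      # closed-loop manipulation/analysis
    "offline_processing": Priority.LOW,       # deferred numerical work
    "display_reporting":  Priority.LOW,       # video display or printed copy
    "control_output":     Priority.HIGH,      # closed-loop control actions
    "data_storage":       Priority.VERY_HIGH, # only while a transfer is active
}

# List the tasks in the order a dispatcher would favour them.
for task, prio in sorted(TASK_PRIORITIES.items(), key=lambda kv: -kv[1]):
    print(f"{task}: {prio.name}")
```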
Two important parameters determine the measurement efficiency: the rate of data acquisition and its dynamic range. The number, n, of measured values recorded per unit time is given by

n = 1/t and f = 1/(2t)

where t is the time required (response) for a steady, true value to be attained and f is the highest frequency correctly recorded by the instrument. The dynamic range and resolution of the measurement process are limited by the measurement error. For recording and storing m equally probable values, the number of bits, S, required is given by

S = log2 m bits

An analytical signal, however, can rarely be accurately represented as a random process, and the recorded values are not equally probable. In a sequential series of measurements of an analytical signal, e.g., a spectrum, each value is likely to be highly correlated with neighbouring values. A priori information concerning the measured signal and any such correlation can be used to reduce the data storage requirements.2,3 Results using infrared spectra indicate that substantial savings of digital memory can be achieved using error-free coding techniques.2

Serial Processing and Concurrency
The simplest, and most common, forms of laboratory computerisation involve batch processing, in which all data are recorded prior to analysis and reporting, or serial processing, which involves recording a value and processing it before the next value is acquired. Such schemes may reflect the conventional, manual procedures, but usually they do not take full advantage of the microprocessor's inherent capabilities for high-speed data processing. The growing amount of data produced by modern measuring instruments places a greater reliance on automatic processing and data reduction. Similarly, dynamic measurements have become increasingly important in on-line, process-control systems. In addition, many modern industries require instrumentation to quantify measures of product quality, such as taste and palatability in food science, digestibility of animal feedstuffs, etc. The need for multivariate measurements to establish scales and to monitor these parameters places great demands on data processing and interpretation, as much of which as possible should be accomplished within the analysis system. To function effectively and efficiently as a user of information, as well as simply a data generator as in a conventional system, the increased processing demands require integrated, real-time software.

Real-time programming recognises the asynchronous nature of external processes and overcomes the disadvantages associated with serial or batch processing by extensive use of data buffering and co-processing. Each stage of the data acquisition, processing and output scheme can be considered to have its own, dedicated processor unit. Concurrent and real-time programming involves mapping this multi-processor arrangement on to a single microprocessor system, using suitable data buffers between each module of the program. The modules, or co-routines, can relinquish control of the processor at any point during their operation and, later, resume their task at that point.
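A minimal sketch of this co-routine scheme, with Python generators standing in for the co-routines and a deque as the inter-module data buffer (all names are illustrative assumptions, not from the paper):

```python
# Sketch of co-routines sharing one processor via a data buffer: the
# acquisition module yields control after each value, and the processing
# module later resumes exactly where it left off.
from collections import deque

buffer: deque = deque()

def acquire(samples):
    """Acquisition co-routine: push raw values, then yield the processor."""
    for value in samples:
        buffer.append(value)
        yield  # relinquish control; resume here when next scheduled

def process(results):
    """Processing co-routine: drain the buffer when given the processor."""
    while True:
        while buffer:
            results.append(buffer.popleft() * 2.0)  # stand-in for real work
        yield

results = []
tasks = [acquire([1.0, 2.0, 3.0]), process(results)]
# Round-robin scheduler mapping the "multi-processor" model onto one CPU.
for _ in range(4):
    for task in tasks:
        next(task, None)
print(results)  # [2.0, 4.0, 6.0]
```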
The programming techniques employed will depend on whether the I/O processes are accessed by polling the data as required or periodically at pre-set times, or are served via hardware-linked interrupt signal lines. Real-time systems are characterised by the asynchronous nature of the activities and the simultaneous demand for processing by a number of different tasks. Decisions, or rules, as to which task is serviced first in a concurrent system are designed into the software by considering the priority associated with each process. Thus, high-priority tasks, such as recording transient data or a system failure, can be implemented to issue immediate hardware interrupts, leaving less time-critical processes to share the computer's processor as required.

Whilst real-time programming can lead to a more efficient use of the computer in a fully automated instrumental system, the software design and its implementation are more complex. The development of intelligent, decision-making analytical instrumentation is a rapidly expanding field of study in which the application of real-time software will play an important role. The FORTH language has been applied extensively to instrument control and real-time processing. Its advantages for these applications include its ability to support direct, machine-coded interrupt routines and its small memory requirements.4 It is ideal for single-board computer units employed as components within the instrumental system.

References
1. Finkelstein, L., J. Sci. Instrum., 1977, 10, 566.
2. Adams, M. J., and Black, I., Anal. Chim. Acta, 1986, 189, 353.
3. Musman, H. G., and Preuss, D., IEEE Trans. Comm., 1977, 25, 1425.
4. Reynolds, A. T., "Advanced Forth," Sigma Press, Wilmslow, 1986.
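A minimal sketch of the servicing rule described above (in Python rather than FORTH, purely for illustration; the task names and priority values are assumptions):

```python
# Illustrative sketch (not from the paper) of priority-first dispatch:
# among tasks demanding service simultaneously, the highest-priority one
# is serviced first, so transient-data capture pre-empts reporting.
import heapq

pending = []  # min-heap of (negated priority, arrival order, task name)

def request(priority: int, order: int, name: str) -> None:
    heapq.heappush(pending, (-priority, order, name))

request(2, 0, "record transient data")   # high priority: hardware interrupt
request(0, 1, "print report")            # low priority
request(1, 2, "scale and format datum")  # medium priority

while pending:
    _, _, name = heapq.heappop(pending)
    print("servicing:", name)
# servicing: record transient data
# servicing: scale and format datum
# servicing: print report
```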
ISSN:0144-557X
DOI:10.1039/AP9882500389
Publisher: RSC
Year: 1988
Data source: RSC
|
9. |
Training for industry |
|
Analytical Proceedings,
Volume 25,
Issue 12,
1988,
Page 391-392
R. M. Wynne,
Preview
|
PDF (310KB)
|
|
Abstract:
Training for Industry

The following are summaries of two of the papers presented at a Meeting of the Analytical Division held on May 18th, 1988, in the Scientific Societies' Lecture Theatre, London W.1.

Statistical Process Control - a Problem Solving Methodology

R. M. Wynne
University of Bradford Management Centre, University of Bradford, Bradford BD7 1DP

At the outset, quality was defined as the ability to meet the customer requirements in the global sense, or fitness for purpose. For too long, quality has been inspected into products, which is a complete nonsense. Quality must be inherent in the design of the goods or service, in the materials and methods used. Plant and equipment must be designed to be capable of producing what is required, and the operatives need to be trained to be able to control the process to give a consistent product. In essence, quality is controlled at the point of manufacture.

The concept of the quality chain was introduced, where everyone in the organisation is a supplier and a customer, and there are suppliers and customers outside the organisation. A chain is only as strong as its weakest link. Let one link in the quality chain be broken and there is a quality problem that must be identified, the root cause established, and this put right once and for all. Gone are the days when one can make do and mend.

Many people have an in-built fear of the word "statistics," which basically is the systematic handling of numerical data. It was indicated that statistics boils down to painting pictures with numbers such that recorded data are presented in a meaningful way. The systematic approach was advocated, and the following techniques, which could be applied to narrow a problem down to manageable proportions, were discussed:

1. Pareto analysis - identifies the important few from the trivial many by ranking the incidence of a problem (and, in many instances, costs can be attributed).
2. Cause and effect analysis (brainstorming, Ishikawa or fishbone diagram) - having identified a particular problem in (1) above, a structured approach in brainstorming according to the five Ps of production management (plant, people, product, programmes and process) can generate a myriad of possible causes. By quantifying these and conducting further Pareto analysis, one homes in on the root cause of the problem. In many instances, lesser causes are cured on the way!
3. The use of histograms and scatter diagrams.

All processes are variable for one of two reasons: firstly, there is the "normal" random variation of the process; secondly, there is variation due to assignable causes. "Fred the fiddler" was introduced, who, by turning wheels to adjust the process on the basis of a single observation, converted a process that was inherently capable into one that was out of statistical control, widening the distribution of the process so that reject products were inevitable. The need to conduct process capability studies was emphasised so that accuracy and precision could be assessed.

Traditional inspection of products asks the question "Have we made it O.K.?" However, if the question "Can we make it O.K.?" (process capability analysis) is asked, followed by "Are we making it O.K.?" (process control monitoring), and the answer to these two questions is "Yes," then the answer to "Have we made it O.K.?" can only be "Yes"!
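The "Can we make it O.K.?" step compares the natural spread of the process with the specification width. A minimal sketch of one common way to express this, the capability index Cp (a standard SPC quantity, not defined in the talk itself; the data are illustrative):

```python
# Illustrative process-capability study (numbers are not from the talk):
# compare the specification width with the natural process width.
from statistics import mean, stdev

def capability_index(samples, lower_spec, upper_spec):
    """Cp = specification width / natural process width (6 sigma)."""
    return (upper_spec - lower_spec) / (6 * stdev(samples))

measurements = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1]
cp = capability_index(measurements, lower_spec=9.0, upper_spec=11.0)
print(f"mean = {mean(measurements):.2f}, Cp = {cp:.2f}")
# Cp >= 1 suggests the process can meet specification; "Are we making
# it O.K.?" is then answered by ongoing control-chart monitoring.
```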
Product research and development and process evolution answer the question "Could we make it better?" (Frank Price in "Right First Time").

The use of mean and range charts was demonstrated with real examples showing processes to be out of control. True to experience, ranges were more often in control, i.e., there was precision but not accuracy, and means were widely fluctuating (lack of accuracy and, in some instances, lack of precision also). Processes go out of statistical control for assignable reasons. The mean and range charts show where to look for these assignable causes. The power of the cusum plot in detecting the point at which small changes away from the mean take place was also demonstrated, using real data.

In conclusion, it was stated that an attempt had been made to demonstrate the power of the tool kit used in getting to grips with quality problems, such that after the application of the correct tool, processes could achieve highly consistent products to meet customer requirements. It was hoped that appetites had been whetted to learn more and to use the knowledge gained to improve the competitiveness of industry by managing quality through the use of statistical process control, with a quality management system such as BS 5750: 1987 as the driving force.

Training and Career Development for Scientists

Terry Gough
Department of Trade and Industry, Ashdown House, 123 Victoria Street, London SW1E 6RB

Distribution of Scientists
There are approximately 17000 graduates employed by the government on research and development, which represents about 30% of the total workforce in government R & D. The Department of Trade and Industry (DTI) employs a total of 12500 people, of whom almost 2000 are scientists and technologists. Just under half of these are directly involved in R & D. The distribution of DTI scientists is given in Table 1. Most scientists are, of course, employed in the laboratories. There are also substantial numbers in the Department's Radiocommunications Division (physicists and radio- and telecommunications engineers). A significant amount of the Department's work is technologically orientated, and scientists are employed in the Headquarters Divisions and in the regional network.

Table 1. Scientists and technologists in DTI

Laboratories                        1430
Radiocommunications Division         240
Headquarters - Market Divisions      140
Headquarters - Service Divisions      80
Regional Offices                      40
Total                               1930

The Laboratories
In the laboratories (Table 2), priority is given to work supporting the government's statutory and regulatory functions, research and other work specifically for government. A limited amount of industrially relevant R & D and repayment work for industrial customers is also undertaken.

Table 2. The DTI laboratories

Laboratory                               Major area of work                                  No. of staff
Laboratory of the Government Chemist     Analytical chemistry, biotechnology                  270
National Engineering Laboratory          Mechanical engineering                               350
National Physical Laboratory             Primary standards, information technology            550
Warren Spring Laboratory                 Environmental engineering                            200
National Weights & Measures Laboratory   Standards and equipment for trading                   30
Radio Technology Laboratory              Radiocommunication, electromagnetic compatibility     30
Total                                                                                        1430

Other Areas
There are opportunities for scientists to be involved in legislation and policy formulation, in monitoring of projects supported by the Department, in strategic studies and in the provision of technical services and advice.

Skills Required
Scientists are required to develop administrative skills. They need to be able to relate technological issues to the DTI's policy and to formulate and present ideas and initiatives in a manner that secures the understanding and support of non-scientists. They need to be good managers, to make efficient use of resources and to cope with pressures and deadlines.

For those scientists engaged in R & D, posts are fluidly graded. This allows a scientist to remain in his specialised field of expertise without inhibiting his promotion to the next grade and is designed to encourage flair and initiative. The scientist must be capable of making an increasing research input as his expertise grows, and the work must be such that it is needed by the Department. Fluid grading applies up to Grade 7 (former Principal Scientific Officer) level. Thereafter, exceptional scientists (of DSc quality) are considered for individual merit promotion. The Department must still need the work to be done.

The Action Plan
The opportunities available for career development are formally embodied in the Action Plan for the Career Development of Scientists and Technologists, a copy of which is given to all staff. Key points of the Plan are mobility, awareness and training.

In the early part of a career, a scientist is likely to move several times within his/her laboratory and may later transfer to another part of the DTI, depending on his/her attributes, interests and the needs of the Department. Secondments, rather than permanent transfers, are also available, and these can vary from a few weeks to 2 years. The Short Term Experience Posting (STEP) scheme allows a scientist to work for a limited period (1-3 months) in another part of the DTI and to return to his/her permanent post at the end of the period. The secondment may be to an area where the candidate's technical expertise is required or to a completely foreign area of work. Secondments are usually to one of the Department's Headquarters Divisions. The objectives are to broaden the candidate's awareness of the work of the Department and to promote better interaction between the laboratories and other areas of the Department. Candidates contribute to the work of their host Division (usually by means of a specific project), obtain an overview of that Division and acquaint their host with the work of their laboratory. Reports on each posting are written by both host and guest and are used for monitoring the success of the scheme and for personnel management purposes. Table 3 gives some examples of work undertaken by STEP candidates in recent months.

Table 3. Examples of short-term posting projects

Visits to companies
Market surveys
Consumer safety reports
Use of information technology in health care
Status reports on non-destructive testing
Technological forecasting
Export of technology (e.g., restrictions to communist countries)
Preparation of manuals, brochures and literature

Awareness
Scientific staff are expected to be aware of developments in other parts of the DTI. This is encouraged by the provision of literature on current issues and seminars on the work of specific areas of the Department, on policy issues and on new technologies.

Training
Training takes a number of forms, covering laboratory-based specialised courses (e.g., on new instrumentation or analytical techniques) and courses provided centrally by the Department and the Civil Service College (e.g., management training and effective communication). Staff are also encouraged to undertake academic courses to obtain higher qualifications. This is usually on a part-time day-release basis, but a small number of staff are sent, at the Department's expense, on sandwich courses leading to degrees.

There are formal management development schemes that apply to Scientific Officers upward and are designed to give experience of the interaction between technology and policy work. Candidates spend two 9-month periods in Headquarters to develop administrative and managerial skills; appropriate courses are included. On completion of training the candidate returns to his/her laboratory. He/she will be given the opportunity to return to Headquarters for a permanent posting at a later date. The Senior Professional Administrative Training Scheme is for staff with the potential to take senior administrative posts later in their careers. It involves policy studies, attendance at the Civil Service College and postings to administrative work in Headquarters.

The Action Plan is designed to cater for staff with a wide range of attributes and aspirations. Staff are invited to take the opportunities provided, both for their own career development and for the more efficient running of the Department.
ISSN:0144-557X
DOI:10.1039/AP9882500391
Publisher: RSC
Year: 1988
Data source: RSC
|
10. |
New fields for drug and antigen targeting. Targeting of protein drugs |
|
Analytical Proceedings,
Volume 25,
Issue 12,
1988,
Page 393-395
E. Tomlinson,
Preview
|
PDF (449KB)
|
|
Abstract:
New Fields for Drug and Antigen Targeting

The following is a summary of one of the papers presented at a Joint Meeting of the South East Region and the Biological Methods Group held on June 9th, 1988, in the University of Sussex, Falmer, Brighton.

Targeting of Protein Drugs

E. Tomlinson
Ciba-Geigy Pharmaceuticals, Wimblehurst Road, Horsham, West Sussex RH12 4AB

Summary
Site-specific drug delivery serves to attain both the optimum pharmacological activity that a drug can have and a reduction in any toxic events. The advent of the control of gene expression has given rise to new classes of drug molecules and to an understanding of normal and pathological processes. Endocrine-like and autocrine - paracrine-like peptidergic mediators have very different pharmacodynamic and pharmacokinetic properties, each type requiring completely different approaches in their clinical application, and there is a firm rationale for the site-specific delivery of the latter class using protecting carrier systems. This paper examines the site-specific drug delivery of peptidergic mediators in terms of the chronicity and position of disease, and the properties of drug - carrier systems.

Introduction
The clinical use of drugs relies on their pharmacodisposition and pharmacokinetics combining to give an appropriate pharmacological response, coupled with the ability of the body to detoxify itself of any drug that has distributed generally. Although it has been possible to produce drugs having these properties, not only are the disease targets becoming more difficult to attain, but the probabilities of discovering acceptable molecules using the various high-throughput screens often employed are also becoming lower. The advent of the control of gene expression has given rise both to a plethora of new classes of molecules and to an understanding of normal and pathological processes; this is leading to new approaches in both the design and the clinical use of drugs.

For both conventional drugs and the new classes of protein drugs, there is a firm rationale for their site-specific delivery.1 It has been proposed that this can be achieved by using carriers that travel along unique biological pathways and hence serve to guide drugs to pharmacological sites of action in a protected form. Although site-specific drug delivery is still rather empirical in its practice, the literature illustrates that both mistakes and successes are positive in the use of site-specific carrier systems.2 These include their use in macrophage activation, some other types of cancer chemotherapy, retroviral diseases (including AIDS - ARC), gene therapy, enzyme storage diseases, inflammation, graft versus host rejection disease and fungal infestations.

Disease, Access, Retention and Timing
It is evident that the basis for developing any one modality to fight a disease must be a knowledge of the disease itself [i.e., its (patho)physiology, biochemistry and temporal responsiveness].
Hence, apart from the disease and the drug, the further essential components of site-specific drug delivery are access, retention and timing.2,3

Control of Gene Expression
Clinical medicine has the opportunity to move rapidly away from being a descriptive science to one in which decisions are based on an understanding of the mechanisms controlling normal and diseased processes. This is due to recent advances in (inter alia) molecular and cell biology, physiology, biophysics, materials science and analytical methods. In particular, the control of gene expression has many and varied consequences for the pharmaceutical community in their search for safe and effective medicines. These include the evaluation of gene families and the markers of genetic originality, understanding the pathogenesis of disease and the molecular specificity of biological action, and also the ability to produce a plethora of pure amounts of new drug classes (i.e., both protein drugs and/or their low relative molecular mass analogues).

Recombinant DNA technology has enabled some of the fundamental mechanisms that regulate the expression of genes at the level of transcription and translation to be elucidated. In particular, a dramatic increase in the availability of cloned DNA sequences and restriction fragment length polymorphisms (RFLPs) has enabled many diseases to be studied at the genetic level. Such tools are being used to probe normal physiological functions, whether these occur intra- or extracellularly. This effort is being aided by sophisticated and powerful new imaging techniques such as fluorescence-activated cell sorting, confocal fluorescence microscopy and tunnelling microscopy.

Cell processes being elucidated include simple secretory and receptor-mediated events, both of which require signal recognition and feedback control. These (intracellular) processes are often mediated through leader sequences, which can act in a number of ways, i.e., either through their basicity - acidity properties or via some function of their primary and secondary structures. It is argued that a knowledge of the minimum amount of structural information required to cause a particular routeing (or tropism) could be useful in the design of site-specific drugs.2,3

New Classes of Drugs
The ability to produce new classes of drugs also lies at the heart of modern clinical medicine. Proteins can be produced either as drugs per se or may act as templates in the production of agonist and/or antagonist drugs. It is clear that for protein drugs little rational thought has gone into their use in terms of their optimum arrival at various sites of action. The key feature is whether they are able to act systemically or whether they are mimics of endogenous molecules that are produced to act locally. Current dogma is that protein drugs have an element of self-targeting in their structure and hence no special provision needs to be made to ensure that they arrive at their site(s) of action. Is this true? Clearly, endocrine mediators are produced endogenously to act over long distances from their site of manufacture; also, they are stable in blood and their size is no hindrance to their need to extravasate. Conversely, autocrine- or paracrine-like mediators are produced endogenously to act locally.
Also, these types of mediator are unstable in blood (generally very low concentrations are found in the blood pool), they act between neighbouring cells and are rapidly catabolised; further, they have an inefficient and widespread disposition, an extremely variable and inefficient ability to extravasate and need, therefore, a lymph to plasma ratio greater than 1. In addition, they have a specificity that is due to their local release and action, and it is known that some of these types of molecule can produce both effects on the same cells when these are at different stages of their life cycle,4 and often have an extremely complex pharmacology.5 Clearly, the production and use of this latter class of molecules is not recommended unless some means can be found (e.g., by using a carrier system) to adjust their pharmacodisposition (and stability). Hence, the key issue with such mediators is the differentiation between local action and the risk of action on non-target cell populations (although, for example, with interferons the latter leads to unpleasantness, with interleukin-2 it can lead to life-threatening sequelae).

Hence, in the site-specific delivery of these mediators, one needs to consider the issues of chronicity in the activation of cells (including their temporal localisation and responsiveness) and, as such agents may be acting as part of a polymediator cascade of events, also the staging sequence through which they act.

All the features described above for endocrine- and autocrine - paracrine-like mediators serve to illustrate that, as we begin to know more and more about disease, the way in which we apply drugs clinically is often shown to be very naive. It follows that we often need to rethink our approach to drug application, both at the clinical stage and, perhaps even more importantly, at the drug discovery stage when, after cleverly designing these new classes of drugs, we still administer them to inappropriate test models at inappropriate rates, amounts, stagings and times.

Access
Numerous instances of disease can be cited in which simply for a drug to attain a site of action located in a poorly accessible region would be beneficial, as for example with many intracellular infections, diseases of the central nervous system, diseases of the immune system, cancerous states, some cardiovascular diseases and haemopoietic and arthritic diseases. Consider, for example, AIDS, where more and more attention is being placed on the viral regulatory genes of the human immunodeficiency virus as the important targets for chemical intervention. The question of pinpoint access is clearly vital here. In addition, many of the intricate dosing regimes and the high doses of drug currently applied are frequently needed because of a poor perfusion of the site of action coupled with an inappropriate pharmacodisposition of the drug. As the latter may lead to untoward metabolism and interaction with the host, often all of these effects combine to give rise to deleterious effects.

Site-specific carriers operate via the use of their innate biological pathways and their ability to protect the drug.
In addition, they should also be able to make the drug available at the site of action in the right amount, at the right rate and at the right time.2,3,6 This may need to be effected by retention of the carrier at the site of action, and via some type of trigger that enables the drug to be released by either a normal or a pathological event. Hence, site-specific drug delivery can be defined as achieving the maximum potential intrinsic activity of drugs by optimising their exclusive availability to their pharmacological receptors in a manner that affords protection to both the body and the drug alike.2

Carriers
Although Gardner7 has taken a broad view of drug targeting as encompassing simple prodrugs, which are generally available systemically and are specifically activated at sites of action, site-specific carriers are defined here as soluble and particulate macromolecular carriers which, although they behave differently in the body, may need to have, variously, the properties of passage through epi- and endothelial barriers, carriage or transport through the body (including intracellular and/or transcellular transport) and then interaction with the target cells. Their use also implies protection of the drug and the body from one another until the site(s) of action is reached, avoidance of any pharmacological interaction with non-target cells and the release of therapeutically relevant amounts of drug at the required modality and frequency, followed by excretion of the carrier and the drug. (Drug availability can be due to simple passive events, such as diffusion from a carrier, or active processes, including enzyme degradation.)

The access of carriers to various sites is due to diffusion, convection and participation in various passive and receptor-mediated cell trafficking events.2,3 However, because of their size, one should realise that they will often be compartmentalised. For example, it is unlikely that particulate carriers greater than 20-50 nm in diameter will be able to extravasate through anything other than discontinuous or damaged endothelia. Table 1 gives the numerous soluble and particulate carriers that have been suggested for effecting site-specific drug delivery.

Table 1. Types of carrier/ligand

Soluble carriers:
  Synthetic polymers
  Antibodies (fragments thereof)
  Hybrid fusion proteins (genetically defined)
  Proteinaceous carriers

Particulate carriers:
  Lipid carriers
  Fusogenic carriers
  Cell carriers
  Viral - retroviral carriers
  Synthetic particles

Many attempts have been made to control the biological dispersion of a carrier by linking it to some form of ligand that will be recognised by a particular normal or abnormal feature of the body. This ligand may be simple, e.g., a sugar moiety, or more complicated, such as a fragment of a natural ligand (e.g., of interleukin-2 or of an antibody). The latter are an interesting breed, and can include anti-idiotypic antibodies or even the synthetic antisense antibodies that have been produced recently.

Conclusions
Urquhart and Nicholls8 have discussed the three main pharmacodynamic factors that have been shown to have a marked influence on drug action. These are: (i) the duration of the drug-free interval; (ii) the attainment of critical thresholds in plasma concentrations; and (iii) the rate of increase in the drug plasma concentration. These findings have been arrived at by charting the behaviour of conventional drugs using various routes and modes of administration.
Clearly, for site-specific systems, these observations will be altered owing to the altered kinetics and dynamics of drug distribution.

As more becomes known about (patho)physiology and the local biochemistry of disease, as molecular biology tools extend our present ability to manipulate and control expression, and as protein engineering tools permit the de novo synthesis of therapeutic systems that have defined functions of transport, protection, spatial orientation and temporal release, so a revolution in the management of disease will occur. New classes of targeted drugs will appear (e.g., hybrid fusion proteins). Gene therapy may become technically possible.9 As a clearer understanding of how targeting can be effected emerges, a singular challenge for the pharmaceutical industry will be how to assess the potency and toxicity of its existing store of candidate drugs in order to arrive at conclusions as to which molecules are suitable for drug targeting. It is important, therefore, that objective analytical methods are arrived at; recent developments in this direction are encouraging.10 Also, there will need to be a realisation that site-specific drugs are new chemical entities and hence will require full-scale testing for their safety and efficacy. The use of human-specific processes to effect site access, etc., will pose unique problems in this testing.2,11

References
1. Poste, G., and Kirsh, R., Biotechnology, 1983, 1, 869.
2. Tomlinson, E., Adv. Drug Delivery Rev., 1987, 1, 87.
3. Tomlinson, E., in Tomlinson, E., and Davis, S. S., Editors, "Site-specific Drug Delivery," Wiley, Chichester, 1986, pp. 1-27.
4. Melchers, F., and Anderson, J., Ann. Rev. Immunol., 1986, 4, 13.
5. Talmadge, J. E., Trends Pharmacol. Sci., 1986, 277.
6. McIntosh, R. P., Trends Pharmacol. Sci., 1984, 5, 429.
7. Gardner, C. R., Biomaterials, 1985, 6, 153.
8. Urquhart, J., and Nicholls, K., World Biotech., Rep. 2, pp. 321-331.
9. Anderson, W. F., Blaese, R. M., Nienhuis, A. W., and O'Reilly, R. J., "Human Gene Therapy: Preclinical Data Document," unpublished document submitted to the Human Gene Therapy Subcommittee, RAC (NIH), April 24th, 1987; available from the Office of Recombinant DNA Activities (NIH).
10. Hunt, C. A., MacGregor, R. D., and Siegel, R. A., Pharm. Res., 1986, 3, 333.
11. Schiff, J. M., Fisher, M. M., Jones, A. L., and Underdown, B. J., J. Cell Biol., 1986, 102, 920.
ISSN:0144-557X
DOI:10.1039/AP9882500393
Publisher: RSC
Year: 1988
Data source: RSC
|
|