1. Contents pages
Analytical Proceedings,
Volume 21,
Issue 12,
1984,
Pages 40-41
Abstract:
RSC ANALYTICAL DIVISION NORTH WEST REGION. A Meeting on DATA MANAGEMENT IN ANALYTICAL LABORATORIES will be held at The Lord Daresbury Hotel, Warrington, on March 13th, 1985. The speakers at this meeting will be D. Betteridge (BP Research Centre, Sunbury), E. C. P. Gillyon (Perkin-Elmer Ltd., Beaconsfield), B. Bollen (Hewlett-Packard Ltd., Winnersh), G. Sherlock and C. Black (Trivector Systems International Ltd., Sandy), P. J. Whittle (North West Water Authority, Warrington) and J. R. Salmon (E. R. Squibb and Sons, Moreton). It is hoped that a demonstration and exhibition of laboratory data management systems will be possible, and time has been allocated to this in the schedule. For further information on the meeting contact Dr. A. Mathias, Research Department, ICI Organics Division PLC, Blackley, Manchester M9 3DA.

THINKING OF BUYING A FLAME ATOMIC ABSORPTION SPECTROPHOTOMETER? Before committing yourself, read the Analytical Methods Committee's report on the subject in Analytical Proceedings, February, 1984, p. 45, and evaluate the available instruments by using the AMC comparison procedure. If you do not have the February Analytical Proceedings, or do not wish to write in the one that you do possess, reprints of the report may be purchased from: Dr. J. F. Tyson, Department of Chemistry, Loughborough University of Technology, Loughborough, Leicestershire, LE11 3TU, price £2.00 ($5.00).
ISSN:0144-557X
DOI:10.1039/AP98421FX040
Publisher: RSC
Year: 1984
Data source: RSC
2. Back cover
Analytical Proceedings,
Volume 21,
Issue 12,
1984,
Page 42
Abstract:
Specialist Periodical Reports

Nuclear Magnetic Resonance, Vol. 13. This volume reviews the literature published on the subject between June 1982 and May 1983. Brief Contents: Theoretical and Physical Aspects of Nuclear Shielding; Applications of Nuclear Shielding; Theoretical Aspects of Spin-Spin Couplings; Applications of Spin-Spin Couplings; Nuclear Spin Relaxation in Fluids; Solid State NMR; Multiple Resonance; Natural Macromolecules; Synthetic Macromolecules; Conformational Analysis; Oriented Molecules; Nuclear Magnetic Resonance in Heterogeneous Systems. "This volume and its predecessors are useful because references to papers on similar techniques or on similar systems are organised in a logical format. The interested reader can easily find references to papers of immediate interest as well as to papers on related topics. The editor and his reporters are to be commended for the manner in which they have organised their material..." G. R. Miller, J. Am. Chem. Soc., reviewing Vol. 11. Hardcover, 415pp, 0 85186 362 0, £68.00 ($122.00); RSC Members £38.00.

Organometallic Chemistry, Vol. 12. Senior Reporters: E. W. Abel, F. G. A. Stone. This twelfth volume in the series reviews the literature published on the subject during 1982. Brief Contents: Group I: The Alkali and Coinage Metals; Group II: The Alkaline Earths and Zinc and its Congeners; Group III: Boron; The Carbaboranes, including their Metal Complexes; Aluminium, Gallium, Indium and Thallium; Group IV: The Silicon Group; Group V: Arsenic, Antimony and Bismuth; Metal Carbonyls; Organometallic Compounds Containing Metal-Metal Bonds; Substitution Reactions of Metal and Organometal Carbonyls with Group V and VI Ligands; Sigma-Bonded Organometallic Compounds of Transition Elements of Groups IIIA-VIIA; Complexes Containing Metal-Carbon σ-Bonds of the Groups Iron, Cobalt and Nickel; Metal-Hydrocarbon π-Complexes; π-Cyclopentadienyl, π-Arene and Related Complexes; Homogeneous Catalysis by Transition-metal Complexes; Diffraction Studies of Organometallic Compounds. "This is basically a no-nonsense book, devoted to complete coverage with minimum supplementary comment. It achieves these goals admirably." J. H. Stocker, J. Am. Chem. Soc., reviewing Vol. 10. Hardcover, 516pp, 0 85186 601 8, £82.00 ($147.00); RSC Members £41.00.

Spectroscopic Properties of Inorganic and Organometallic Compounds, Vol. 16. Senior Reporters: G. Davidson, E. A. V. Ebsworth. This volume reviews the recent literature published up to late 1982. Brief Contents: Nuclear Magnetic Resonance Spectroscopy; Nuclear Quadrupole Resonance Spectroscopy; Rotational Spectroscopy; Characteristic Vibrations of Compounds of Main-group Elements; Vibrational Spectra of Transition-element Compounds; Vibrational Spectra of Some Coordinated Ligands; Mossbauer Spectroscopy; Gas-phase Molecular Structures Determined by Electron Diffraction. "Very few practising chemists can be unaware of this series and the material it covers... as a reference source, this work must find a place in the library of every active inorganic chemistry department." K. R. Sneddon, J. Organometallic Chem., reviewing Vol. 14. Hardcover, 379pp, 0 85186 143 1, £78.00 ($140.00); RSC Members £39.00.

From The Royal Society of Chemistry. RSC Members should send their orders to The Royal Society of Chemistry, The Membership Officer, Russell Square, London WC1B 5DT. Non-RSC Members: The Royal Society of Chemistry, Distribution Centre, Blackhorse Road, Letchworth, Herts., SG6 1HN, England.
ISSN:0144-557X
DOI:10.1039/AP98421BX042
Publisher: RSC
Year: 1984
Data source: RSC
3. Analytical chemistry trust fund
Analytical Proceedings,
Volume 21,
Issue 12,
1984,
Page 467
Abstract:
ANPRDI 21(12) 467-520 (1984) December 1984. Analytical Proceedings: Proceedings of the Analytical Division of The Royal Society of Chemistry.

AD President: P. G. W. Cobb. Hon. Secretary: R. Sawyer. Hon. Treasurer: D. C. M. Squirrell. Hon. Assistant Secretary: D. I. Coomber, O.B.E. Hon. Publicity Secretary: Dr. J. F. Tyson, Department of Chemistry, Loughborough University of Technology, Loughborough, Leicestershire, LE11 3TU. Secretary: Miss P. E. Hutchinson. Editor, Analyst and Analytical Proceedings: P. C. Weston. Senior Assistant Editors: Mrs. J. Brew, R. A. Young. Assistant Editor: Ms. D. Chevin.

Publication of Analytical Proceedings is the responsibility of the Analytical Editorial Board: J. M. Ottaway (Chairman), L. S. Bark, L. C. Ebdon, A. G. Fogg, *P. M. Maitlis, A. C. Moffat, B. L. Sharp, J. D. R. Thomas, A. M. Ure, *P. C. Weston. *Ex officio members.

All editorial matter should be addressed to: The Editor, Analytical Proceedings, The Royal Society of Chemistry, Burlington House, Piccadilly, London, W1V 0BN. Telephone 01-734 9864. Telex 268001. Advertisements: Advertising Department, The Royal Society of Chemistry, Burlington House, Piccadilly, London, W1V 0BN. Telephone 01-734 9864.

Analytical Proceedings (ISSN 0144-557X) is published monthly by The Royal Society of Chemistry, Burlington House, London, W1V 0BN, England. All orders, accompanied by payment, should be sent to The Royal Society of Chemistry, The Distribution Centre, Blackhorse Road, Letchworth, Herts., SG6 1HN, England. 1984 Annual Subscription price if purchased on its own: UK £53.00, Rest of World £56.00, US $106.00, including air speeded delivery. Air freight and mailing in the USA by Publications Expediting Inc., 200 Meacham Avenue, Elmont, N.Y. 11003. USA Postmaster: Send address changes to: Analytical Proceedings, Publications Expediting Inc., 200 Meacham Avenue, Elmont, N.Y. 11003. Second class postage paid at Jamaica, N.Y. 11431. All other despatches outside the UK by Bulk Airmail within Europe, Accelerated Surface Post outside Europe. PRINTED IN THE UK. © The Royal Society of Chemistry, 1984. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form, or by any means, electronic, mechanical, photographic, recording, or otherwise, without the prior permission of the publishers.

Analytical Chemistry Trust Fund: SAC Studentships. The Trustees invite proposals from supervisors for research projects likely to make important contributions to the advancement of analytical chemistry in the UK and which are suitable for well qualified postgraduate students. Projects will be assessed particularly on the originality of the research proposed. Applications may be submitted by research supervisors, who must be members of the Analytical Division of the Royal Society of Chemistry of at least 2 years' standing. The allocation of the limited amount of money available for proposals for projects to start in the Autumn term of 1985 will be considered early in that year, when a tentative award may be made, subject to the Trustees being satisfied by the Summer of 1985 that a student acceptable to them is available. The value of a Studentship is between £1905 and £3170 per annum minimum, according to circumstances, plus fees up to UK rates. Application Regulations for the Studentships can be obtained from the Secretary, Analytical Division, Royal Society of Chemistry, Burlington House, London, W1V 0BN. The closing date for applications is January 16th, 1985.
ISSN:0144-557X
DOI:10.1039/AP984210467a
Publisher: RSC
Year: 1984
Data source: RSC
4. Safety in Analytical Laboratories. The Gas Safety (Installation and Use) Regulations 1984
Analytical Proceedings,
Volume 21,
Issue 12,
1984,
Page 468
Abstract:
468 SAFETY IN ANALYTICAL LABORATORIES Anal. Proc., Vol. 21

Safety in Analytical Laboratories

This article continues the series of reports on aspects of safety of particular interest to analytical chemists. It is hoped that these articles will provide a forum for further discussion, and correspondence on the individual articles and on all safety matters is invited. This series is written by outside contributors, and views expressed in the articles are not necessarily those of the Royal Society of Chemistry.

The Gas Safety (Installation and Use) Regulations 1984

The new Regulations are concerned with work on gas fittings in domestic and commercial premises where gas is supplied to those premises through pipes. They apply to all users of gas, whether it is supplied by the British Gas Corporation or another supplier, and they cover all premises except for mines and factories, which are dealt with elsewhere. The new Regulations place obligations on four categories of people: those installing gas fittings, those carrying out work on gas fittings (including, for example, those servicing gas appliances), gas suppliers (including in some cases landlords supplying gas to their tenants) and those responsible for premises, for example the manager.

The installers are required to ensure that the fittings they install are sound and protected against undue risk of damage. Emergency controls must work and be marked in a specified manner. Meters must not be placed in certain locations unless they are protected against fire, and primary meters must be protected by gas governors. Pipes must be protected against failure caused by movement, and pipes in "common parts" of buildings must be clearly marked. Appliances must be in a safe condition before they can be installed; they must be readily accessible for maintenance and they must be tested. Flues and ventilation must be suitable for the appliance.

Those carrying out work on gas fittings must be competent. They must see that fittings are not left unattended and that no source of ignition is used when gasways are exposed. Pipes must be tested after being worked on.

The gas suppliers must ensure that emergency controls and emergency notices on meters are provided and that disused, but still live, services are marked and finally disconnected at the main. They must also ensure that, if work is carried out on a pipe or an appliance is installed at a time when gas is not being supplied to the premises, the pipe is purged or the appliance is tested before supplies commence.

Those responsible for the premises must ensure that any electrical cross-bonding is carried out if an installation pipe has been connected to a meter, and that marked pipes remain so marked. They must take all reasonable steps to shut off the supply of gas if an escape of gas in the premises is brought to their notice, and they must inform the gas supplier if the gas continues to escape.

To summarize, the new Regulations require gas fittings to be installed and used in such a way that the public are protected, as far as is practicable, from personal injury, fire, explosion or other damage arising from the use of gas supplied through pipes. They revoke Parts III-VI of the Gas Safety Regulations 1972, the remainder of which, subject to some amendments, remain in force. The new Regulations also amend the Gas Safety (Rights of Entry) Regulations 1983 by increasing the maximum penalty for breach of both Regulations to £2000. Gas safety responsibilities were transferred from the Secretary of State for Energy to the Secretary of State for Employment on February 1st, 1984. The Regulations will be enforced by the Health and Safety Commission and Executive. Parts which do not require advance action by suppliers and installers will come into force on November 24th, 1984. Those which require preparation, such as revised provisions to ensure that there are clear and permanent markings on how to operate emergency controls, will come into force on February 24th, 1985. Copies of the Regulations (SI 1984/1358) are available from HMSO, price £2.70 plus postage.

Hilger Spectroscopy Prize

The Hilger Spectroscopy Prize is an annual award for young spectroscopists who are under 35 years of age at the end of the year for which the prize is awarded. The prize may be awarded for any contribution to analytical atomic spectroscopy. The 1984 prize has been awarded to Dr R. D. Snook, of Imperial College of Science and Technology, London.
ISSN:0144-557X
DOI:10.1039/AP9842100468
Publisher: RSC
Year: 1984
Data source: RSC
5. Portable Analytical Chemistry
Analytical Proceedings,
Volume 21,
Issue 12,
1984,
Page 469-477
M. J. Gibson,
Abstract:
December, 1984 PORTABLE ANALYTICAL CHEMISTRY 469

Portable Analytical Chemistry

The following are summaries of three of the papers presented at a Joint Meeting of the Midlands Region and the Special Techniques Group held on March 28th, 1984, at The University, Birmingham.

Portable Gas Monitors for Use in Coal Mines

M. J. Gibson, Instrument Research Branch, NCB, Mining Research and Development Establishment, Ashby Road, Stanhope Bretby, Burton-on-Trent, Staffordshire, DE15 0QD

Many different gases have been detected in coal mine air. Some of these are harmless, but many are toxic and a few are flammable. There is therefore a need, and in some instances a legal requirement, to monitor certain gases. The more important ones are as follows.

CH4: The main constituent of firedamp, which is released from coal seams during mining activities. The LEL is 5% in air. Various references exist in regulations to concentrations between 0.2 and 2.0%, the main ones being electricity cut-off at 1.25% and men withdrawn at 2.0%.

O2: An oxygen deficient atmosphere is known as blackdamp. Ventilation regulations require an oxygen concentration of not less than 19%.

CO: This gas is produced by diesels, shot-firing and the oxidation of coal. This last process can lead to spontaneous combustion, for which a rise in the CO level is an indicator. Its TLV is 50 p.p.m.

CO2: This is produced by diesels and shot-firing; it is also a cause of blackdamp and a minor constituent of firedamp. Its TLV is 0.5%, but ventilation regulations allow a maximum concentration of 1.25%.

H2: Produced in battery charging stations, this gas is also produced in small amounts by the spontaneous combustion of coal and is found in firedamp. The LEL is 4% in air.

NO: This is produced by diesels and shot-firing. The TLV is 25 p.p.m.

NO2: This is produced by diesels and shot-firing. The TLV is 5 p.p.m.

SO2: This is produced by diesels and shot-firing. The TLV is 2 p.p.m.

H2S: This gas is produced by shot-firing and chemical activity in some strata. The TLV is 10 p.p.m.

Various other gases, including organic vapours, ammonia, hydrogen chloride, etc., can be present in small amounts or as a result of fires, particularly where conveyor belting is involved.

General Instrument Requirements

Obviously a portable instrument should not be too heavy or bulky, but in mining it is advantageous for a gas detector/monitor to be easily clipped to a belt or pocket. Electrical power should preferably be supplied by re-chargeable batteries capable of powering the instrument for at least an 8-hour shift. However, a major requirement for any such instrument for use in coal mines is that it must be certified intrinsically safe by HM Electrical Inspectors of Mines and Quarries. This requirement means that any electrical fault cannot result in the ignition of any explosive methane-air mixture. Flammable gas detectors must also have M and Q methanometer approval. In addition, the instrument must be approved under the NCB Acceptance Scheme, which covers operational safety, the materials used and indeed the desirability of employing the instrument in the first place! Physically, mine instruments need to be robust and protected against the ingress of dust and water (splashing rather than immersion).

Performance Requirements

There is little to be gained here by discussing the detailed specification for each type of instrument, but some general points can be made. In most instances, an accuracy of ±10% of the reading is acceptable, one exception being oxygen, where ±0.5% (or ±0.5 kPa) of oxygen is required. However, this degree of accuracy should be maintained over the temperature range 0-40 °C, in humidities up to 100% RH and over a service interval of at least 2 weeks, preferably longer. In general, the range(s) and any alarm level(s) reflect the hazardous nature of the gas in question.
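As a quick illustration (not part of the original paper), the statutory and threshold figures quoted for the individual gases can be collected into one table. The numbers below are exactly those given in the text; the `methane_action` helper is our own sketch of how an instrument's alarm logic might apply the methane figures.

```python
# Threshold figures as quoted in the text (LEL/TLV/regulatory levels).
# The alarm helper below is an illustrative sketch, not NCB firmware.

LIMITS = {
    "CH4": {"LEL_pct": 5.0, "electricity_cut_off_pct": 1.25, "withdraw_pct": 2.0},
    "O2":  {"minimum_pct": 19.0},
    "CO":  {"TLV_ppm": 50},
    "CO2": {"TLV_pct": 0.5, "ventilation_max_pct": 1.25},
    "H2":  {"LEL_pct": 4.0},
    "NO":  {"TLV_ppm": 25},
    "NO2": {"TLV_ppm": 5},
    "SO2": {"TLV_ppm": 2},
    "H2S": {"TLV_ppm": 10},
}

def methane_action(pct):
    """Regulatory action for a methane reading, per the figures above."""
    if pct >= LIMITS["CH4"]["withdraw_pct"]:
        return "withdraw men"
    if pct >= LIMITS["CH4"]["electricity_cut_off_pct"]:
        return "cut off electricity"
    return "no action"

print(methane_action(1.5))  # prints "cut off electricity"
```

The table also makes plain why range and alarm settings differ gas by gas: each is pinned to the hazard (flammability or toxicity) that the regulations address.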
While this is also broadly true of the complementary range of fixed instruments which continuously monitor conditions at particular sites, there are instances where their purpose may differ in some way. An important example of this is carbon monoxide, where fixed monitors are looking for small changes in ambient carbon monoxide levels, and therefore 1 p.p.m. resolution and a 0-50 p.p.m. range will usually be adequate, but portable carbon monoxide monitors are often used to pinpoint the source of carbon monoxide during an incident and therefore a range of 0-500 p.p.m. or even higher may be needed. Response times (to 90% of step change) should be as short as possible, but up to 10 s is currently acceptable for methane and oxygen deficiency, and longer times for other gases, e.g., 30-40 s for carbon monoxide. Ideally, a gas monitor should be specific to the gas being measured, but this is rarely the case. Any particular problem in this respect will be noted when the instruments are described below.

Current Instruments

For methane detection, the flame safety lamp is still legally required to be carried by colliery officials. The gas concentration is estimated from the size and shape of the flame, and so the accuracy of measurement depends on the skill and experience of the operator and cannot be expected to be better than ±0.5% of methane. The range of the most common flame lamp is approximately 1-2% of methane. More accurate portable methanometers are based on the pellistor, a catalytic sensor invented at SMRE over 25 years ago1 but subject to considerable development since then.2 The sensing element is a small alumina bead about 1 mm in diameter, enclosing a platinum heater coil and coated with a palladium catalyst. The device is heated to 500-550 °C and any methane (or any other combustible gas) present is catalytically burnt on the surface. The heat produced raises the temperature of the bead, and hence that of the heater coil, the change in the resistance of which is sensed by a Wheatstone bridge arrangement. In order to minimise power consumption, the pellistor used in mine instruments is a smaller version of that used in surface industry.

One reason for the continued use of the flame lamp is that it will also detect oxygen deficiency; the flame is extinguished below 14-17% of oxygen. However, more accurate hand-held oxygen deficiency monitors are now available incorporating electrochemical cells. Early versions had cells with non-porous membrane diffusion barriers, which presented problems due to their high temperature coefficient. More recently, an oxygen cell has been developed by City Technology Ltd. under an NCB contract.3 The latest version has a fuel cell type cathode (sensing), a porous lead anode (counter) and an alkaline electrolyte, but the novel feature of the design is the use of a gaseous diffusion barrier, which produces a greatly reduced temperature coefficient. The sensors are made using standard battery technology in an "RR" size can.

On-the-spot measurement of the remaining gases has for many years been performed using chemical stain tubes, plus the occasional use of canaries as high level carbon monoxide detectors. The accuracy of the colorimetric method is typically ±25%. While this may be acceptable for those gases which are measured infrequently, something better has been required for carbon monoxide, particularly in mines prone to outbreaks of spontaneous combustion. In recent years, several electrochemical cells have been developed for carbon monoxide.
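Before turning to the individual carbon monoxide cells, the pellistor bridge measurement described above can be sketched numerically. This is purely illustrative: the supply voltage, bridge resistances and sensitivity figure are hypothetical values chosen only to show the arithmetic; a real methanometer is calibrated against certified test gas.

```python
# Illustrative sketch of a pellistor Wheatstone bridge readout.
# The catalytic (active) bead and a non-catalytic compensator bead sit in
# adjacent bridge arms; combustion raises the active bead's resistance,
# unbalancing the bridge. All numeric constants here are hypothetical.

def bridge_output(r_active, r_compensator, r_fixed=100.0, v_supply=2.5):
    """Out-of-balance voltage between the two bridge midpoints (volts)."""
    v_active = v_supply * r_active / (r_active + r_fixed)
    v_comp = v_supply * r_compensator / (r_compensator + r_fixed)
    return v_active - v_comp

def methane_percent(v_out, sensitivity_v_per_pct=0.020):
    """Convert bridge voltage to %CH4 via a single-point calibration factor."""
    return v_out / sensitivity_v_per_pct

# Combustion on the active bead has raised its resistance slightly:
v = bridge_output(r_active=101.0, r_compensator=100.0)
print(f"{methane_percent(v):.2f}% CH4")  # prints "0.31% CH4"
```

The compensator bead cancels changes common to both arms (ambient temperature, humidity), which is why the bridge, rather than a single bead, is used.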
One such cell, using precious metal electrodes and an acid electrolyte, is being developed by City Technology under an NCB contract and, as in the oxygen sensor, this has a gaseous diffusion barrier to minimise the temperature coefficient.4 There are different versions of this carbon monoxide sensor, depending on the range to be measured. Instruments using these cells are currently being evaluated and results so far are encouraging, although there is some concern over their cross-sensitivity to some gases, particularly hydrogen, which can lead to over-reading in certain circumstances. Work on these cells is therefore continuing.

Future Developments

For methane, the pellistor is generally regarded as adequate, and the use of activated carbon cloth offers sufficient protection against poisoning, so recently developed poison-resistant varieties are not being considered at present. Although a lower power sensor would be useful, such a development is not being accorded high priority. Electrochemical sensors appear to be satisfactory for oxygen and carbon monoxide and, in view of the increasing use of free-steered diesel vehicles in mines, an electrochemical sensor for NOx is in the early stages of development at City Technology Ltd. for the NCB. As regards other gases, there is no immediate need for work on new sensors for general NCB use, with the possible exception of hydrogen, because of its effects on other sensors. Hence, an electrochemical hydrogen sensor is being developed by City Technology Ltd. Hydrogen is also a possible indicator for automatically distinguishing between different sources of combustion products. However, there is some interest in on-site mine air analysis. During underground fires and incidents of spontaneous combustion, it would be useful to ascertain the gases present in order to establish which, in addition to hydrogen, might be causing electrochemical carbon monoxide sensors to over-read, which gases cause experimental semiconductor fire detectors to respond (semiconductor sensors have not been mentioned until now because they do not yet feature in any portable instruments) and which might be better indicators of particular forms of combustion. Thoughts at the moment turn towards a portable gas chromatograph with, perhaps, a semiconductor sensor, but little work has been done on this so far.

In conclusion, the last decade or so has seen major advances in gas detection instrumentation in coal mines, and it seems likely that within the next decade instruments will be available to colliery personnel which will satisfy virtually all likely requirements in this field. The author wishes to thank the Director of the Mining Research and Development Establishment for permission to give this paper. The opinions expressed in the paper are his own and not necessarily those of the National Coal Board.

References
1. Br. Pat., No. 892,530.
2. Br. Pat., Nos. 1,516,039, 1,549,640, 2,034,893, 2,078,378 (for example).
3. Br. Pat., Nos. 1,571,282 and 2,049,952.
4. Br. Pat., No. 2,094,005.

Chemical Analysis on Board Chemical Tankers

David R. Owen, 24 Downsview Crescent, Uckfield, Sussex

Firstly, what is a Chemical Tanker? The answer to that question is that it is the best piece of equipment that technology has designed for the safe and efficient transport of bulk liquid chemicals. "Liquid" means that they are liquid at normal pressure and temperature. In designing the Chemical Tanker, consideration has been taken of the following factors.

Survival. Parameters have been laid down from accident analysis of historical records as to the extent of damage from collisions and groundings, and these criteria have been built into the vessel.
When interpreted, they relate to the size of compartments, the angle of list, and trim by the head or stern.

Containment of cargo. This is related to damage resulting from collision or grounding and minor side damage resulting from the ship landing heavily alongside a dock wall. The requirements are linked to the hazard presented by each chemical if it were released into the marine environment. Three degrees of physical protection are employed. The highest standard of protection, type 1, is required for substances considered to have the greatest environmental hazard, with reduced standards, types 2 and 3, for substances of progressively lesser hazard. Examples of these substances are: type 1, chlorosulphonic acid and phosphorus; type 2, acetone cyanohydrin and ethylene dichloride; and type 3, carbon tetrachloride and furfural. Since the inception of these design criteria, in 1972, the International Conference on Marine Pollution (1973) and the International Conference on Tanker Safety and Pollution Prevention (1978) have taken place, which may, in the near future, result in an upgrading in the categorisation of the types of substances and therefore the protection required.

Control of Vapours. So that there is not a continuous release of cargo vapour around the decks of the ship, each tank is fitted with a pressure-vacuum relief valve. The outlet of the pressure side of the valve is led to a riser, which has to be a minimum height above the deck and so arranged that the vapour exits upwards in an unimpeded jet. In instances where high toxicity cargoes are being handled and the terminal has the facility, vapour return or vent stacks can be utilised to prevent accumulation of vapour during loading or discharging.

Gauging (Measurement of Cargo). Whilst an ullage port may be a suitable means of measuring a non-toxic or low toxicity cargo, the same cannot be said of the moderate or high toxicity cargoes. The types of gauging are related to these hazards as follows: open gauging, e.g., sodium hydroxide; restricted gauging, e.g., furfural; and closed gauging, e.g., acrylonitrile.

Vapour Detection. This is related to the hazard presented by the product, either flammability or toxicity, or both, and the ship is required to carry the applicable instruments and equipment.

Cargo Tanks. A 25 000 ton deadweight vessel can have approximately 42 cargo tanks of differing size, construction and linings. Sizes range from 200 m3 to 2000 m3. Most of the construction in terms of the strength of the vessel, i.e., longitudinals and frames, is kept out of the tanks to assist draining and tank cleaning. Some tanks may be built of stainless-steel plates or stainless-steel cladding over mild steel. Other tanks are shot blasted and epoxy, polyurethane, phenolic or zinc silicate coatings applied over the blasted plate.

Epoxy, polyurethane and phenolics. These are usually three or four coat systems with a dry film thickness of 250-300 µm; the pH range is 6.0-9.5. The finished coating is a smooth, shiny surface and lends itself to easy cleaning. It is ideal for vegetable, animal and lubricating oils.

Zinc silicate. This is a one-coat system. The dry film thickness is 75-100 µm and the pH range is 5.0-9.0. It provides a rough surface and is suitable for high grade water-white solvents; it is also a sacrificial coating and is damaged by chlorides.

Stainless steel. Resistance to chemicals is achieved by maintenance of an oxide film on the steel. Some aggressive chemicals, e.g., phosphoric acid, break down the oxide film, and after discharge stainless steel must be re-passivated using a dilute solution of nitric acid. It has a smooth surface and is easy to clean, but is damaged by chlorides.
With 42 cargo tanks, theoretically the ship is able to carry 42 different products, which could lead to a dangerous situation in terms of an accidental admixture of reactive chemicals. In consideration of this, reactive chemicals must be segregated in the ship as follows. Tanks containing reactive cargoes must be separated from one another by a void space, cofferdam, pump room, an empty tank or a tank containing a mutually compatible cargo, and must also have separate piping systems and separate venting systems. This then, briefly, is a chemical tanker. Where does the chemical analysis come in? It plays a part in two ways: determining tank cleanliness; and safety.

Tank Cleanliness. The initial capital investment in a chemical tanker is $68 million for a 30 000 ton deadweight ship. In order to recover the cost of the ship and make a profit, the ship has to be versatile and carry almost any liquid. This means that from voyage to voyage the same tank may contain a multitude of products (with consideration to coatings and reactivity), from vegetable, animal or mineral oils to high-grade solvents. Therefore, in between voyages the tank has to be cleaned.

Cleaning. Whilst the process of cleaning is a science (it has well founded principles), involving pressure to disrupt the surface forces of liquid films, heat to vaporise or melt products and chemical additives to emulsify or saponify, it is not, however, an exact science. Water pressure may not always be that required, the temperature may not be just right and the chemical additives may not have been added in the exact amounts. After cleaning, therefore, the tank must be tested for entry (safety of personnel) and the surface of the tank tested for cleanliness (quality control). An example of this is a tank having contained an aromatic hydrocarbon such as benzene, xylene or toluene, which is required on the next voyage for methanol. The cleanliness specification for methanol is less than 1 p.p.m. for chlorides and less than 2 p.p.m. for hydrocarbons. A typical procedure for cleaning would be as follows. Firstly, ventilate until gas free; secondly, hot salt water wash (mechanical tank cleaning machines) for 2.5 h; thirdly, ventilate to gas free again; fourthly, fresh water wash and distilled water wash; fifthly, educt and mop. Between steps 4 and 5 it is necessary for someone to enter the tank to test the surface for chlorides and hydrocarbons, but first of all the atmosphere in the tank must be tested to ensure that it is safe for entry. These tests include flammability, oxygen deficiency and toxicity. It is, however, toxicity that we shall deal with here.

Atmosphere Testing. By far the most suitable equipment for shipboard use for determining the atmosphere is the Draeger pump together with the applicable gas detection tube for the vapour to be measured, in this instance benzene. The operation of the equipment is to break off the ends of the tube, insert the tube into the pump and take the required number of strokes with the pump. The indicating layer is discoloured, and the length of the discoloration can be read off in parts per million. The tube is made up as follows: a pale grey pre-cleanse layer and a reagent layer (formaldehyde and sulphuric acid). A positive reading gives a colour change on the indicating layer to brown. The reaction is:

2 C6H6 + HCHO → (C6H5)2CH2 + H2O
(benzene + formaldehyde → diphenylmethane)

(C6H5)2CH2 + 2 H2SO4 → p-quinoid compound (light brown colour) + 3 H2O + 2 SO2

Surface Testing. Having ascertained that the tank is safe for entry, the Chief Officer enters the tank to test a number of places in the tank for chlorides and hydrocarbons using "bucket chemistry." This is the second piece of analytical chemistry.

Hydrocarbon Test. With the Officer wearing clean plastic gloves, 1 m2 of the tank surface is washed with cotton wool and hydrocarbon-free methanol.
The methanol is then squeezed from the cotton wool into a Nessler tube until there is 15 cm3; 45 cm3 of distilled water are then added. The mixture is shaken and allowed to stand for 20 min. The contents are then compared with a similar Nessler tube filled with 60 cm3 of distilled water. If the sample tube shows a cloudy or not completely clear liquid, there are still hydrocarbons on the tank wall. In this instance further cleaning is required, using a hydrocarbon remover (weak alkaline solution), before another test is made. The principle on which this test is based is that the hydrocarbons are soluble in methanol but not in water, so any hydrocarbon contamination shows up as a cloudiness. This visual test can only detect hydrocarbons down to 5 p.p.m. in the final solution.

Chloride Test

Again with the Officer wearing clean plastic gloves, 1 m2 of the tank wall is washed using cotton wool and distilled water. The solution is transferred to a Nessler tube via a funnel and filter-paper. The tube is next topped up with distilled water to 100 cm3, then 5 drops of 1% silver nitrate solution are added and the contents shaken thoroughly. The mixture is compared with another Nessler tube filled with 99 cm3 of distilled water, 1 cm3 of standard 1000 p.p.m. chloride solution and 5 drops of silver nitrate solution. If the turbidity of the test solution is greater than that of the standard solution, then the chloride content is higher than 10 p.p.m. The reaction is as follows:

Ag+ + Cl− → AgCl

The limit of detection by this method is about 10 p.p.m. of chloride. Both the hydrocarbon and chloride tests give levels of detection well above the cleanliness specifications of less than 5 p.p.m. and less than 1 p.p.m., respectively. In practice it is a relatively easy matter to meet the hydrocarbon requirements. The chloride specification is, however, a different matter.
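The 10 p.p.m. figure for the comparison standard above follows from simple dilution arithmetic: 1 cm3 of 1000 p.p.m. chloride made up to 100 cm3 gives 10 p.p.m. A minimal sketch (function and variable names are illustrative, not from the original):

```python
def diluted_ppm(stock_ppm, stock_cm3, total_cm3):
    """Concentration after making a stock aliquot up to a total volume
    (ideal mixing; names here are illustrative, not from the paper)."""
    return stock_ppm * stock_cm3 / total_cm3

# 1 cm3 of 1000 p.p.m. chloride standard made up to 100 cm3 in the Nessler tube:
standard = diluted_ppm(1000.0, 1.0, 100.0)  # 10.0 p.p.m.
```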
In the Mississippi river during summer the moisture-laden atmosphere contains high levels of chloride, and any venting of the tank increases the chloride level. It is normal practice in the trade to drench the bulkheads with hydrocarbon- and chloride-free methanol prior to commencement of loading and then educt the residues. Loading is then commenced until there is approximately 0.3 m in the tank. Samples are then taken and analysed at a laboratory. If the sample fails, the ship has to clean tanks again. The ship may be allowed to stay alongside and clean or, if the berth is required for another vessel, be sent to sea to clean. The running cost for a vessel of this size is approximately $6000 per day excluding loss of earnings, and port, tug and pilot costs are high in any port in the world. If, when the foot samples are analysed, the chloride and hydrocarbon levels are found to be borderline, some operators take a gamble and take a further amount of methanol, working on the dilution principle: the first foot covers the total area of the tank bottom and 0.3 m × the perimeter of the tank, whereas the further amount covers only another 0.3 m × the perimeter of the tank. It is, however, a gamble, and does not always work out, in which case the operator takes responsibility for the contaminated methanol, on some occasions as much as 180 tons, and still has to re-clean his tanks. This "bucket chemistry" seems a very crude method of determining cleanliness, especially when such high costs are involved.
In the worst instance these costs could be:

Additional cleaning (fuel)       $2 400
180 tons of methanol             $201 600
Port costs (additional moves)    $7 000
Loss of hire (24 h)              $12 000
Total                            $223 000

The Challenge

There is an obvious dichotomy: on the one hand, a rather sophisticated, well engineered, vastly expensive ship operating under stringent financial and operational conditions; on the other, analytical techniques to support the operations which are crude, simplistic and open to misinterpretation by the necessarily unskilled operators who have to carry out the tests. The latter problem originates from the need to keep manning levels down and to have on board only essential marine personnel, who are not trained chemists. The marine operator therefore requires techniques and equipment to carry out these really quite difficult measurements of contamination and toxicity in a simple way, producing true and unequivocal answers to his analytical questions. In view of the wide range of cargoes and operating conditions experienced on ships, this is a very demanding request, and one which will be difficult to satisfy within the constraints of commercial operations.

Continuous Monitoring of Rivers and Estuaries

M. H. I. Comber and P. J. D. Nicholson
Brixham Laboratory, ICI plc, Freshwater Quarry, Brixham, Devon

With increasing legislation aimed at controlling the discharge of industrial effluents during the 1950s and 1960s, the Brixham laboratory became involved in investigating the behaviour of chemicals, and their interactions with biota, in rivers and estuaries. The Control of Pollution Act, 1974, and the recent specification of Environmental Quality Objectives by the EEC, have increased the need for knowledge concerning the environmental distribution and behaviour of chemicals. The usual methods for monitoring rivers and estuaries depend on taking spot or composite samples, followed by laboratory analyses.
These methods are fraught with difficulties due to stability and contamination problems, and additionally suffer from the difficulty of trying to extrapolate continuous trends from discontinuous data. In order to overcome such problems, a portable continuous monitor based on ion-selective electrodes (ISEs) has been developed. The system to be described was initially a simple monitor with a pH electrode. In response to specific requests, the monitor has been rebuilt at different times, as cyanide and ammonia ISEs have been added, followed by a temperature probe, a sulphide ISE and, recently, dissolved oxygen and salinity probes. The present system, Fig. 1, is thus capable of continuously monitoring seven parameters. The sample intake can either be mounted on a metal pole fixed to the bow of a vessel, approximately one metre below the surface, for horizontal profiling of a river course, or lowered through the water column for depth profiling. The dissolved oxygen and salinity probes are mounted alongside the sample intake, and are calibrated in the laboratory. The water is pumped up on to the boat at a rate of approximately 500 gallons h-1, using an on-board pump, P2 (Fig. 1). The power for this pump and the rest of the equipment is derived from the vessel's accumulators, either directly or via an electronic inverter providing 240 V at 50 Hz.

Fig. 1. Continuous monitor flow diagram. Electrodes are: 1, cyanide; 2, ammonia; 3, sulphide; 4, pH; 5, reference; 6, salinity; 7, oxygen; 8, temperature. V = valve; P = pump; b = sample flow; D = waste flow.

The sample is then transferred, via the jacket surrounding the electrode box, to an interceptor which allows a constant flow to be drawn from it via the Watson-Marlow 501 peristaltic pump, P1 (Fig. 1). The interceptor additionally separates the sample from entrained gases. By first pumping the sample through this jacket, the electrodes are held at the same temperature as the incoming water.
In practice, this has led to stable readings and has eliminated the need for recalibration due to temperature changes (see Fig. 2, channel 7). From the interceptor the sample is withdrawn and conditioned before being presented to the ISEs in the electrode housing box. This sample conditioning involves the addition of an EDTA - ascorbic acid reagent followed by sodium hydroxide, raising the sample stream to a pH > 12. The ascorbic acid is added as an antioxidant, being preferentially oxidised before any sulphide ion that may be present. The high pH ensures the optimum pH for the performance of the ammonia, sulphide and cyanide ISEs. Finally, the EDTA, added prior to the sodium hydroxide, helps prevent the precipitation of magnesium and calcium salts at high pH.

The pH electrode, although mounted with the other electrodes in the electrode housing block, has a separate supply from the interceptor of approximately 150-200 ml min-1. This electrode is calibrated on site with two buffer solutions. The voltage outputs from the probes are treated by high-impedance followers with unity gain, the amplifiers being earthed to minimise "spike" measurements and other interferences. After being buffered, the signals are stored on a data logger, which can also display the signals being stored. These signals can subsequently be plotted or analysed by a laboratory-based computer (Digital Systems PDP 11/34). As part of the initial evaluation, an exercise was undertaken in the lower part of the Dart estuary. The apparatus was assembled on a 30-foot stern trawler, and was then calibrated during the journey from Brixham Harbour to the Dart. The calibration procedure uses high-concentration stabilised standards, subsequently diluted with water obtained during the journey to the start of the exercise. On the printouts, Fig. 2, these areas are marked A and B, corresponding to the two standards used.
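The two-standard calibration described above (marks A and B on the printouts) can be sketched as a two-point fit of the usual logarithmic (Nernstian) ISE response. The function names, potentials and concentrations below are illustrative assumptions, not values from the paper:

```python
import math

def two_point_calibration(e1_mv, c1, e2_mv, c2):
    """Fit E = intercept + slope*log10(c) through two calibration standards."""
    slope = (e2_mv - e1_mv) / (math.log10(c2) - math.log10(c1))
    intercept = e1_mv - slope * math.log10(c1)
    return slope, intercept

def concentration_from_mv(e_mv, slope, intercept):
    """Invert the calibrated response to turn an electrode potential (mV)
    into a concentration."""
    return 10 ** ((e_mv - intercept) / slope)

# Hypothetical standards A and B for a monovalent ion; an ideal Nernstian
# electrode at 25 C gives ~59.2 mV per decade of concentration.
slope, intercept = two_point_calibration(-136.8, 1e-4, -18.4, 1e-2)
```

A reading between the two standards is then converted with `concentration_from_mv`; in practice the logged millivolt channels would be post-processed this way on the shore-based computer.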
Fig. 2. Continuous monitor traces (channels: CH1, salinity; CH2, sulphide; CH3, cyanide; CH5, pH; CH6, dissolved oxygen; CH7, temperature; abscissa, time, BST).

The traces obtained, Fig. 2, are complex and require a considerable amount of interpretation. However, some preliminary analysis of the data has demonstrated very good agreement with laboratory analysis of samples taken during the run (Table I). This portable continuous monitor is able to produce data that can be used to profile a river chemically, and so could generate a large data base on these seven parameters for the rivers and estuaries which are of interest to the Laboratory. The monitor is, however, being developed further in order to extend its usefulness. The extra channels to be added are: firstly, trace metals by anodic stripping voltammetry and medium exchange1; secondly, nitrate and nitrite analysis, utilising recent improvements to flow injection analysis techniques2,3; and thirdly, an attempt to start on-line investigation of the organics present, using a fast-scanning monochromator. These improvements will be carried out in conjunction with improvements to the monitor's design, including changes to the sample intake and the sample acquisition pump to remove metal surfaces.

TABLE I
COMPARISON OF RESULTS: pH AND SALINITY

                  pH                      Salinity, parts per thousand
Sample No.   Monitor   Laboratory         Monitor   Laboratory
 1           8.2       8.1                34.8      34.8
 2           8.2       8.1                34.7      34.7
 3           8.1       8.0                34.8      34.8
 4           8.1       8.0                34.8      34.8
 5           8.1       8.2                34.8      34.9
 6           8.1       8.2                34.2      34.3
 7           8.1       8.2                34.1      34.2
 8           8.1       8.2                34.0      34.1
 9           8.1       8.2                34.0      34.3
10           8.1       8.2                33.9      34.2
11           8.1       8.1                33.7      34.2
12           8.0       8.0                33.6      33.6
13           7.9       8.1                33.7      33.5
14           8.0       8.0                33.6      33.6
15           8.0       8.0                33.6      33.6

Finally, the authors would like to acknowledge the work of Dr. F. J. Whitby and Mr. D. R. Cottrell, both of whom were original designers of the system, together with one of the authors, and were responsible for the progress of the system up to 1982.

References
1. Wang, J., and Greene, B., Wat. Res., 1983, 17, 635.
2. Fogg, A. G., and Bsebsu, N. K., Analyst, 1984, 109, 19.
3. Fogg, A. G., Chamsi, A. Y., and Abdalla, M. A., Analyst, 1983, 108, 464.
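The agreement reported in Table I can be quantified with a quick script; the data below are transcribed from the table, and the mean absolute monitor - laboratory difference is computed for each parameter:

```python
# pH and salinity values transcribed from Table I (monitor vs. laboratory).
ph_monitor  = [8.2, 8.2, 8.1, 8.1, 8.1, 8.1, 8.1, 8.1, 8.1, 8.1, 8.1, 8.0, 7.9, 8.0, 8.0]
ph_lab      = [8.1, 8.1, 8.0, 8.0, 8.2, 8.2, 8.2, 8.2, 8.2, 8.2, 8.1, 8.0, 8.1, 8.0, 8.0]
sal_monitor = [34.8, 34.7, 34.8, 34.8, 34.8, 34.2, 34.1, 34.0, 34.0, 33.9, 33.7, 33.6, 33.7, 33.6, 33.6]
sal_lab     = [34.8, 34.7, 34.8, 34.8, 34.9, 34.3, 34.2, 34.1, 34.3, 34.2, 34.2, 33.6, 33.5, 33.6, 33.6]

def mean_abs_diff(a, b):
    """Mean absolute difference between paired readings."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

ph_diff = mean_abs_diff(ph_monitor, ph_lab)     # ~0.08 pH unit
sal_diff = mean_abs_diff(sal_monitor, sal_lab)  # ~0.11 parts per thousand
```

Both figures are small compared with the spans of the traces, which is the sense in which the text claims "very good agreement".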
ISSN:0144-557X
DOI:10.1039/AP9842100469
Publisher: RSC
Year: 1984
Data source: RSC
|
6. |
New methods for the quality control of Foods and Natural Products |
|
Analytical Proceedings,
Volume 21,
Issue 12,
1984,
Page 477-482
S. Ramm,
Preview
|
PDF (1814KB)
|
|
Abstract:
December, 1984 QUALITY CONTROL OF FOODS AND NATURAL PRODUCTS 477

New Methods for the Quality Control of Foods and Natural Products

The following are summaries of ten of the papers presented at a Joint Meeting of the East Anglia Region, the Automatic Methods Group and the Eastern Branch of the Institute of Food Science and Technology held on April 2nd and 3rd, 1984, at New Hall, Cambridge.

The Use of Mid-infrared Spectroscopy in Dairy Products

S. Ramm
Multispec Ltd., Wheldrake, York, YO4 6NA

For a long time, the quantitative analysis of milk for butterfat and protein content has been carried out for producer payment schemes and for the purposes of herd improvement services. Traditionally, these analyses were carried out by chemical methods, which were fairly lengthy and costly in manpower and reagents. In the early 1960s there was a large increase in the demand for milk testing, which led instrument designers to begin searching for more rapid techniques of analysis. It became obvious that the chemical methods did not readily lend themselves to automation, and of the several purely physical methods investigated, infrared absorption spectroscopy has proved the most successful.

Technical Difficulties

These arise mainly from two sources: the nature of the sample and the lack of convenient detectors at the probe wavelengths. Two problems associated with milk are that the water absorbs more than 80% of the incident radiation, and that there is a large amount of light scattering from the larger fat globules. The second problem is overcome by heating to soften the fat and then homogenising to break the globules into smaller fragments that do not cause this scattering effect. For the NIR and the visible region, silicon and germanium photon detectors are readily available, but this is not so in the mid-infrared.
Nowadays, pyroelectric detectors are available which have the advantage of good sensitivity from 2 to several hundred µm, depending on the window material used. The disadvantages are a slow response (0.01-0.1 s), and that only changes in radiation are detected. Therefore, the instrument oscillates between the probe beam and a stable reference, at a frequency of 10-100 Hz. The first milk analysers employed a reference cell filled with water, identical to the one containing the milk sample. This solved two problems: a stable reference was obtained, and the absorption due to water was almost removed from the measured signal. However, it was difficult to make identical cells and maintain them at identical temperatures. The advent of narrow-band multi-layer dielectric interference filters enabled a further technique, double-beam-in-wavelength, to be explored, and it is this technique that is used so successfully nowadays.

A Double-beam-in-wavelength System to Analyse Milk

In a double-beam-in-wavelength instrument there is one sample cell, through which two beams of differing wavelength are passed alternately. The wavelengths are chosen so that water absorbs the radiation equally at both wavelengths and the component of interest absorbs strongly at one (the sample wavelength) but not at the other (the reference wavelength). In order to measure the signal, the reference beam is attenuated by means of a linear comb until the sample and reference beams are equal. The position of this comb is then proportional to the ratio of the transmission in the sample beam to the transmission in the reference beam. Assuming that there is some offset on the reading of the attenuator position, representing the position of zero transmission, this reading can be converted to an absorptivity, proportional to the concentration of the component of interest, by using Beer's law (where Tc is the position of the comb):
A = −log10 [K(Tc + c)]

This equation is usually re-ordered as:

A = k − log10 (Tc + c)

The constant c is known as the linearity correction offset, because if it is not applied correctly the result, A, is not linear with increasing concentration; it is most easily found by calibration with a set of linear solutions. The offset k is then applied to make the result zero in the absence of the component of interest. When measuring any one component, for instance fat, the presence of other components, e.g., protein and lactose, causes interference, mainly due to the water displacement effect, and the fat data require correction. This is achieved by combining with the result for fat a proportion of the results for protein and lactose. The same argument applies to all components, and the resulting set of equations is known collectively as the milk equation. The use of computers has greatly eased the problems of calibrating the instrument from a large number of chemically analysed milk samples, as well as that of establishing the value of the linearity correction offset. In applying the technique to dairy products other than milk, it was found that more complex products, e.g., ice-cream (with added vegetable fats and other sources of sugars), required their own individual calibrations for optimum results. Thus, the milk equation becomes a generalised set of first-order polynomials applied to the initial results of the instrument, the coefficients of which are found by using multiple regression techniques on a large number of calibration samples of the product of interest. Using computers linked to the instrument provides a means whereby users can apply calibrations for a wide variety of products, chosen either from a library of standard calibrations stored within the computer or from calibrations which they have developed themselves. Another important use of the computer is in tying together several pieces of equipment.
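The absorptivity conversion and the cross-component ("milk equation") correction described above can be sketched as follows. All numerical coefficients are invented for illustration; a real instrument's coefficients come from multiple regression on chemically analysed calibration samples:

```python
import math

def absorptivity(t_c, k, c):
    """A = k - log10(T_c + c), where T_c is the comb position, c the
    linearity correction offset and k the zero offset."""
    return k - math.log10(t_c + c)

def milk_equation(raw, coefficients):
    """Apply a first-order cross-component correction to raw channel
    results. `raw` maps channel -> raw result; `coefficients` maps each
    reported component to per-channel weights plus a constant term.
    All coefficient values used with this are hypothetical."""
    return {
        component: weights.get("const", 0.0)
        + sum(weights[channel] * value for channel, value in raw.items())
        for component, weights in coefficients.items()
    }

# Illustrative use: correct a raw fat reading for protein and lactose
# (water displacement interference), with made-up weights.
raw = {"fat": 3.9, "protein": 3.3, "lactose": 4.6}
coeffs = {"fat": {"fat": 1.0, "protein": -0.02, "lactose": -0.01, "const": 0.05}}
corrected = milk_equation(raw, coeffs)  # corrected["fat"] ~ 3.838
```

Fitting the weights themselves is an ordinary least-squares problem, which is why the text notes that computers greatly eased calibration.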
For example, in order to analyse products with a very high fat content successfully, they must first be diluted. This must, of course, be carried out accurately, and a computer linked to both the milk analyser and a balance provides a convenient means: the computer records the exact amount of diluent added, and then corrects the result of the analysis accordingly.

Computation Within the Instrument

The obvious next step is to put the computer inside the instrument, to perform a number of functions.

Instrument control

The use of a computer, or a distributed microprocessor system, to control the actual working of the instrument provides a very flexible and resilient system. Many features of the instrument operation and measurement system can be changed dynamically, by simple user interaction with the front panel. Additionally, extensive fault monitoring and diagnostic tests are possible, as is automatic re-testing of a sample if there is some reason to suspect a bad measurement. A degree of fault tolerance may be incorporated, with the instrument working at reduced speed or quoted accuracy in the event of a minor failure such as temperature instability. This is very important in a control laboratory, where an instrument working at one tenth of the speed until it can be repaired is better than none at all, or than one with an unnoticed fault giving wrong results.

Data capture

Conventionally, offsets, logarithms and slopes are applied electronically to the measured signal. Drift, instability and component tolerances degrade the signal until it is finally converted to digital form via an ADC. By converting before the signal is processed at all, these pitfalls are avoided.
It is necessary, however, to convert to a greater precision if the conversion is done before the logarithm is taken, in order to maintain the precision of the result over the entire scale. Nowadays this can easily be carried out, and the measurement further enhanced, by using digital signal processing techniques. The signal may then have the offset and logarithm applied digitally, without danger of drift or instability.

Data processing and storage

Now that the data are available, a calibration from a library stored within the instrument can be selected and applied to the data as they are received. The user can even compare results using different calibrations at the touch of a button. There is no reason to stop there. The instrument could be made to calculate its own linearity correction offset, and then to calibrate itself when presented with a number of samples and the results of chemical analyses entered via a keyboard. Data might be stored automatically on a diskette for dispatch, or the instrument linked directly to the milk records. Calibrations need not be stored within an instrument; a number of instruments could share a common data base. Automatic re-testing can be carried out if, knowing the source of a sample, the results are found to be uncharacteristic compared with previous records.

Interaction with other equipment

Now that the computer is part of the instrument, it becomes much easier to arrange an interface with other equipment. In this way the instrument can control, or itself be controlled by, many other devices. In addition, it is possible to extend the computational power by adding other processors on a network basis, either locally or remotely by modem. Thus it is possible to transmit new calibrations to an instrument over a telephone line. Such an instrument would be invaluable as part of an automated process line.
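The data-capture point above, that digitising before the logarithm demands a higher-precision converter, can be illustrated numerically: one least-significant bit of transmission error costs far more absorbance at the dark end of the scale. The bit widths and the 1% transmission figure below are illustrative assumptions:

```python
import math

def absorbance_error_per_lsb(transmission, bits):
    """Absorbance change caused by one LSB of a linear ADC (full scale = 1.0)
    that digitises the transmission before the logarithm is taken."""
    lsb = 1.0 / (2 ** bits)
    return -math.log10(transmission) + math.log10(transmission + lsb)

# At 1% transmission (absorbance 2), a 12-bit conversion is roughly 16x
# coarser in absorbance terms than a 16-bit one:
coarse = absorbance_error_per_lsb(0.01, 12)  # ~0.010 absorbance units
fine = absorbance_error_per_lsb(0.01, 16)    # ~0.0007 absorbance units
```

Taking the logarithm digitally after a sufficiently wide conversion keeps this quantisation error below the instrument's other noise sources over the whole scale.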
Conclusion

Mid-infrared spectroscopy has shown itself to be very successful and versatile in the dairy industry, and yet the instrumentation is still in its infancy. As a new generation of computerised instruments begins to emerge, and the areas of application extend, the technique looks towards a long and exciting future. There are still many avenues to explore, at every level of the technique, and it will be many years, if ever, before ideas are exhausted.

Characterisation of Cereal Varieties by Electrophoresis of Endosperm Proteins

D. B. Smith
Chemistry Department, Plant Breeding Institute, Cambridge, CB2 2LQ

The electrophoretic separation of cereal endosperm proteins is now attracting considerable attention from plant breeders, seed registration and testing authorities, the seed trade and seed processors, and researchers interested in cereal quality. Although endosperm proteins provide a source of nitrogen for the germinating embryo, there seems little selection pressure to maintain precise structural conformity in these proteins, hence variation is widespread. These variations are clearly visible as characteristic banding patterns after separation of the endosperm proteins by electrophoresis, and as the proteins present are genetically determined, these banding patterns are characteristic of a variety. Consequently, electrophoretic band patterns are very useful to seed testing and trading organisations for varietal identification, for defining distinctive features of a variety and for assessing the genetic uniformity of a variety. Electrophoresis on its own will not fulfil all these functions, but because band patterns are largely independent of growing environment, the technique is becoming a useful component of the tests used by these organisations.
A second consequence of the observed variation is that it can readily be exploited by research workers for genetic analyses and, perhaps more important, for assessing the effect of particular proteins, or groups of proteins, on processing quality. This paper briefly reviews the applications for wheat and barley, the most important of the cereal species in Europe.

Classes of Proteins

Cereal grain proteins have traditionally been classified on the basis of differential solubility1 into albumins (soluble in water), globulins (soluble in salt solution), prolamins (soluble in alcohol) and glutenins (soluble in dilute acid or alkali). The albumin and globulin fractions differ little among varieties of barley and wheat and are of little value for varietal characterisation, although specific staining for some water-soluble isozymes has proved useful.2 The prolamin fraction, termed gliadin in wheat and hordein in barley, is of greatest value for varietal identification. It is easily extracted, its genetic control is well understood and the polypeptides in this fraction exhibit considerable cultivar variation.3-5 The glutenin fraction of wheat protein also exhibits considerable variation, and subunits within this fraction are associated with breadmaking quality.6 However, the corresponding group of barley proteins, the D hordein group, differs little amongst varieties, although the amount present in barleys has been associated with malting quality.7 As well as using particular fractions, all endosperm proteins, extracted with sodium dodecyl sulphate (SDS) and reduced with mercaptoethanol, have been assessed for varietal characterisation after separation by electrophoresis.8,9 Figs. 1 and 2 show the electrophoretic separations of barley and wheat endosperm proteins, respectively, to illustrate the various classes of proteins.
Electrophoretic Techniques

Several electrophoretic techniques may be employed to separate endosperm proteins. Starch gel electrophoresis of gliadin proteins has been used for over 20 years,10 and in the UK an identification system based on separation by starch gel electrophoresis is used by the National Institute of Agricultural Botany.11

Fig. 1. Separation of total endosperm proteins from single seeds of different genotypes of barley. The major classes of proteins are indicated after separation by SDS - PAGE.

Fig. 2. As Fig. 1, for wheat. The classes indicated are: high Mr glutenin; mainly ω-gliadin; mainly low Mr glutenin; mainly α-, β- and γ-gliadins.

However, separation in polyacrylamide is more commonly used for both wheat and barley proteins, and several techniques involving polyacrylamide gel electrophoresis (PAGE) are available. These include separations in homogeneous or gradient gels, and fractionation may be on the basis of net charge and molecular mass (e.g., lactate - PAGE) or by relative molecular mass only (e.g., SDS - PAGE). Other techniques that may be used are isoelectric focusing and two-dimensional separations, but these are not well suited to routine use because of cost and complexity. Several descriptions of these techniques are available (e.g., references 12 and 13).

Wheat

Bread wheat exhibits considerably more variation in band patterns than barley because (i) major storage protein genes in wheat are located on two groups of chromosomes (groups 1 and 6) and (ii) as wheat is hexaploid, variations amongst the homoeologous chromosome groups 1 and 6 occur, allowing over-all variation on six chromosomes in each variety.
As a consequence, almost all wheat varieties exhibit their own characteristic band pattern. Several methods have been proposed for exploiting this for varietal identification, and perhaps the most widely used is the separation of the gliadin fraction using lactate - PAGE.12 A coding system has been devised to describe the band patterns of gliadin proteins following separation in this way, and registration authorities in some countries include this code in descriptions of new varieties.12 In recent years, research workers have established that considerable variation exists in the glutenin groups of wheat endosperm proteins. Of particular interest is that variations in the high relative molecular mass (Mr) class of glutenin (Fig. 2) are related to differences in breadmaking quality. Consequently, work has concentrated on establishing the relative contributions of individual glutenin subunits to quality. Exploitation of aneuploid and isogenic lines of wheat has been particularly useful for these genetic analyses. SDS - PAGE is used to recognise the glutenin subunit combinations in this work, and when desirable combinations are identified, the technique is used by wheat breeders to select for these combinations.6

Barley

The prolamin fraction of barley endosperm protein can be separated into three distinct groups of subunits by SDS - PAGE (Fig. 1). Each of these groups, termed hordein B, C and D, is controlled by a multigenic locus, and these three loci (hor 2, 1 and 3, respectively) have been mapped on chromosome 5.5 Although very little variation exists at hor 3, about eight variants of the C group and 24 variants of the B group have been shown to exist by electrophoresis.13 Further variation may be resolved by using total protein extracts (Fig.
2) or by two-dimensional separation.14,15 Although this considerable variation exists, and electrophoresis is of value for recognising or characterising varieties of barley, it is less discriminating than gliadin fractionation is for wheat. Two additional limitations exist for barley: firstly, the concentration of the hordein genes on one chromosome makes electrophoresis less useful for testing genetic uniformity; and secondly, several barley cultivars now in extensive use exhibit more than one banding pattern, showing them to be mixtures of more than one genotype. This situation has arisen because no assessment of uniformity in this character has been made in the past. However, as endosperm protein band patterns are likely to be utilised to help to identify varieties in the future, some breeders are now taking steps to ensure that their new varieties have a single, stable band pattern. Attempts have been made to link hordein band patterns with malting quality, but no definite relationships have been established.8,16,17 However, there is some evidence that hordein D may be associated with malting quality when the total protein concentration exceeds about 9.5%.7

In summary, the use of electrophoresis for the separation of endosperm proteins is proving to be a useful tool in identifying and assessing the genetic uniformity of cereal varieties, and the technique is especially useful for wheat, as all varieties exhibit distinctive band patterns. Electrophoresis is also a powerful tool for studying the genetics of storage proteins and relating these to processing quality. Again, this has been especially useful with wheat, where the technique is now used to screen breeding material for the desirable combinations of glutenin subunits which relate to good baking quality.

References
1. Osborne, T. B., J. Am. Chem. Soc., 1895, 17, 539.
2. Almgard, G., and Clapham, D., Swed. J. Agric. Res., 1977, 7, 137.
3. Payne, P. I., Holt, L. M., Lawrence, G. J., and Law, C. N., Qual. Plant. Plant Foods Hum. Nutr., 1982, 31, 229.
4. Wrigley, C. W., and Shepherd, K. W., Ann. N.Y. Acad. Sci., 1973, 209, 154.
5. Shewry, P. R., and Miflin, B. J., Qual. Plant. Plant Foods Hum. Nutr., 1982, 31, 251.
6. Payne, P. I., Corfield, K. G., and Blackman, J. A., Theor. Appl. Genet., 1979, 55, 153.
7. Smith, D. B., and Lister, P. R., J. Cereal Sci., 1983, 1, 229.
8. Smith, D. B., and Simpson, P. A., J. Cereal Sci., 1983, 1, 185.
9. Montembault, A., Autran, J. C., and Joudrier, P., J. Inst. Brew., 1983, 89, 299.
10. Bourdet, A., Feiller, P., and Mettavant, F., C.R. Acad. Sci., 1963, 256, 4517.
11. Clydesdale, A., Draper, S. R., and Craig, E. A., J. Natl. Inst. Agric. Bot., 1980, 16, 61.
12. Wrigley, C. W., Autran, J. C., and Bushuk, W., Adv. Cereal Sci. Technol., 1982, 5, 211.
13. Shewry, P. R., Ellis, J. R. S., Pratt, H. M., and Miflin, B. J., J. Sci. Food Agric., 1978, 29, 433.
14. Smith, D. B., and Payne, P. I., J. Natl. Inst. Agric. Bot., in the press.
15. Shewry, P. R., Pratt, H. M., Charlton, M. J., and Miflin, B. J., J. Exp. Bot., 1977, 28, 597.
16. Baxter, E. D., and Wainwright, T., Proc. Am. Soc. Brew. Chem., 1979, 37, 8.
17. Shewry, P. R., Faulkes, A. J., Partner, S., and Miflin, B. J., J. Inst. Brew., 1980, 86, 138.
ISSN:0144-557X
DOI:10.1039/AP9842100477
Publisher: RSC
Year: 1984
Data source: RSC
|
7. |
Chromatographic analysis of glucosinolates—a potential aid to the plant breeder |
|
Analytical Proceedings,
Volume 21,
Issue 12,
1984,
Page 482-493
R. K. Heaney,
Preview
|
PDF (1514KB)
|
|
Abstract:
482 QUALITY CONTROL OF FOODS AND NATURAL PRODUCTS Anal. Proc., Vol. 21

Chromatographic Analysis of Glucosinolates - a Potential Aid to the Plant Breeder

R. K. Heaney
AFRC, Food Research Institute, Colney Lane, Norwich, NR4 7UA

Glucosinolates are a class of sulphur-containing glycosides found throughout the Brassica species and are of considerable interest to the breeder and to the processor because of the physiologically active nature of their enzymically produced breakdown products. Although almost 100 glucosinolates are now known (differing only in the nature of the side-chain R), far fewer (perhaps 12-20) are of significant occurrence in brassicas, often with only 4-8 major compounds present in any one species.1

The Effects of Glucosinolates in Foods, Plants and Feedingstuffs

Glucosinolates and a thioglucosidase (myrosinase) are separately located in the plant tissue and any form of tissue disruption (cutting, chewing, insect damage, etc.) brings about the reaction outlined in Fig. 1. In fact, the products of such autolysis are more complex and are dependent upon a number of factors, such as pH, the nature of the R group and the presence or absence of certain co-factors.

[Fig. 1. Breakdown products of glucosinolates: thioglucosidase hydrolysis releases D-glucose, yielding isothiocyanate, nitrile and thiocyanate products.]

Although isothiocyanates are responsible for the desirable flavour and aroma of brassica vegetables, this same property is the main reason for palatability problems when animals are fed rapeseed meals containing high levels of glucosinolates. Nitriles, which are produced under acidic conditions, have been shown to be toxic to rats when included in the ration at a rate of 0.2%.2 The presence in the glucosinolate side chain of a 2-hydroxy group leads to the formation of oxazolidine-2-thiones, which are potent goitrogens.
In addition, glucosinolate hydrolysis products have been shown to possess anti-fungal, insect attractant, insect repellent, plant growth stimulating and other physiological properties.1 Concern about the problems of palatability of rapeseed meal has led to breeding programmes resulting in a dramatic reduction in the levels of glucosinolates in rapeseed. In addition, it has recently been shown3 that 2-hydroxybut-3-enyl glucosinolate (progoitrin) is of greater potential importance in the aetiology of goitre in animals grazing forage rape than is the thiocyanate ion, a product of indolyl glucosinolates. Hence, a selective reduction in the levels of this compound might be the goal of forage brassica breeders. For these and other reasons it is clearly important that reliable methods are available for the analysis of total and individual glucosinolates.

Analysis of Glucosinolates

The analyst new to the field might find the choice of a suitable method for the determination of glucosinolate content somewhat bewildering.4 Over the years many methods have been published, some of which (although obsolete and inaccurate) are still used today for reasons of speed, cheapness or the availability of suitable instrumentation. All early quantitative methods depended upon the initial hydrolysis of the glucosinolates with myrosinase, followed by the estimation of one or more of the enzymic breakdown products. Autolysis, that is hydrolysis by the endogenous enzyme system in the plant, is best avoided owing to the multiplicity of aglucone fragments that might result. Inactivation of the plant enzymes followed by the addition of myrosinase under controlled conditions gives predictable breakdown products. All myrosinase hydrolyses, however, result in the release of stoicheiometric amounts of glucose and sulphate.
Although the release of sulphate and its subsequent gravimetric or titrimetric determination is little used today, the determination of released glucose forms the basis for a number of sensitive methods for total glucosinolates. In leaf material the prior removal of endogenous free glucose is important and is best achieved by adsorption of the glucosinolates on to a suitable anion-exchange medium, thus allowing the removal of neutral sugars by washing. Addition of myrosinase to the ion-exchange-bound glucosinolates results in the release of glucose,5 which can then be determined by a number of different methods. Methods for determining total glucosinolates, however, reveal nothing of the nature of the individual glucosinolates present, and while gas chromatography (GLC) of the volatile products6 of myrosinase hydrolysis affords a partial solution to this problem, it fails to quantify the indole glucosinolates. These compounds must be assayed separately by determination of released thiocyanate ion, usually using a spectrometric method based on complex formation with iron(III) ions. Adsorption of glucosinolates on to DEAE Sephadex A25, followed by desorption with potassium sulphate or dilute pyridine acetate, with subsequent derivatisation and gas chromatography as trimethylsilyl ethers, met with only limited success owing to the tendency of the sulphate moiety to split off, giving erratic results. In addition, this approach still failed to measure the indole glucosinolates. The identification of an enzyme8 capable of mediating the quantitative desulphation of glucosinolates represented a major advance. Desulphoglucosinolates are relatively stable and much more amenable to GLC and HPLC, and prior desulphation forms the basis for current methods for the analysis of individual glucosinolates. In a highly specific clean-up step, glucosinolates are first adsorbed on to Sephadex A25, washed free of non-anionic material and then desulphated in situ.
The uncharged desulphoglucosinolates are then eluted with water. After derivatisation, the desulphoglucosinolates, including three indolyl compounds, may be separated by use of temperature-programmed GLC.9 This method, however, fails to separate the recently reported10 4-hydroxyindole glucosinolate, which is present in relatively large amounts in some cultivars of low-glucosinolate rapeseed. It has been demonstrated11 that no such problem exists when desulphoglucosinolates are separated by HPLC using a reversed-phase (Spherisorb ODS 2) column with gradient elution. All glucosinolates of common occurrence in brassica vegetables, forages, oilseeds and condiments are amenable to this technique, which has the added advantage that no derivatisation step is needed. Quantitative methods (GLC and HPLC) depend for their accuracy on the availability of suitable standards for the determination of response factors. Although few glucosinolates are available commercially, methods have been described12 for the isolation of others. Where such standards are available a good correlation has been found between GLC and HPLC. Methods are therefore available which enable the plant breeder or the processor to monitor levels of total or individual glucosinolates. Such methods could be used in the selective reduction of progoitrin levels in vegetable crops whilst maintaining the levels of flavour precursors. A total reduction of glucosinolate content (as in the instance of rapeseed) may be the aim of forage brassica breeders, but the consequences of such action on the plants' resistance to diseases and pests are uncertain.

References
1. Fenwick, G. R., Heaney, R. K., and Mullin, W. J., CRC Crit. Rev. Food Sci. Nutr., 1983, 18, 123.
2. VanEtten, C. H., Gagne, W. E., Robbins, D. J., Booth, A. N., Daxenbichler, M. E., and Wolff, I. A., Cereal Chem., 1969, 46, 145.
3. Bradshaw, J. E., Heaney, R. K., Macfarlane Smith, W. H., Gowers, S., and Fenwick, G. R., J. Sci. Food Agric., 1984, 35, 977.
4. McGregor, D. I., Mullin, W. J., and Fenwick, G. R., J. Assoc. Off. Anal. Chem., 1983, 66, 825.
5. Heaney, R. K., and Fenwick, G. R., Z. Pflanzenzuecht., 1981, 87, 89.
6. Youngs, C. G., and Wetter, L. R., J. Am. Oil Chem. Soc., 1967, 44, 551.
7. Thies, W., Fette, Seifen, Anstrichm., 1976, 78, 231.
8. Thies, W., Naturwissenschaften, 1979, 66, 364.
9. Heaney, R. K., and Fenwick, G. R., J. Sci. Food Agric., 1980, 31, 593.
10. Truscott, R. J. W., Burke, D., and Minchinton, I. R., Biochem. Biophys. Res. Commun., 1982, 107, 1258.
11. Spinks, E. A., Sones, K., and Fenwick, G. R., Fette, Seifen, Anstrichm., 1984, 86, 228.
12. Hanley, A. B., Heaney, R. K., and Fenwick, G. R., J. Sci. Food Agric., 1983, 34, 869.

History of and Future Prospects for Near-infrared Analysis

W. F. McClure
North Carolina State University, Raleigh, NC 27695, USA

The basic principles behind infrared analysis have been known for over 100 years and the potential of infrared as an analytical tool has been recognised for at least 70 years. Infrared instruments were first introduced in the 1930s and the early work with infrared was for theoretical research on molecular structure. World War II encouraged further developments of infrared as an analytical tool for the purpose of studying lubricants and rubber. In the early 1940s the first commercial infrared instrument became available, and in 1947 the first infrared double-beam instrument was sold.1 Infrared includes a region reaching from 0.7 to 200 µm. Usually this broad range is divided into three parts: the near-infrared (NIR) (0.7-2.5 µm), the mid-infrared (2.5-15 µm) and the far-infrared (15-200 µm). Infrared research for the last 50 years has been centred in and around the mid-infrared, and with the advent of Fourier transform infrared analysis (FTIR), infrared has become one of the most widely used technologies in a variety of problems involving organic compounds.
Today commercial instruments are designed to utilise that portion of the spectrum which extends from 2.5 to 50 µm, although the 2.5-15 µm region is by far the most commonly used. Classical spectroscopists have traditionally avoided the NIR region. Working with the fundamental absorption bands, most of which fall in the mid-IR range, researchers were convinced that the overtones and combination absorptions which occurred in the NIR were of little consequence. Thus, for years NIR lay like a sleeping giant. The development of manufacturing skills for fabricating photon detectors, primarily lead sulphide, revived interest in NIR. NIR instrument technology was tacked on to ultraviolet-visible (UV-VIS) technology, which still set it apart from the mainstream of IR research. In the 1950s, Karl Norris of the USDA laboratories began to investigate the NIR properties of dense light-scattering materials. His earlier work dealt with the development of instrumentation for studying NIR properties of intact biological materials.2-6 By 1975, NIR research centres that had computerised scanning NIR spectrophotometers included the USDA at Beltsville, MD, North Carolina State University at Raleigh, NC, the Russell Research Centre at Athens, GA, and Pennsylvania State University at College Station, PA. Two manufacturers of NIR equipment came on the scene in the mid-1970s: Neotec Inc. of Silver Spring, MD, and Technicon Industrial Systems of Tarrytown, NY. Today both companies offer scanning equipment. The thrust of NIR research has been to demonstrate its potential for rapidly measuring certain chemical constituents in pulverised samples that are chemically complex.
The procedure has been to use a set of samples (approximately 100 or more) to train (or calibrate) the computerised spectrophotometer to recognise (measure) the level of the constituents under study. Once this has been done, the calibration equation can be used to estimate the constituent in other samples.7-11 Until recently, measurements for NIR analyses were made in the wavelength domain. McClure and co-workers12,13 have shown that Fourier analysis of NIR spectra can be used with several advantages. They have shown that as few as 50 Fourier coefficients can be used to recall essentially all of the information in many NIR spectra. Further, they have shown that only the first 11 Fourier coefficients are needed to estimate chemical composition. Working in the Fourier domain has several advantages over working in the wavelength domain: (a) as few as eleven numbers (coefficients) are needed to estimate chemical composition; (b) smoothing of spectra from the Fourier domain can be achieved without distortion and loss of end-points; (c) deletion of the first Fourier coefficient (the mean term) from the calibration procedure intrinsically corrects for particle size; (d) magnetic storage requirements for spectral data can be reduced by 97% (compared with a 1700-point NIR spectrum in the wavelength domain); (e) the calibration time can be cut by 97%; (f) calibration maintenance drudgery is reduced, since one no longer has to search for the best wavelength but instead uses only the first 11 Fourier coefficients; (g) transformation to the Fourier domain breaks up the inter-correlation between wavelengths; (h) derivative spectra can be generated from the Fourier domain without distortion and loss of end-points; (i) the power spectrum, available in the Fourier domain, can be used to detect excessive instrument noise and anomalies; and (j) an interferometer, with its advantages, can be used to generate the Fourier coefficients directly.
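The coefficient-truncation idea behind points (a), (b) and (d) can be sketched with a discrete Fourier transform. The spectrum below is synthetic, and the cut-off of 50 coefficients simply follows the figure quoted in the text; this is an illustration of the principle, not McClure's actual procedure.

```python
import numpy as np

# Synthetic "NIR spectrum": two smooth absorbance bands plus noise,
# sampled at 700 points (2-nm steps over 1100-2500 nm).
rng = np.random.default_rng(0)
wavelengths = np.linspace(1100, 2500, 700)
spectrum = (np.exp(-((wavelengths - 1940) / 120.0) ** 2)        # water-like band
            + 0.5 * np.exp(-((wavelengths - 2100) / 200.0) ** 2)
            + 0.01 * rng.standard_normal(700))                   # instrument noise

# Fourier compression: keep only the first 50 coefficients of the real
# FFT and zero the rest, then invert.  This smooths the spectrum and
# reduces storage from 700 points to 50 numbers.
coeffs = np.fft.rfft(spectrum)
kept = coeffs.copy()
kept[50:] = 0.0
smoothed = np.fft.irfft(kept, n=700)

# Reconstruction error is small relative to the signal, since the
# smooth bands are concentrated in the lowest-frequency coefficients.
rel_err = np.linalg.norm(smoothed - spectrum) / np.linalg.norm(spectrum)
print(round(rel_err, 3))
```

The discarded high-frequency coefficients carry mostly noise, which is why truncation acts as distortion-free smoothing.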
The prospects of NIR for the future are bright. Applications for on-line analysis of solids, liquids and gases will demand faster response times. Fourier NIR instrumentation, with its advantages of high optical throughput, high signal to noise ratio and fast response, will fulfil many on-line needs. The growing demand for handling larger data bases (spectra files) will increase. Therefore, larger (16- and 32-bit word size with megabits of memory) and faster (having numerical and array processors) computers will become commonplace. Digital cameras capable of integrating information from a wide field of view will be capable of evaluating size, shape and surface defects as well as chemical composition of products on-line. As instrumentation improves (signal to noise ratio, resolution, etc.), NIR with its high speed of response will take over many applications that are now routinely performed by mid-IR methods. In the next few years the cloud of uncertainty surrounding data treatment (i.e., wavelength selection) will disappear as chemists and statisticians blend their expertise in search of "the truth" in NIR analysis.

References
1. Archer, E. D., Lubrication, 1969, 55 (2), 13.
2. Norris, K. H., "Instrumentation of Infrared Radiation," ASAE Paper No. 60-815, American Society of Agricultural Engineering, St. Joseph, MI, 1960.
3. Norris, K. H., and Butler, W. L., IRE Trans. Biomed. Electron., 1961, 8, 153.
4. Norris, K. H., Trans. Am. Soc. Agric. Eng., 1964, 7, 240.
5. Ben-Gera, I., and Norris, K. H., J. Food Sci., 1968, 33, 64.
6. Bittner, D. R., and Norris, K. H., Trans. Am. Soc. Agric. Eng., 1968, 11, 534.
7. Hamid, A., and McClure, W. F., "Software for an On-line Computerised Spectrophotometer," Tech. Bull. No. 252, North Carolina Agricultural Research Service, Raleigh, NC, 1978.
8. Hamid, A., McClure, W. F., and Weeks, W. W., Beitr. Tabakforsch., 1978, 9, 267.
9. McClure, W. F., and Hamid, A., Am. Lab., 1980, 12, 57.
10. Hamid, A., McClure, W. F., and Whitaker, T. B., Am. Lab., 1981, 13, 108.
11. McClure, W. F., and Williamson, R. E., Beitr. Tabakforsch., 1982, 11, 219.
12. Giesbrecht, F. G., McClure, W. F., and Hamid, A., Appl. Spectrosc., 1981, 35, 210.
13. McClure, W. F., Hamid, A., and Giesbrecht, F. G., Appl. Spectrosc., 1984, 38, 301.

Use of Infrared Techniques for Analysis of Bakery Ingredients, Intermediates and Final Products

B. G. Osborne
Flour Milling and Baking Research Association, Chorleywood, Hertfordshire, WD3 5SH

The use of infrared techniques for the analysis of food has expanded very rapidly over the last decade as a result of the availability of purpose-built analysers for specific applications. Thus, infrared (IR) transmission spectroscopy has been employed for the quality control of milk, while near-infrared (NIR) reflectance spectroscopy has filled a need for the rapid determination of protein in wheat in situations where market forces have resulted in premiums being paid to farmers based on the protein content. The potential advantages of NIR, in particular, for the analysis of foods,1 and especially wheat, flour and baked products,2 have led millers and bakers to become very interested in potential applications for their businesses. It is the purpose of this paper to review briefly the applications of infrared techniques that are of relevance in the baking industry. The ingredients used, particularly for flour confectionery, are very diverse and infrared methods have been used to analyse many of them. The process intermediates (dough) and final products (bread, biscuits and flour confectionery) have all been analysed by NIR and applications will be discussed under these categories.
Bakery Ingredients

Both IR and NIR can be used for the analysis of ingredients, although NIR has been the more widely used. The first and probably most important application of NIR in bakeries has been in the evaluation of flour quality.3,4 Thus, the protein, moisture, starch damage (hence water absorption), colour and particle size can be measured with acceptable accuracy. The protein determination is especially satisfactory, and a recent development in this area concerns situations where dried wheat gluten is added to flour as a substitute for imported wheat in the bread grist. The protein content (%) of the base flour (PF)3 and the gluten (PG)5 may be measured by NIR in order to calculate the amount of gluten required (WG) to raise the protein content of the supplemented flour by a given amount to PS. In addition, the reported calibration3 has been found to be valid for the measurement of PS. Therefore, it is possible to monitor the protein contents of the base flour and gluten in order to calculate the addition level, and to check the protein content of the resulting flour, all using the same instrument. Fats and shortenings are ingredients common to all baked products, and a hybrid infrared technique, known as NIR transflectance, which has been used to predict the iodine value of edible oils,6 may find application for their quality control. The determination of the protein, fat and moisture content of meat used in the baking industry in the manufacture of pies and pasties has been accomplished by both IR7 and NIR.8 Meat is a particularly difficult product to analyse by infrared because of its very high water content, and the two techniques overcome the problem in different ways. In IR, a double-beam procedure is used to subtract the water spectrum from that of the meat, while in NIR this is unnecessary because the much lower absorptivities permit measurements to be made even in the presence of water.
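The gluten-addition calculation described above reduces to a mass balance on protein. The formula itself has not survived in this reproduction, so the function below is a reconstruction from the stated definitions (PF, PG and PS in %, WG the gluten mass per a given mass of base flour), not necessarily Osborne's published expression.

```python
def gluten_addition(pf: float, pg: float, ps: float, flour_mass: float = 100.0) -> float:
    """Mass of dried gluten to add to `flour_mass` units of base flour so
    that the blend reaches protein content `ps` (all contents in %).

    Mass balance on protein: (flour_mass*pf + w*pg) / (flour_mass + w) = ps,
    which rearranges to w = flour_mass * (ps - pf) / (pg - ps).
    """
    if not pf < ps < pg:
        raise ValueError("target protein must lie between flour and gluten protein")
    return flour_mass * (ps - pf) / (pg - ps)

# Example: raise an 11% protein flour to 13% using 75% protein dried gluten.
w = gluten_addition(pf=11.0, pg=75.0, ps=13.0)   # per 100 units of flour
print(round(w, 2))  # 3.23
```

Checking the result closes the loop: (100 × 11 + 3.23 × 75) / 103.23 ≈ 13.0%, matching the target.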
The IR instrument used to analyse meat has also been employed for the determination of protein and lipid, and hence the total solids content, of liquid egg products.9 The precision is identical to that reported for meat, while the accuracy of prediction of total solids is the same as that obtained by prediction from protein and lipid figures obtained by using AOAC standard methods. NIR has been used to control the quality of chocolate products in respect of their fat,10 moisture10 and sucrose11 contents. Chocolate is used as an ingredient in flour confectionery and biscuits and as a coating on these products; such measurements are therefore useful to bakeries. Cheese is used in the production of savoury biscuits and its composition has been estimated in terms of protein, fat, moisture and moisture in non-fatty substance by NIR.12 No significant improvement in accuracy is achieved by freeze-drying the samples rather than simply grating them prior to the reflectance measurement. The compositional analysis of milk powders,13 non-fat dry milk14 and whey powders15 has also been accomplished by NIR. Finally, various chemical additives are used in bread baking and these are often blended together in a pre-mix, called a bread improver, which may be further diluted before being incorporated into the dough. The levels of three additives permitted in the UK, azodicarbonamide, ascorbic acid and L-cysteine, in a model system similar to such a pre-mix have been successfully measured by NIR.16

Bakery Intermediates

Baking processes generally involve the blending of flour with water and other ingredients in order to produce a dough or batter, which is then baked in an oven to form the product.
Control of the amounts of ingredients is more usefully carried out by analysing the intermediate, as this can be more easily re-cycled and the wastage of energy in baking an unsatisfactory product is avoided. NIR has been applied to the compositional analysis of biscuit doughs and intact dough pieces17 but, while the precision for all four ingredients (fat, flour, sucrose and water) was excellent, only the calibration for fat was sufficiently accurate for the detection of metering errors of ±5% relative to the total amount of fat in the recipe.

Bakery Products

The NIR technique has been applied to a wide variety of baked products including biscuits,17 dried bread and cake mixes18 and sliced white bread.19 The calibrations for biscuits and bread were carried out by using products produced on a pilot scale under carefully controlled conditions, and NIR readings were related to the accurately determined masses of each ingredient. The dried mixes, on the other hand, were produced commercially and it was therefore necessary to employ chemical methods for the determination of sample composition for calibration against NIR. The predictive accuracy of the dry-mix fat calibration (s.d. 1.7%), for example, was notably worse than that for either the whole biscuits (s.d. 0.60%), biscuit crumb (s.d. 0.34%) or the bread (s.d. 0.18%), and although this may partly have been due to greater variability of the commercial samples with respect to such factors as particle size and homogeneity, it is clear that calibration of NIR against chemical methods introduces a considerable amount of error. No improvement could be achieved in the accuracy of the bread calibrations by drying in air and grinding the samples prior to measuring the NIR reflectance.

Conclusions

Four conclusions can be drawn. Firstly, infrared techniques, especially NIR, can be applied to the analysis of a wide variety of ingredients, intermediates and products of the baking industry.
Secondly, the precision of IR and NIR is excellent, but their accuracy is often limited by the precision of the reference methods against which the infrared techniques are calibrated. Thirdly, in the majority of instances, little or no sample preparation is required prior to NIR analysis. Fourthly, although different NIR instruments were employed for the reported applications, a single instrument is adequate for the measurement of the same constituents (e.g., protein, fat and moisture) in a variety of different products. This work forms part of a research project sponsored by the UK Ministry of Agriculture, Fisheries and Food, whom the author thanks. The results of the research are the property of the Ministry of Agriculture, Fisheries and Food and are Crown Copyright.

References
1. Osborne, B. G., Anal. Proc., 1981, 18, 488.
2. Osborne, B. G., Anal. Proc., 1983, 20, 79.
3. Osborne, B. G., Douglas, S., and Fearn, T., J. Food Technol., 1982, 17, 355.
4. Diachuk, V. R., Hamilton, E., Savchuk, N., and Jackel, S. S., Bakers Dig., 1981, 55, 72.
5. Krishchenko, V. P., Sozonov, Yu. G., Chuikova, L. A., Valitova, E. G., Rusokova, M. P., and Litvinova, L. I., Agrokhimiya, 1980, 7, 103.
6. Fearn, F. R. B., "Proceedings of the European Launch of the Technicon InfraAlyzer 500," Technicon Instrument Co. Ltd., Basingstoke, Hampshire, 1982.
7. Bjarnei, O. C., J. Assoc. Off. Anal. Chem., 1982, 65, 696.
8. Kruggel, W. G., Field, R. A., Riley, M. L., Radloff, H. D., and Horton, K. M., J. Assoc. Off. Anal. Chem., 1981, 64, 692.
9. Osborne, B. G., and Barrett, G. M., J. Food Technol., in the press.
10. Miner, D. C., Ziomek, J. V., and Landa, I. J., Technical Paper NIR 4008, Pacific Scientific, Silver Spring, MD, USA.
11. Roberts, G., "Neotec International News Items, 1980, No. 12," Pacific Scientific, Silver Spring, MD, USA, 1980.
12. Frank, J. F., and Birth, G. S., J. Dairy Sci., 1982, 65, 1110.
13. Gilkison, I. S., J. Sci. Food Agric., 1983, 34, 1026.
14. Baer, R. J., Frank, J. F., and Loewenstein, M., J. Assoc. Off. Anal. Chem., 1983, 66, 858.
15. Baer, R. J., Frank, J. F., Loewenstein, M., and Birth, G. S., J. Food Sci., 1983, 48, 959.
16. Osborne, B. G., J. Sci. Food Agric., 1983, 34, 1297.
17. Osborne, B. G., Fearn, T., Miller, A. R., and Douglas, S., J. Sci. Food Agric., 1984, 35, 99.
18. Osborne, B. G., Fearn, T., and Randall, P. G., J. Food Technol., 1983, 18, 651.
19. Osborne, B. G., Barrett, G. M., Cauvain, S. P., and Fearn, T., J. Sci. Food Agric., in the press.

Progress in Human Food Analysis by Near Infrared

A. M. C. Davies
AFRC Food Research Institute, Colney Lane, Norwich, NR4 7UA

The Food Research Institute has been investigating the application of near-infrared (NIR) analysis since 1980. We are equipped with a Neotec 6350 scanning spectrometer and a Technicon InfraAlyzer 400R filter instrument, and at various times we have borrowed Neotec 102 and Dickey-John Instalab 800 filter instruments. The equipment is sited in a recently refurbished, air-conditioned laboratory. Our work can be summarised under three headings: calibrations by regression analysis against conventional analytical techniques; development of new methods of wavelength selection; and new methods of utilising NIR data.

Calibration by Regression Analysis

The method developed by Norris1,2 has been well documented by McClure,3 Williams4,5 and Osborne.6,7 We have developed methods for the analysis of protein,8 lipid and starch in pea flour, and of oil and egg in salad cream,9 and, in collaboration with Campden Food Preservation RA (who have a MAFF-funded project), the method is being extended to a wider range of food types. The analysis of components of a shrink-wrapping laminate used by General Foods for the packaging of coffee for drink-dispensing machines constitutes one of the more unusual applications. The NIR spectra (Fig.
1) were obtained by placing the laminate or components of the laminate in the incident beam, which is then diffusely reflected by the reference tile into the detector block (commonly known as transflectance). While the spectrum of the laminate did not at first appear very interesting, the spectra of the components are all different. It has been possible to obtain calibrations by making up samples with different numbers of layers of the component films.

[Fig. 1. NIR scans (1200-2400 nm) of the packaging laminate and its components, including polypropylene.]

Advances in Wavelength Selection

The scanning instrument records spectra at 2-nm intervals over the range 1100-2500 nm, giving 700 data points per spectrum. With so many variables it is only possible to carry out forward regression analysis, and our present computer program is totally dependent on the choice of the first wavelength. In his original work, Norris used pairs of wavelengths, but we believe that rather than choosing them separately they should be selected as a pair. However, with 700 variables there is a choice of 244 650 pairs, a considerable computing task. We have developed a colour graphics system10 for displaying colour-coded regression coefficients, which reduces the computer's task and simplifies the identification of promising pairs of wavelengths. Regression coefficients are calculated at 40-nm intervals over the complete range, colour coded and displayed as a triangle of 630 small squares.

[Fig. 2. Coded display of regression coefficients from NIR data using pairs of wavelengths: a summarised view of the complete data (1100-2500 nm at 40-nm intervals), with expansion of one element of the summary view (an 80-nm range at 2-nm intervals).]
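The combinatorics behind this coarse-to-fine display can be checked directly, and the pair search itself sketched: C(700, 2) = 244 650 pairs at full resolution, C(36, 2) = 630 squares in the 40-nm summary triangle and C(41, 2) = 820 squares in one 80-nm expansion. The data below are synthetic and the two-wavelength difference predictor is a stand-in; the FRI program itself is not reproduced here.

```python
import numpy as np
from math import comb

# Pair counts quoted in the text.
print(comb(700, 2))   # 244650 pairs at 2-nm resolution over 1100-2500 nm
print(comb(36, 2))    # 630 squares in the 40-nm summary triangle
print(comb(41, 2))    # 820 squares in one expanded 80-nm triangle

# Coarse-search sketch: over a 36-point summary grid, score every
# wavelength pair (i, j) by the correlation of a simple two-point
# predictor (here A[i] - A[j]) with reference analyte values, and keep
# the best pair for later fine-scale expansion.
rng = np.random.default_rng(1)
n_samples, n_wl = 40, 36
A = rng.random((n_samples, n_wl))                       # toy absorbances
analyte = A[:, 10] - A[:, 25] + 0.05 * rng.standard_normal(n_samples)

best = max((abs(np.corrcoef(A[:, i] - A[:, j], analyte)[0, 1]), i, j)
           for i in range(n_wl) for j in range(i + 1, n_wl))
# best[1], best[2] give the indices of the most promising pair.
print(best[1], best[2])
```

Searching the coarse grid first keeps the exhaustive pair scan to hundreds of candidates rather than hundreds of thousands, which is exactly the economy the colour-coded triangle provides.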
A cursor is used to locate the squares with the colour that indicates a high correlation, and the wavelengths obtained are entered into a second program which computes regression coefficients at 2-nm intervals over an 80-nm range centred on the selected square. This produces a triangle of 820 squares, and 20 such triangles can be computed in one overnight run. By using the cursor control, the expanded triangles are selected from the summarised view and displayed one at a time. The cursor is then used to determine the wavelengths of the most promising pair from that region of the data. Fig. 2 is an indication of the colour display.

New Methods of Utilising NIR Data

The main interest in NIR at FRI is in its use for on-line process control in the food industry. However, there is a serious difficulty in applying NIR to processes using the conventional method of calibration, viz., regression analysis. For regression analysis to work there must be a wide range of the analyte of interest. Processes are normally stable only over small ranges and food manufacturers are reluctant to produce large amounts of out-of-specification material. The usual way of overcoming the
problem is to formulate small batches of material in the laboratory, using these to produce a calibration, which requires a correction before the on-line process can be predicted, and it may still end in failure. Rather than use this procedure, we are looking for methods of using the NIR data as the specification for the product. We have not yet developed a system, but it is possible to demonstrate that NIR data can be used to discriminate between very similar products. A set of instant coffee samples has been used as a test case to represent a process. These samples were obtained as retail purchases of the same product over a number of weeks from different outlets. For out-of-specification samples we are using a decaffeinated product made by the same manufacturer. The NIR spectra of the two products appear very similar [Fig. 3(a)] and only small differences can be seen between second-differential scans [Fig. 3(b)]. However, if an averaged coffee spectrum is computed and then subtracted from a number of (different) coffee spectra (C) and decaffeinated coffee (DC) spectra, which are then averaged and plotted, it can be seen [Fig. 3(c)] that the average difference for DC is much more variable than that for C. It is pleasing to see that many of these differences are very similar to a second differential of the caffeine spectrum [Fig. 3(d)]. The large signal around 1940 nm indicates a difference in moisture.

[Fig. 3. Comparison of instant coffee samples by NIR: (a), NIR spectra of coffee samples; (b), second-differential spectra of coffee samples; (c), difference plots of averaged (coffee - average coffee) and averaged (decaffeinated coffee - average coffee); (d), difference plot of averaged (decaffeinated coffee - average coffee) and second-differential plot for caffeine.]
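The averaged-spectrum comparison can be sketched numerically: subtract the mean in-specification spectrum from each new spectrum and flag samples whose residual is dominated by a missing band. The spectra and the simple threshold rule below are illustrative assumptions, not the statistical method the authors were still seeking.

```python
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(1100, 2500, 700)

def spectrum(caffeine: float) -> np.ndarray:
    """Toy NIR spectrum: a broad water-like band plus a small band whose
    amplitude depends on caffeine level, plus measurement noise."""
    base = np.exp(-((wl - 1940) / 150.0) ** 2)
    band = caffeine * 0.05 * np.exp(-((wl - 1670) / 40.0) ** 2)
    return base + band + 0.002 * rng.standard_normal(wl.size)

# "In specification": ordinary coffee; "out of spec": decaffeinated.
coffees = np.array([spectrum(1.0) for _ in range(20)])
decafs = np.array([spectrum(0.05) for _ in range(20)])

mean_coffee = coffees.mean(axis=0)                 # the averaged spectrum
resid_c = np.abs(coffees - mean_coffee).max(axis=1)
resid_dc = np.abs(decafs - mean_coffee).max(axis=1)

# Decaffeinated residuals are dominated by the missing caffeine band,
# so a threshold set from in-spec residuals separates the two groups.
threshold = resid_c.max() * 1.5
print((resid_dc > threshold).all())
```

The gap between the two residual distributions is what makes the NIR data itself usable as a product specification.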
These results are encouraging us to search for the statistical methods that will be needed to utilise the differences. The application of Fourier transformation to NIR data has been pioneered by Professor W. F. McClure11,12 and we are collaborating with him in the hope that this approach will simplify the statistical problem.

Conclusion

Near-infrared analysis of food is in a rapid growth phase; much has been achieved since Norris's original work, but the actual application of NIR to food product control is only just beginning.

References
1. Norris, K. H., "Proceedings of the 1963 International Symposium on Humidity and Moisture," Volume 4, Reinhold Publishing Corporation, New York, USA, 1965, p. 19.
2. Ben-Gera, I., and Norris, K. H., Israel J. Agric. Res., 1968, 18, 125.
3. McClure, W. F., Norris, K. H., and Weeks, W. W., Beitr. Tabakforsch., 1977, 9, 13.
4. Williams, P. C., Stevenson, S. G., Starkey, P. M., and Hautin, G. C., J. Sci. Food Agric., 1978, 29, 285.
5. Williams, P. C., and Thompson, B. N., Cereal Chem., 1978, 55, 1014.
6. Osborne, B. G., J. Food Technol., 1981, 16, 13.
7. Osborne, B. G., J. Sci. Food Agric., 1983, 34, 1297.
8. Davies, A. M. C., and Wright, D. J., J. Sci. Food Agric., 1984, 35, 1034.
9. Davies, A. M. C., in preparation.
10. Davies, A. M. C., Gee, M. G., and Foster, P. W., Lab. Pract., 1984, 33, 78.
11. Giesbrecht, F. G., McClure, W. F., and Hamid, A., Appl. Spectrosc., 1981, 35, 210.
12. McClure, W. F., Hamid, A., and Giesbrecht, F. G., Appl. Spectrosc., 1984, 38, 301.

Some Aspects of the Near-infrared Reflectance of Potato Tubers

R. L. Porteous and A. Y. Muir
Scottish Institute of Agricultural Engineering, Bush Estate, Penicuik, Midlothian, EH26 0PH

Work at SIAE on the spectral reflectance of potato tubers is aimed at the possible application of optical techniques to automatic quality grading. To this end, measurements of the diffuse reflectance, over the range 600 to 1850 nm, have been made on hundreds of tubers.
These were selected from many varieties and included examples of most of the disorders found in this crop. A typical series of spectra is shown in Fig. 1, which illustrates the progress of disease in a tuber inoculated with gangrene.

Data Analysis

These data have been analysed by using the statistical techniques of factor analysis and discriminant analysis.1,2 Only three factors (Fig. 2) have been found necessary to account for almost the entire gamut of reflectance profiles produced by different diseases. The effects of any disease can be reproduced by adding suitable proportions of these curves to the spectrum of a healthy tuber. In Fig. 3 the first factor has been re-scaled to demonstrate its close affinity with the transmission curve of water. A comparison is also drawn between the third factor and absorption curves for chlorophylls. The contributions of these factors to different diseases confirm that the first is related to the water state of the tuber and the third to the presence of green pigmentation. The second factor is presumably associated with degradation of the tissue structure. A search through the range of spectral types for features characteristic of various disorders has shown that the presence of some diseases can be deduced from measurements of reflectance at only a few wavelengths.

Fig. 1. Progressive effect of gangrene on an inoculated tuber. Lines are: day 6; day 14; day 28; day 52.

Fig. 2. Principal factors obtained by factor analysis (wavelength axis 500-2000 nm). Lines represent: factor 1; factor 2; factor 3.

Experimental Machine

A computer-controlled machine has been built to assess the usefulness of this technique for quality grading.
This machine measures the reflectance, in eight narrow bands between 650 and 1680 nm, of 1550 small elements of area distributed over the tuber surface. Non-uniform elements are ignored. A judgement of the condition of each of those remaining is computed and stored. This judgement is made by comparing the ratios of energy in certain bands with the ratios recorded from a selected "standard" tuber. The output from the machine can take a number of forms. Table I is a printout from a program which compares machine verdicts with those of an inspector. This program can recognise six types of defect. The first four runs are repeated measurements on one tuber compared with assessments by different inspectors. Instrumental repeatability is seen to be good in this test and the results in general are in reasonable agreement with the inspectors' judgements. The final column of the table records the proportion of sampled areas accepted as being sufficiently uniform.

Fig. 3. Comparison of factors 1 and 3 with properties of water and chlorophylls (wavelength axis 500-2000 nm).

TABLE I. COMPUTER OUTPUT COMPARING PERFORMANCE OF INSPECTORS AND MACHINE IN RECOGNISING DEFECTS
Program C87, inspector/machine comparison. Columns: tuber number; percentage area in each defect category (clear, green, new cut, old cut, soiled, soft rot, common scab); % area used.
                    TUBER  CLEAR  GREEN  NEWCUT  OLDCUT  SOILED  SOFTROT  COMSCAB  % AREA USED
VISUAL ESTIMATE       1      25     0      20      0       20      0        35         --
MACHINE VERDICT       1      29     0      24      0        4      0        43         82
VISUAL ESTIMATE       1      20     0      20      0       20      0        40         --
MACHINE VERDICT       1      27     0      20      1        6      0        45         81
VISUAL ESTIMATE       1      20     0      20      0       10      0        50         --
MACHINE VERDICT       1      28     0      22      1        4      0        46         83
VISUAL ESTIMATE       1      25     0      20      0       15      0        40         --
MACHINE VERDICT       1      27     0      21      1        5      0        46         83
VISUAL ESTIMATE       2      84     0       0      1       12      0         3         --
MACHINE VERDICT       2      81     0       8      0       10      0         2         84
VISUAL ESTIMATE       3       0    77       0      9        4      0        10         --
MACHINE VERDICT       3       0    75       0      0        6      0        19         61
VISUAL ESTIMATE       4      25     0       0      0        5     60        10         --
MACHINE VERDICT       4      33     0       2      0        1     60         5         72

References
1. Porteous, R. L., Muir, A. Y., and Wastie, R. L., J. Agric. Eng. Res., 1981, 26, 151.
2. Muir, A. Y., Porteous, R. L., and Wastie, R. L., J. Agric. Eng. Res., 1982, 27, 131.
ISSN:0144-557X
DOI:10.1039/AP9842100482
Publisher: RSC
Year: 1984
Data source: RSC
|
8. |
Meat Research Institute light probe for stressed meat detection |
|
Analytical Proceedings,
Volume 21,
Issue 12,
1984,
Page 494-500
Douglas B. MacDougall,
Abstract:
Meat Research Institute Light Probe for Stressed Meat Detection

Douglas B. MacDougall
Meat Research Institute, Langford, Bristol, BS18 7DY

Muscle cut immediately after slaughter is translucent, dark and sticky to the touch. Negligible change occurs in its appearance during the early stage of post-mortem glycolysis, from the initial muscle pH of approximately 7.0 until about pH 5.9, after which there is a progressive increase in opacity as the pH falls to its ultimate value of about 5.5, when the meat assumes its bright semi-opaque pink or red colour and the surface becomes moist or wet.1 Two faults occasionally occur in fresh meat because of environmental or physiological stress in the live animal: the dark cutting (DC) or dark, firm and dry (DFD) condition in beef and pork, respectively, and the pale, soft and exudative (PSE) condition in pork. The former occurs if reserves of muscle glycogen have been so depleted by pre-slaughter exhaustion that insufficient lactic acid is produced for the pH to fall below 6.0, and the muscle maintains its pre-rigor mortis translucent dark appearance.2-4 The PSE condition obtains if an ultimate pH of 5.3-5.5 is reached while the carcass is still warm.5 Denaturation of sarcoplasmic and myofibrillar proteins6,7 results in decreased water-holding capacity, excessive drip and a large increase in the light-scattering power of the meat.8 In pork the PSE condition is a consequence of the porcine stress syndrome, being more prevalent in breeds that are lean with low fat and high meat yield.9 Such pigs exhibit extremely fast post-mortem glycolysis, with ultimate pH often being attained in less than 1 h after slaughter. Measurement of pH at 45 min after slaughter is therefore used to detect PSE pork, but the technique is not completely reliable10 and has the associated hazard of electrode breakage on insertion into the carcass.
The PSE condition is not confined to pork but occurs in beef in those parts of the carcass that cool so slowly that ultimate pH is reached in muscles that are still warm.11

Light Scatter

The optical property that distinguishes DFD and PSE meat from normal is the light-scattering component of reflectance. Reflectance at infinite thickness (R∞) is related to the absorption and scattering coefficients (K and S) of the Kubelka - Munk analysis as follows:

K/S = (1 − R∞)² / (2R∞)

K is related to pigment concentration and S to the light-scattering properties of the muscle proteins. Determination of S by the usual technique of measuring thin sections of meat8 is time consuming and cannot be applied directly to the carcass. However, S can be estimated from the intensity of back-scattered light provided the relative effect of K on S is small. This is possible in meat at wavelengths above 650 nm, where absorption by haem pigment is minimal.

Fibre Optic Probe

The principle of the MRI fibre optic probe (FOP) is that of the endoscope. The instrument consists of a gun-shaped handle, display unit and rechargeable battery pack. Light from a tungsten filament lamp is transmitted into the meat via a fibre optic in the probe and is emitted from a 3 mm diameter window in the side of the sharp tip. Back-scattered light is returned by the fibre optic to the photodetector (peak response, 900 nm) in the handle. The FOP is calibrated with blocks of translucent plastic whose scattering properties are similar to those of meat.

FOP Linearity

Approximately 100 pork and 100 beef samples were selected on the basis of pH and location in the carcass to provide a range of material from DFD to PSE with widely differing pigment concentrations. In addition to measurement of the FOP value, the pigment concentration,12 tristimulus value Y, and K and S (mm⁻¹) for Y were determined for each sample.13
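The Kubelka - Munk relation can be applied directly in a couple of lines; a minimal sketch (the function names are mine, not from the paper):

```python
def kubelka_munk_ratio(r_inf):
    """K/S from reflectance at infinite thickness: K/S = (1 - R)**2 / (2*R)."""
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

def estimate_scatter(r_inf, k):
    """Estimate the scattering coefficient S given K, by inverting K/S.
    Workable above ~650 nm, where haem absorption keeps K small and stable."""
    return k / kubelka_munk_ratio(r_inf)
```

For example, a sample with R∞ = 0.5 gives K/S = 0.25, so K = 0.1 mm⁻¹ implies S = 0.4 mm⁻¹; a perfect reflector (R∞ = 1) gives K/S = 0.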
The range in K (mm⁻¹) for pork was much less than for beef because of the smaller range in pigment concentration. The dependence of absorption on pigment was as follows:

for pork: K (mm⁻¹) = 0.20 + 0.0037 [haematin] (µg g⁻¹), variance accounted for = 24%;
for beef: K (mm⁻¹) = 0.53 + 0.039 [myoglobin] (mg g⁻¹), variance accounted for = 48%.

The range in FOP values was similar for each species, <16 to >95, but the relationship of FOP value to Y was highly significantly different between species because of the effect of pigmentation on Y:

for pork: FOP value = −18.41 + 2.71 Y%, variance accounted for = 76%;
for beef: FOP value = −14.0 + 6.04 Y%, variance accounted for = 56%.

As the FOP value is based on detection in the near-infrared region, where the effect of pigmentation is much less than in the heavily weighted "green" region used to calculate Y, the relationship of FOP value to S (mm⁻¹) would be expected to show a distinct improvement over the relationship to Y:

for pork: FOP value = 9.72 + 143.2 S (mm⁻¹), variance accounted for = 85%;
for beef: FOP value = 0.26 + 233.6 S (mm⁻¹), variance accounted for = 84%.

The FOP therefore cannot be described as a lightness or colour meter, but should be regarded as an instrument for measuring opacity or light scatter.

Abattoir Trials

Several trials were carried out in commercial abattoirs to assess the effectiveness of the FOP in detecting PSE and DFD pork, of which the following is typical. The pH of the M. longissimus dorsi of approximately 900 pigs was measured 45 min after slaughter (pH₁) and on the following day (pHᵤ), when FOP values were also obtained. FOP values were related to both pH₁ and pHᵤ. Meat with values <18 and pH >5.9 was DFD. Carcasses visually assessed as PSE whose pH₁ was <6.0 had FOP values >40, with most >50. However, several did not fall unambiguously into PSE, DFD or normal on the basis of pH and a single FOP measurement.
This led to the working recommendation that carcasses with values <18 should be classed as definitely DFD, those with values between 25 and 40 as normal, those with values >50 as PSE and those >60 as severely PSE. For carcasses with FOP values between 18 and 25 and between 40 and 50, additional measurements should be taken in adjacent parts of the muscle. If any of the additional readings are >45 or <20 then the meat is classed as PSE or DFD, respectively. Such meat is often two-toned in appearance.

Discussion

Until the development of the FOP, the only effective technique for detecting either PSE or DFD meat without cutting the carcass was insertion of a pH electrode into the meat but, because of the difficulties associated with pH measurement, it is not often used routinely in abattoirs. Opacity development is not related solely to the rate or extent of pH fall but also to the interaction of temperature with pH as affected by the chilling rate. The FOP has advantages over pH measurement in that it assesses directly the optical property that causes paleness or darkness, it is a good indicator of potential water-holding capacity and the instrument is robust. The results in this paper were obtained using the MRI prototype instrument. A commercial version of the instrument is available (TLB Fibres, Leeds).

References
1. MacDougall, D. B., Food Chem., 1982, 9, 75.
2. MacDougall, D. B., and Rhodes, D. N., J. Sci. Food Agric., 1972, 23, 637.
3. Lister, D., and Spencer, G. S. G., in "The Problem of Dark Cutting in Beef," Martinus Nijhoff, The Hague, 1981, p. 129.
4. MacDougall, D. B., and Jones, S. J., in "The Problem of Dark Cutting in Beef," Martinus Nijhoff, The Hague, 1981, p. 328.
5. Bendall, J. R., and Wismer-Pedersen, J., J. Food Sci., 1962, 27, 144.
6. Scopes, R. K., Biochem. J., 1964, 91, 201.
7. Penny, I. F., J. Sci. Food Agric., 1977, 28, 329.
8. MacDougall, D. B., J. Sci. Food Agric., 1970, 21, 568.
9. Cheah, K. S., Cheah, A. M., Crosland, A. R., Casey, J. C., and Webb, A. J., Meat Sci., 1984, 10, 117.
10. Barton-Gade, P. A., in Froystein, T., Slinde, E., and Standal, N., Editors, "Porcine Stress and Meat Quality," Agricultural Food Research Society, Ås, Norway, 1981, p. 205.
11. Taylor, A. A., Shaw, B. G., and MacDougall, D. B., Meat Sci., 1980-81, 5, 109.
12. Hornsey, H. C., J. Sci. Food Agric., 1956, 7, 534.
13. Judd, D. B., and Wyszecki, G., "Color in Business, Science and Industry," Third Edition, Wiley, New York, 1975, pp. 139 and 420.

Meat Composition by Video Image Analysis

P. B. Newman
Meat Research Institute, Langford, Bristol, BS18 7DY

Video image analysis is not a new technique; it has been around for over a quarter of a century. It was first developed for military use, particularly in the area of photo-reconnaissance. Its earliest commercial applications were in the metal industry, inspecting components such as seams in tin-can production. Whilst its use in commercial and industrial environments has continued to expand and diversify, its application to problems in the meat industry is of fairly recent origin. The accurate measurement of fat/lean content has always been difficult to achieve in the buying of meat, where small discrepancies in the fat content of large consignments can cause problems for the wholesaler, who will often "over-lean" a consignment to avoid specification penalties, and for the purchaser, who finds himself overpaying for meat which is "overfat." Similarly, the processor who is unable to maintain accurately the composition of his meat products often faces the problem of variability in quality and process control.
The development of a technology capable of measuring, in a quantitative way, the fat and lean contents of meat, whilst at the same time being able to use this information to control a processing operation, would be beneficial to many in the meat industry. A number of systems have been developed for measuring fat/lean contents in commercial meat processing operations. These include the specific-gravity technique of the Protecon "Palm," the X-ray "Anal-Ray," the infrared "EMME" and the use of ultrasound.1 However, each of these has its own particular limitations, the most important being that they are all either batch or discontinuous sampling systems relying on representative sampling. The heterogeneity of boneless processing meat and the variability of commercial sampling techniques have already been well established.2,3 Therefore, for the best results, a continuous, rapid, non-destructive method which is capable of sampling all or much of the material is desirable. Video image analysis is ideal for this purpose and satisfies all of the criteria.

Principle of Operation

The technique of video image analysis (VIA) for a monochrome system is a simple one, although the hardware to achieve it is relatively sophisticated. Light from the meat strikes the mosaic of the video camera tube, where the individual elements of the mosaic are charged relative to the intensity of the light each receives. The number of elements in the mosaic determines the over-all resolution. The signal transmitted to the image analyser is generally an analogue one resulting from an electron beam scanning the video tube, producing a voltage at each element of the mosaic proportional to its charge. The darker the object, the less the charge and the lower the corresponding voltage produced (see Fig. 1). To reproduce the image on a monitor the reverse process occurs.
Colour video analysis is basically similar, although the picture is built up with the signals from three primary-colour tubes rather than a single one.

Fig. 1. Principle of scanning: voltage versus time across the field of view, with background, sample and fat thresholds marked.

This type of analysis produces a "grey-scale" image, which is typified by the normal monochrome television picture of white and black with varying shades of grey. However, when it is only necessary to separate a few components, e.g., fat and lean, this type of approach provides too much image information. Another approach, that of binary imaging, is more sensible. With this technique, "cut-off" points are established corresponding to the signal levels which separate fat from lean and meat from background. In this way, the only analysis necessary is whether the signal from an individual picture element (pixel) corresponds firstly to meat rather than background, and secondly to fat or lean. Thus real-time analysis is possible, in this instance of meat on a moving conveyor. Because all the samples are different and the image is continuously changing, this type of two-dimensional full-frame analysis is needed. Where the objects are of constant shape and it is only necessary to identify changes in one dimension, such as width or length, line analysis is both quicker and more economical. It is in this area that the development of high-speed digital cameras is making the greatest progress.

Uses in the Meat Industry

As both the cost of meat and the popularity of convenience foods steadily rise, the uses of VIA in meat processing continue to increase. Whilst originally designed to measure the fat/lean content of fresh or frozen boneless meats in a variety of forms (e.g., sliced, diced or broken pieces), the technology has been developed so that the image analyser is now able to control processing operations in which meat is a major ingredient.
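The two-threshold binary-imaging decision described under Principle of Operation can be sketched in a few lines, assuming an 8-bit grey scale in which background is darkest and fat brightest (both cut-off values are invented for illustration, not those of any real analyser):

```python
import numpy as np

# Assumed 8-bit grey levels: background darkest, lean mid-grey, fat brightest.
BACKGROUND_CUTOFF = 60   # below this: conveyor background (illustrative value)
FAT_CUTOFF = 170         # at or above this: fat (illustrative value)

def fat_percentage(frame):
    """Binary-image analysis of one frame: each pixel is classed as background,
    lean or fat using the two cut-off levels, and fat is reported as a
    percentage of the total meat (fat + lean) area."""
    meat = frame >= BACKGROUND_CUTOFF
    fat = frame >= FAT_CUTOFF
    meat_pixels = np.count_nonzero(meat)
    if meat_pixels == 0:
        return 0.0
    return 100.0 * np.count_nonzero(fat) / meat_pixels

# Toy frame: 100 background, 60 lean and 40 fat pixels
frame = np.array([30] * 100 + [120] * 60 + [200] * 40)
```

For the toy frame this reports 40% fat (40 fat pixels out of 100 meat pixels); a real analyser applies the same per-pixel test at video rate, which is what makes full-frame analysis of meat on a moving conveyor possible.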
A further refinement allows computerised "least-cost" formulation programmes to communicate with the analyser, thus achieving the most economic and flexible process control. Other variants, such as portion control and the measurement of bone-in cuts, are also being developed. Recently, it has become possible to measure the fat content of particulate meats (down to 4 mm) with VIA. This has been achieved by removing the problems of fat smear and drip stain using controlled conditions during mince production, and by improving separation with better optics and the use of limited-wavelength ultraviolet light, which causes fat to fluoresce; connective tissue has similar properties, but can be separated from the fat by the image analyser. The processing industry, particularly on the continent, has been showing an increasing interest in the measurement of lipid content rather than fat. The problems of obtaining representative samples from large-scale processing operations have already been mentioned. However, as a result of obtaining detailed information on the content and distribution of lipid in processing beef, it is now possible accurately to predict lipid content from VIA data4 (Table I).

TABLE I. COMPARISON OF VIA PREDICTED LIPID WITH CHEMICAL ANALYSIS
All values are % of total meat. Batch mass = 450 kg.

Batch   VIA fat area   VIA predicted lipid   Chemical lipid ± s.d.   n
A          41.80             33.55              33.96 ± 0.48        40
B          38.50             30.65              31.35 ± 0.88        40
C          37.63             29.71              29.38 ± 0.72        40
D          37.93             29.89              29.99 ± 0.44        40
E          39.83             31.31              30.04 ± 0.95        40

Finally, a look into the future. The Meat Research Institute is currently investigating the use of VIA in two new applications. In common with other methods of assessment in the meat industry, carcass grading is subjective.
By making it objective, it is hoped not only to remove the variability inherent in any subjective system but also to use the information, together with data derived from many years' experience of carcass dissection and retail butchery, to predict carcass composition. Secondly, in an industry that has high fixed overheads, there is considerable scope for automation. To this end, the MRI is looking at the application of VIA and robotics in boning operations. Clearly, video image analysis will have an increasingly important role to play as the use of modern technology expands in the meat industry.

References
1. Miles, C. A., and Fursey, G. A., Food Chem., 1977, 2, 107.
2. Newman, P. B., Meat Sci., 1984, 10, 87.
3. Newman, P. B., Meat Sci., 1984, 10, 161.
4. Newman, P. B., Meat Sci., 1984, submitted for publication.

The Food Research Institute Portable Pendulum: the Physical Properties of Potato Tubers in Relation to Tuber Fracture Damage

A. Grant and J. C. Hughes
Food Research Institute, Colney Lane, Norwich, NR4 7UA

Mechanical damage to potatoes is cumulative and can occur each time potatoes are handled. It is a major problem in Europe and North America (Potato Marketing Board,1 Smittle et al.2) and leads to direct losses on lifting, secondary infections during storage and increased labour costs during grading and processing.
Impact damage, which is one form of mechanical damage caused by collision, is determined by the level of abuse encountered during harvest and subsequent handling and by the inherent susceptibility of potatoes to damage, which depends on their mechanical and rheological properties.3 In impact damage studies, a key requirement is a satisfactory test for assessing this inherent susceptibility, and one such test involves the use of pendulums. Pendulums need not be very large in order to reproduce impact conditions over the wide range of velocities and energies sustained by potatoes (and other biological material) during harvest and handling; moreover, in addition to standard conditions of impact, they can provide information on changes occurring in the tuber during impact.

The Food Research Portable Pendulum

The Food Research Institute (FRI) has recently developed and built a portable pendulum (Hughes et al.4) for use both in the laboratory and in the field (Fig. 1). The instrument consists of two main parts: firstly, a pendulum with an angular displacement transducer, a sample holding system and arm release mechanisms; and secondly, a box containing the battery-powered electronics, including control and display units, connected by cable to the angular displacement transducer. The angular displacement transducer, which is attached to one end of the spindle, is used to determine the deformation of the sample during impact (d₁), the non-recoverable deformation (d₂) and the rebound height of the arm after impact, which is used to calculate the energy absorbed (Eab) by the tuber during impact. These data are displayed by a single digital readout and selector switch.
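The rebound-height calculation of absorbed energy can be sketched as a simple energy balance (treating the arm plus added weights as an equivalent mass acting at the indentor is my simplification, not the instrument's actual calibration):

```python
G = 9.81  # gravitational acceleration, m s^-2

def energy_absorbed(effective_mass_kg, drop_height_m, rebound_height_m):
    """Energy absorbed by the tuber, from the potential energy the arm fails
    to recover on the rebound: E_ab = m * g * (h_drop - h_rebound)."""
    return effective_mass_kg * G * (drop_height_m - rebound_height_m)
```

An arm equivalent to 0.216 kg dropped from 30 cm that rebounds to 5 cm has absorbed about 0.53 J; a lossless impact (equal drop and rebound heights) absorbs nothing.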
The duration of sample penetration by the indentor (t₁) and the time taken for the indentor to return to the point of initial contact with the tuber (t₂) are shown on individual displays.

Experimental

In order to assess the performance of the pendulum in damage susceptibility tests, i.e., the reliability of the impact properties Eab, d₁, d₂, t₁ and t₂ as indicators of damage caused by the pendulum impact, the instrument was tested in the field and in the laboratory.

Fig. 1. The portable pendulum instrument. Key: 1, pendulum base made of aluminium angle (50 × 50 mm) and plate (6 mm), over-all dimensions 62 × 23 × 4 cm; 2, pendulum frame, 2.54 cm square-section 16-gauge steel tube, over-all height 50 cm; 3, bolts; 4, horizontal spindle with angular displacement transducer attached; 5, pendulum arm; 6, detachable indentor; 7, weights; 8, metal plate calibrated in indentor drop height 5-40 cm; 9, arm release catch; 10, sample holding device; 11, supporting metal plate of holding device with hole 22 mm in diameter; 12, control and display unit.

At each of 12 farms, 20 representative tubers were selected from 20-30 hand-dug roots and impacted in the field. In the laboratory, material from a variety trial containing some potentially damage-susceptible seedlings and some control varieties was tested after storage at 5 °C (11-15 tubers per sample). Whole tubers were held under a 5 kg load and impacted on the flattest, middle region using a 6.35 mm radius-of-curvature indentor. The kinetic energy of impact was 0.7325 J and the velocity 2.602 m s⁻¹. These conditions were achieved by dropping the pendulum arm, with 150 g added weight, through 30 cm, and were similar to a 216 g tuber falling 35 cm on to web rods. They were chosen to produce fracture damage as opposed to blackspot,4 which under pendulum tests occurs at lower energies and drop heights.
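The quoted impact conditions can be cross-checked against the stated free-fall equivalence with a few lines of arithmetic:

```python
import math

g = 9.81  # m s^-2

# Equivalent free fall quoted in the text: a 216 g tuber falling 35 cm
mass, drop = 0.216, 0.35
velocity = math.sqrt(2 * g * drop)       # impact velocity, m s^-1
energy = mass * g * drop                 # kinetic energy at impact, J

# Effective mass implied by the quoted pendulum conditions, from KE = 0.5*m*v**2
implied_mass = 2 * 0.7325 / 2.602 ** 2
```

The free fall gives a velocity of about 2.62 m s⁻¹ and an energy of about 0.74 J, close to the quoted 2.602 m s⁻¹ and 0.7325 J, and the implied effective mass is about 0.216 kg, matching the equivalent tuber mass.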
After impact, tubers were stored at 20 °C and 60% relative humidity for 14-19 days prior to damage estimation. Samples damaged externally by the spherical indentor of the pendulum usually showed a circular pattern of damage, except for splits, where fissures radiated out from the circle of crushed tissue. External damage was therefore measured on a 0-4 scale, i.e.: 0, no external damage; 1, slight skin breakage; 2, skin breakage and slight crushing; 3, crushing and slight splitting (<4 mm splits); 4, severe crushing and splitting (>5 mm splits). Internal damage was measured on tubers that were sliced transversely through the centre of the externally damaged zone, taking care not to cut along any splits. The width and depth of the damaged zone were measured and the type of damage noted, i.e.: crushing, a discrete zone of damage, often seen as a light-coloured area ringed by a distinct border of dark colour; and cracking or internal fissures, small cracks usually radiating from the skin.

Results and Discussion

After impact with the pendulum, the material in this experiment showed a very wide range of damage, similar in type to fracture damage found in the field, i.e., crushing, internal and external cracks. The damage ranged from slight surface damage with slight internal crushing in the more resistant material to severe external damage and deep internal cracks in the more susceptible tubers. Similarly, a wide range of readings was found for the physical factors (Eab, d₁, d₂, t₁, t₂) that were obtained instantaneously after impact; these were all inter-related and highly correlated with one another. The correlations between the mean values of these physical factors and damage ranged from r = 0.8113 (d₁ versus width) to r = 0.9657 (d₂ versus external score).
The highest correlations were generally found between damage and either energy absorbed (Eab) or permanent deformation of the tuber (d₂), which, respectively, explained 91% and 93% of the variation in external score. However, of the two physical factors, energy absorbed may be the more practical indicator of damage, because permanent deformation measurement is more subjective, i.e., it requires the holding of the indentor, by hand, against the impacted zone after impact. A small improvement in the correlations was obtained when multiple regressions were calculated for the physical factors against external score + width of internal damage + depth of internal damage (the damage rating):

Damage rating = 0.04591 + 0.92348 ext. score + 0.14573 width + 0.04401 depth

An example for energy absorbed against damage rating is shown in Fig. 2.

Fig. 2. Relationship between energy absorbed and damage rating (means).

Conclusion

The FRI portable pendulum provides a highly accurate and instantaneous prediction (on the basis of energy absorbed) of the total amount of fracture damage occurring in tubers impacted under precisely controlled conditions, and as such may provide a direct test for screening out, in breeding programmes at variety trials, material that is more susceptible to fracture damage than currently acceptable varieties. It may also have a use in field trials or even on farms for assessing the optimum time to harvest.

References
1. "National Damage Survey 1973," Potato Marketing Board, London, 1974.
2. Smittle, D. A., Thornton, R. E., Peterson, C. L., and Dean, B. B., Am. Potato J., 1974, 51, 152.
3. Gray, D., and Hughes, J. C., "Tuber Quality," in Harris, P. M., Editor, "The Potato Crop, the Scientific Basis for Improvement," Chapman and Hall, London, 1978, pp. 504-544.
4. Hughes, J. C., Grant, A., Prescott, E. A. H., Pennington, D. E., and Worts, W. H., "A Portable Pendulum for Testing Dynamic Tissue Failure Susceptibility of Potatoes," 1984, in preparation.
ISSN:0144-557X
DOI:10.1039/AP9842100494
Publisher: RSC
Year: 1984
Data source: RSC
|
9. |
Quantitative NMR |
|
Analytical Proceedings,
Volume 21,
Issue 12,
1984,
Page 500-506
I. S. Mackenzie,
Abstract:
Quantitative NMR

The following are summaries of two of the papers presented at a Joint Meeting of the South East Region and the Special Techniques Group held on May 23rd, 1984, at the BP Research Centre, Sunbury.

Theoretical Aspects of Quantitative NMR

I. S. Mackenzie
Physics Department, Manchester University, Manchester, M13 9PL

This paper concerns some aspects of pulsed nuclear magnetic resonance which are basic to most quantitative experiments. Sample-dependent factors such as nuclear relaxation times and Overhauser enhancements,1,2 although important in quantitative work, are not discussed here. In quantitative analysis we are interested in comparing the magnetisations of nuclei in different chemical environments. These magnetisations are rotated by the NMR excitation, and resolution of the various contributions rests on the fact that the areas of the lines in the Fourier transform of the free induction decay are in one-to-one correspondence with the initial magnitudes of the contributing transverse components. Thus, we are concerned with the extent to which the transverse components reflect the original static magnetisations of the species and, ultimately, with the precision of area measurements on the frequency spectrum.

Free Induction Decay

The FID consists of a superposition of NMR signals, each of which can be described by a magnitude, phase and characteristic frequency and decay time, and, of course, noise. Our ability to recover the magnitude of a particular component from the area of its absorption-mode spectrum depends on a knowledge of its phase. The main sources of differential phase shifts between components of the FID are the nature of the excitation, filtering procedures and the delay in recording. Phase shifts for tipping pulses that have rectangular envelopes can be computed straightforwardly3 and are found to be more or less linear in frequency across the spectrum.
Furthermore, the phase spreading due to spatial inhomogeneities in the applied rotating field (B1) turns out to be an unimportant effect (partly because the contribution, to the FID, of spins away from the RF field maximum falls off with B1).4 Phase shifts for more realistic (non-rectangular) pulse envelopes are unlikely to yield to a simple linear phase correction across the spectrum. Loss of magnitude of transverse magnetisation for spins offset from the excitation frequency is a more important effect. For 90° pulses this can be about 2% for an offset Δν, such that 4Δντ = 1 (where τ is the duration of a 90° pulse), with respect to Δν = 0. For a 60° tip angle the corresponding figure is 3%. It is clearly desirable to minimise all of these effects by working with an RF field amplitude which brings the range of offset parameters, Δντ, for the nuclei under study to a tolerable value. This can be expensive in excitation power requirements because the need to reduce the coil ring time commensurately with the desired reduction in pulse length means that reduction of τ by a factor of 2 requires a factor of 2³ increase in power. If two resonances to be compared are so offset from each other that differential effects are important then it is desirable to set the excitation frequency mid-way between the lines; the effects of non-ideal excitation will then be the same for both signals, which will appear with equal and opposite phase shifts. This phase shift can always be corrected exactly, for the two lines of interest, or it may be left as a parameter in a line fitting procedure.

Signal Processing

The discrete Fourier transform of samples of a continuous function produces a faithful representation of sampled values of the continuous Fourier transform only if the function is both time and bandwidth limited (a theoretical abstraction).
The failure to meet this criterion produces artefacts in the Fourier transform; these can be minimised by taking steps to ensure that the signal energy outside the desired bandwidth is a negligible fraction of the total signal energy. The filtering procedure that achieves this limitation produces rapid changes of amplitude and phase shift near the band edge, which is therefore a region to be avoided in quantitative measurements. When dealing with resonances with different decay times it is the area under the frequency spectrum that is of interest in quantitative measurement. The relationship between the total area and the zero time amplitude of the FID implies that the uncertainty in the area is the same as the uncertainty in the initial FID amplitude. Because of this, manipulations in the time domain, such as exponential filtering, which do not affect the zero time value of the FID, cannot reduce the uncertainty of area measurement in the frequency domain. The improved appearance, in signal to noise ratio (S/N) terms, of the frequency spectrum achieved by filtering is deceptive as far as area calculations are concerned because correlations have been introduced into the noise frequency spectrum.
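The statement that exponential filtering cannot improve area precision follows from a DFT identity: the sum over all spectral points equals N times the first FID point, so any time-domain filter that leaves the first point unchanged leaves the total spectral area unchanged. A numerical check (an added illustration, not from the paper; all signal parameters are invented):

```python
import numpy as np

# For a DFT, sum(spectrum) == N * fid[0].  Exponential line broadening
# multiplies by exp(-pi * lb * t), which equals 1 at t = 0, so the first
# FID point -- and hence the total spectral area -- is unchanged.
rng = np.random.default_rng(0)
n = 4096
t = np.arange(n) * 1e-3
fid = (np.exp(2j * np.pi * 40 * t - t / 0.3)
       + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

for lb in (0.0, 2.0, 10.0):                  # line broadenings, Hz
    filt = fid * np.exp(-np.pi * lb * t)     # filt[0] == fid[0] since t[0] == 0
    total = np.fft.fft(filt).sum()
    assert np.allclose(total, n * fid[0])
```

The filter redistributes the noise (correlating it between frequency points) but cannot reduce the uncertainty of the integrated area, exactly as the text argues.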
The S/N in the frequency domain is at a maximum for sampling duration T = 1.25T2.1 Beyond this time the increased accumulation of noise with respect to signal leads to a reducing S/N at the line peak. However, where area calculations are concerned, the increased number of data points obtained in the frequency domain as T is increased exactly compensates for the increasing uncertainty in the data values and the fractional uncertainty in the computed area reaches a constant value independent of T. Thus, in what follows, we will relate uncertainties in the computed area to the maximum achievable signal to noise ratio, (S/N)max, of the line peak although operationally, in order to achieve a sensible number of points per line width (≥3),2 we shall be considering acquisition times T ≥ 9T2, well beyond the time which optimises S/N at the peak frequency. In the discussion of area computations we assume that systematic base line errors have been removed.

Numerical Integration

The signal (A) to noise (σA) ratio in an area computation, where the count extends over n line widths, clearly degrades as more points in the wings of the line are included. Clearly the count has to be limited in some way and this involves making use of any available knowledge of the line shape. For an assumed Lorentzian line shape the fraction (f) of the total area included in n line widths is known [f = (2/π) tan⁻¹ n] and A = S/f, where S is the numerically integrated area. There is a trade off between the uncertainties in S and f which produces a broad optimum, for the minimum uncertainty in A, for a numerical integration over 4-7 line widths together with the best estimate of the actual number of line widths used. This in turn leads to

A/σA ≈ 0.5 (S/N)max.
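The quoted fraction f = (2/π) tan⁻¹ n can be verified by direct numerical integration of a Lorentzian (an added illustration, not part of the original summary):

```python
import numpy as np

# Fraction of a unit-area Lorentzian captured by integrating over
# n line widths (FWHM), checked against f = (2/pi) * arctan(n).
def captured_fraction(n_widths, fwhm=1.0, dx=1e-4):
    a = fwhm / 2.0                               # half width at half maximum
    x = np.arange(-n_widths * fwhm / 2, n_widths * fwhm / 2, dx)
    lorentz = (1.0 / np.pi) * a / (x**2 + a**2)  # unit-area Lorentzian
    return lorentz.sum() * dx

for n in (1, 4, 7, 64):
    assert abs(captured_fraction(n) - (2 / np.pi) * np.arctan(n)) < 1e-3
# n = 4 captures about 84% of the area; the estimate used in the text is
# then A = S / f, with S the numerically integrated area.
```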
This is about three times better than would be achieved by counting over a range of 64 line widths to obtain 99% of the total area.

Least Squares Fitting

We can go further in imposing our a priori knowledge on the data by carrying out a full least squares fit to a Lorentzian line shape. With this procedure we can expect a signal to noise for the area of

A/σA ≈ 0.8 (S/N)max.

In order to achieve this uncertainty it is only necessary to fit the curve over a few line widths, provided that a statistically significant number of points is included, as the data in the wings contribute in an insignificant way to the uncertainty in the computed area. To improve on the above accuracy it is necessary to make use of the data in the dispersion component of the frequency spectrum, either by fitting it directly to the theoretical dispersion line shape or by obtaining the interpolated points in the absorption spectrum that are derivable solely from the dispersion component amplitudes (this can be achieved with negligible imprecision by zero filling). Either way an improvement of √2 in the area signal to noise is obtainable.

Concluding Remarks

Provided that steps are taken to minimise differential tipping angle effects and base line errors in the frequency spectrum, quantitative measurements can be made with an accuracy comparable to that of peak height measurements. Fitting procedures are to be preferred if the line shape can be assigned with confidence; in that event phase can be retained as a fitting parameter in the full utilisation of the real and imaginary parts of the Fourier spectrum. It is a pleasure to acknowledge stimulating conversations with Drs. G. Morris and R. Waigh.

References
1. Becker, E. D., Ferretti, J. A., and Gambhir, P. N., Anal. Chem., 1979, 51, 1413-1420.
2. Weiss, G. M., and Ferretti, J. A., J. Magn. Reson., 1983, 55, 397.
3. Martin, M. L., Delpuech, J. J., and Martin, G. J., "Practical NMR Spectroscopy," Chapter 4, Heyden, London, 1980.
4. Hoult, D. I., and Richards, R. E., J. Magn. Reson., 1976, 34, 425.

Analysis of Pharmaceuticals: Some Practical Applications of Quantitative Pulsed Fourier Transform 1H NMR Spectroscopy
Geoff Griffiths
Central Analytical Laboratories (Chemical), The Wellcome Foundation Limited, Dartford, Kent, DA1 5AH

In order to function efficiently and effectively, chemical and pharmaceutical research, development and manufacturing require the support of chemical analysis. The objectives of analysis are varied, as are the chemicals themselves. The latter encompass raw materials, synthetic intermediates, drugs, excipients and formulated products (e.g., tablets, injections, creams and syrups). Often, even the pharmaceutical containers come under the scrutiny of the chemical analyst. These materials are primarily organic chemicals and analysis is aimed at establishing or confirming the identity and purity of the bulk substance and the identity and amount of its impurities and additives. The stability of the material is also of understandable importance. NMR spectroscopy has been firmly established as a tool in structural analysis for almost two decades. On the other hand, its use in quantitative work has been much slower in establishing itself alongside the armoury of other techniques presently available. The familiar criticisms are that it is too expensive, difficult, insensitive, inaccurate and unreliable. Ten years ago, these criticisms were to a large extent valid, but today most manufacturers of NMR instruments can offer reliable, accurate and sensitive equipment for 1H and 13C observation, which is relatively simple to operate on a routine basis. Unfortunately, still the biggest objection is the high cost, although in terms of real spending power this has halved over the last 10 years.
The entry fee for the serious contender is upwards of £60k for an iron magnet instrument and double that for a superconducting high field system.

Applications

In the examples that follow, it is hoped to provide some information as to why organisations involved in chemicals and pharmaceuticals are giving more than cursory consideration to this powerful and versatile quantitative technique. The examples are confined to proton measurements in our own laboratory using pulsed Fourier transform NMR spectrometry at 80 MHz. As yet they are unpublished and in this respect they are novel. However, the principles involved are now well established, both from the analytical and NMR standpoints. The examples all involve the use of an internal standard and it will be profitable to compare the general analytical expressions for percentage purity for gas or liquid chromatography (1) and for NMR spectrometry (2).

For chromatography:

Percent purity (S) = [Area(S)/Area(R)] × [Mass(R)/Mass(S)] × Relative response factor × Percent purity (R) .. (1)

For NMR spectrometry:

Percent purity (S) = [Area(S)/Area(R)] × [Mass(R)/Mass(S)] × [MW(S)/MW(R)] × [N(R)/N(S)] × Percent purity (R) .. (2)

where (S) and (R) refer to the sample and internal standard, respectively, MW represents the relative molecular mass and N the number of hydrogen atoms per molecule which contribute to the analyte signal. In chromatography, each chemical component contributes only one signal whereas in spectroscopy there may be several signals, one or more of which may be chosen in the analysis as appropriate.
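Equation (2) can be expressed as a short function. This is an added illustration; all numerical values below are invented, not taken from the paper.

```python
# Equation (2): internal-standard NMR percent purity.
def nmr_percent_purity(area_s, area_r, mass_s, mass_r,
                       mw_s, mw_r, n_s, n_r, purity_r):
    """(S) = sample, (R) = internal standard; MW = relative molecular mass,
    N = hydrogens per molecule contributing to the integrated signal."""
    return (area_s / area_r) * (mass_r / mass_s) \
         * (mw_s / mw_r) * (n_r / n_s) * purity_r

# Invented example values:
p = nmr_percent_purity(area_s=125.0, area_r=100.0, mass_s=50.0, mass_r=40.0,
                       mw_s=120.0, mw_r=100.0, n_s=6, n_r=5, purity_r=99.0)
# 1.25 * 0.8 * 1.2 * (5/6) * 99.0 -> 99.0 percent
```

Note that, unlike equation (1), no empirically determined response factor appears: the only calibrated quantity is the purity of the internal standard.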
The important point to emerge from the above equations is that in NMR the response factor is unity and is therefore omitted. Provided that certain conditions are met, each nucleus of the same isotope, irrespective of its particular environment within the same component or a different component molecule, will absorb exactly the same amount of radiofrequency energy during the magnetic resonance transition. This point is important because it means that the NMR determination is an absolute one in that it does not require the availability of a reference sample of the compound under analysis, even in the development stage of the analytical method. What is required, of course, is a reference sample of known purity of an appropriate internal standard. This, as is shown in the following examples, is often a much easier proposition.

Acetylethyleneimine

The first example is the purity determination of acetylethyleneimine (AEI). This material is used to inactivate the foot and mouth disease virus. It acts as a biological alkylating agent, primarily on the virus RNA nucleus, but leaves the morphology of the virus capsid essentially unchanged. The inactivated virus is subsequently used in the production of foot and mouth disease (F and MD) vaccine.

[Structures: acetylethyleneimine; phthalide]

AEI is highly toxic and reactive. Specially purified reference samples are not readily obtained. 1H NMR spectroscopy provides an absolute and reasonably specific method of analysis with the minimum of sample preparation. Phthalide was selected as an internal standard because it gives a convenient reference signal at 5.3 p.p.m. and can be bought as a high purity chemical. Weighed amounts of AEI and phthalide are dissolved in deuterochloroform. The mixture is transferred to an NMR sample tube, together with a small amount (0.5 mg) of chromium(III) acetylacetonate, Cr(acac)3, which acts as a shiftless paramagnetic relaxation agent, and examined directly.
The Cr(acac)3 serves to reduce the pulse repetition time to 10 s and the total spectrum acquisition time to about 10 min. The NMR spectrum of the system is given in Fig. 1. Accurate measurement of the area of the signals from all seven hydrogens of AEI at about 2 p.p.m. and the signal from the methylene hydrogens of the phthalide at about 5.3 p.p.m. yields the purity of the AEI sample by application of equation (2). In practice, the presence of certain known impurities adds to the complexity of the treatment of the measured areas and to the over-all analysis, but this does not seriously detract from the usefulness of the method.

[Fig. 1. NMR signal system for the determination of AEI (δ, p.p.m.).]

3-Phenoxybenzaldehyde cyanohydrin

The second example is not strictly a pharmaceutical one, but relates to the production of the pyrethroid insecticide, Cypermethrin. It concerns the analysis and assay of the raw material, 3-phenoxybenzaldehyde cyanohydrin.

[Structures: 3-phenoxybenzaldehyde cyanohydrin; Cypermethrin]

Conventional analysis for these types of compounds is either by gas or liquid chromatography, but the stability of this cyanohydrin is such that it is not readily amenable to accurate analysis by these techniques. There is the added complication of the lack of a suitable reference sample. These are not problems for NMR spectroscopy and a highly satisfactory direct method of assay and analysis has been devised with use of benzyl alcohol as an internal standard. The NMR spectrum of the system is given in Fig. 2.

[Fig. 2. NMR signal system for the analysis of 3-phenoxybenzaldehyde cyanohydrin (δ, p.p.m.). Peaks are: 1, 3-phenoxybenzaldehyde; 2, 3-phenoxybenzaldehyde cyanohydrin; 3, benzyl alcohol; 4, toluene; 5, triethylammonium hydrogen sulphate.]
The cyanohydrin provides a specific signal from the methine at 5.3 p.p.m., the area of which is measured against the methylene resonance from the benzyl alcohol at 4.5 p.p.m. Weighed amounts of sample and standard are dissolved directly in deuterochloroform solution. It is usually necessary to adjust the concentration and temperature so that the methylene signal is sufficiently resolved from the hydroxyl signal. It is not possible to add a relaxation agent as this broadens the methine signal, so unfortunately a pulse repetition rate of about 20 s is necessary and leads to a total spectrum acquisition time of about 45 min. The method also simultaneously determines the impurities 3-phenoxybenzaldehyde and triethylammonium hydrogen sulphate using the aldehyde signal at 9.8 p.p.m. and the methyl triplet signal at 1.2 p.p.m., respectively. Toluene and several other trace impurities are determined by gas chromatography. It should be emphasised that accurate areas can be obtained only after setting the appropriate base line to zero. This is achieved on our spectrometer with a special integration package with Lagrange base line correction, SINTEG for short.* Simply, N (N ≤ 8) base line data points are used to compute a Lagrange interpolation polynomial of the (N-1)th degree. This is then used to zero the spectrum. The ordinate value of each of these N data points is the mean of several adjacent points. A single command results in digital integration of an area defined between two cursor points and a printed output of this area.

Glyceryl trinitrate

A final example is the NMR assay of Angised tablets, which have glyceryl trinitrate (GTN) as the active ingredient and which are used in the treatment of angina. A variety of analytical methods exists for GTN (or nitroglycerine as it is sometimes known in forensic circles), but all require a reference sample of the substance for use as a standard.
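The Lagrange base line correction described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the Bruker SINTEG implementation; the function names, window size and demo data are invented.

```python
import numpy as np

# Sketch: N base line points (each smoothed by a local mean, as in the text),
# an (N-1)-degree polynomial through them, subtraction to zero the spectrum,
# then digital integration between two cursor points.
def lagrange_baseline_correct(spectrum, baseline_idx, half_window=3):
    y = np.array([spectrum[max(i - half_window, 0):i + half_window + 1].mean()
                  for i in baseline_idx])
    # A degree N-1 polynomial through N points is the Lagrange interpolant.
    coeffs = np.polyfit(baseline_idx, y, deg=len(baseline_idx) - 1)
    return spectrum - np.polyval(coeffs, np.arange(len(spectrum)))

def integrate(spectrum, lo, hi):
    return spectrum[lo:hi].sum()       # area between two cursor points

# Demo: a Gaussian line (area 10 * 5 * sqrt(2*pi)) on a sloping base line.
x = np.arange(200.0)
raw = 5.0 + 0.02 * x + 10.0 * np.exp(-((x - 100.0) / 5.0) ** 2 / 2)
corrected = lagrange_baseline_correct(raw, [5, 40, 160, 190])
peak_area = integrate(corrected, 70, 130)
```

After correction the base line regions sit at zero and the integral recovers the line area without the sloping offset.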
Usually this reference is either a solution or a dispersion on lactose, but, nevertheless, this itself has to be calibrated before use. With NMR spectroscopy the question of availability and calibration of a reference sample of GTN does not arise.

[Structures: glyceryl trinitrate (GTN); methyl 3,4,5-trimethoxybenzoate (MTMB)]

1H NMR signals from GTN occur at 5.5 p.p.m. (methine) and 4.8 p.p.m. (methylene). A suitable internal standard is methyl 3,4,5-trimethoxybenzoate (MTMB), which gives a reference signal at 3.9 p.p.m. Each 120 mg Angised tablet contains only 0.5 mg of GTN. Prior to NMR analysis, it is necessary to take about 50 powdered tablets, add the MTMB and extract into deuterochloroform from an aqueous phase. The volume of the extract is then reduced to about 0.5 ml. Cr(acac)3, 0.3 mg, is added as a relaxation agent and this allows a fast pulse repetition rate of 6 s. The NMR signal system is shown in Fig. 3.

[Fig. 3. 1H NMR signal system for the assay of Angised tablets (δ, p.p.m.; areas A, B, C and D).]

At 80 MHz the presence of spinning side-bands and 13C satellites does introduce small errors, up to about 2%. However, a simple expression can be calculated that corrects for this provided that all four areas, A, B, C and D (Fig. 3), are separately measured.

* The software package SINTEG was produced by Bruker Spectrospin Ltd., in 1977, for the Bruker WP-80 instrument thanks to the ideas and efforts of Mr. A. G. Ferrige, Wellcome Research Laboratories, Beckenham.

The method has also been successfully applied to a similar product, Cardilate Tablets, which has erythrityl tetranitrate (ETN) and not GTN as active ingredient. Raw material actives and pre-tabletted granules have also been analysed by this technique.
Currently, the method is being used to provide back-up support to the stability testing by HPLC of a particular formulation of Angised tablets.

Summary

It is hoped that the above examples have shown that quantitative NMR spectrometry has a strong role to play in chemical and pharmaceutical analysis. Some advantages and uses are summarised in Table I.

TABLE I. QUANTITATIVE 1H NMR

Advantages: no reference sample required; minimum sample handling/preparation; good accuracy; good specificity; short method development time; general applicability.

Uses: analysis of potentially hazardous substances; analysis of potentially unstable substances; reference method for calibration of other techniques; supportive method for less specific techniques; fast and accurate analysis.

The Future

The future for the technique seems particularly healthy. Quantitative 13C NMR analysis is already used in a variety of applications, and quantitative work with many other nuclei, in both liquid and solid state, will develop and grow. However, for several reasons, 1H NMR will certainly continue to dominate the quantitative field for many more years. NMR spectroscopy has expanded at a colossal rate over the last 10 years, largely as a result of major scientific advances in magnet and computer technologies. It is sobering to reflect that the above examples were tackled with technology that is now several years out of date. Today, superconducting magnet systems for routine applications give a five-fold improvement in resolution and a ten-fold improvement in sensitivity. These improvements are of obvious benefit in analysis and, together with sophisticated auto-sampling accessories and the capability of almost fully automated spectrometer operation, should considerably widen the horizons of quantitative NMR in chemical and pharmaceutical analysis.

I gratefully acknowledge the efforts of Dr. G. A. Stewart and Mr. A. C.
Caws, who have resolutely supported the concept and practice of quantitative NMR in our analytical laboratories at Wellcome, Dartford.
ISSN:0144-557X
DOI:10.1039/AP9842100500
出版商:RSC
年代:1984
数据来源: RSC
|
10. |
Application of image analysis to the measurement of particle size and shape. Use of automatic image analysis in the assessment of particle and grain size distributions |
|
Analytical Proceedings,
Volume 21,
Issue 12,
1984,
Page 506-508
Brian Ralph,
Preview
|
PDF (234KB)
|
|
摘要:
506 IMAGE ANALYSIS FOR MEASUREMENT OF PARTICLE SIZE Anal. Proc., Vol. 21

Application of Image Analysis to the Measurement of Particle Size and Shape

The following is a summary of one of the papers presented at a Meeting of the Particle Size Analysis Group held on June 13th, 1984, at the Health and Safety Executive Occupational Medicine and Hygiene Laboratories, London, N.W.2.

Use of Automatic Image Analysis in the Assessment of Particle and Grain Size Distributions
Brian Ralph
Department of Metallurgy and Materials Science, University College, Cardiff

In a very broad spectrum of the life, earth and physical sciences the need to quantify microstructures or fine particulate matter is of very considerable consequence. Obtaining such data can prove extremely tedious, and where sufficient contrast can be obtained real benefit accrues from adopting an automatic technique. Where the contrast between the features to be detected is too low or rather complex, an automatic processing of the data that is entered manually may still be of considerable advantage. For those instances where the contrast is higher and/or simpler a fully automatic technique may be adopted whereby the features are both detected and sized automatically. Instrumentation for performing such analyses is now very widely available. Thanks to the reducing cost of computer hardware, such instrumentation is relatively much more sophisticated (and "user friendly") and cheap compared with just a few years ago.

Experimental

Very considerable care is needed in setting up and executing an experiment designed to quantify particle size. Consideration has to be given to the sampling procedure and to the choice of a microscopical routine which has sufficient resolution and contrast. The means by which the size data are acquired and sorted also warrants careful attention.
In general, mean size data are relatively easily obtained, whereas some indication of the shape of the size distribution requires the sizes of many more features to be measured. Clearly, if the sizes or size distributions of different or related samples are to be compared then statistical tests will be very important. For many purposes, for instance comparing populations of dispersed particulate matter, there is no need to proceed past the point where data collected two-dimensionally are compared statistically. All of the current generation of automatic image analysers contain a microprocessor and come with a software package which allows this comparison to be made very simply. Usually, these machines also allow the features to be sized or graded on the basis of a number of different measurements, such as area or maximum Feret diameter. There are, however, a number of instances in the life, earth and materials sciences where a convolution into the third dimension is required. This involves using some aspects of the science now usually referred to as stereology.1-4 In many instances this stereological transformation is easily performed and only limited numbers of two-dimensional measurements are needed. Here the obvious example is obtaining the volume fraction of a phase within a microstructure directly from the point fraction obtained from a two-dimensional section. By contrast, the means of convoluting a two-dimensional size distribution into three dimensions is much more complex and requires either a knowledge of the shape of the features under investigation or a model of this shape.5,6 The process of performing this two- to three-dimensional convolution degrades the data and so is to be avoided where comparisons of two-dimensional data will suffice.
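The volume fraction example mentioned above (the stereological identity that the point fraction on a section estimates the volume fraction) can be sketched numerically. This is an added illustration; the synthetic disc image stands in for a segmented micrograph and all numbers are invented.

```python
import numpy as np

# Point counting: the fraction of systematic grid points landing on a phase
# (P_P) estimates its volume fraction (V_V).
rng = np.random.default_rng(1)
dim, n_discs, radius = 512, 60, 15
yy, xx = np.mgrid[0:dim, 0:dim]
phase = np.zeros((dim, dim), dtype=bool)
for cx, cy in rng.uniform(0, dim, size=(n_discs, 2)):
    phase |= (xx - cx) ** 2 + (yy - cy) ** 2 < radius ** 2

true_area_fraction = phase.mean()           # ground truth on this section
point_fraction = phase[::16, ::16].mean()   # P_P from a 32 x 32 point grid
```

A sparse systematic grid of about a thousand points already reproduces the full-image area fraction closely, which is why the transformation needs only limited two-dimensional measurement.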
Further, in general very large amounts of two-dimensional data are required in order to produce three-dimensional size distributions with sufficient statistical confidence for a comparison to be made. Notwithstanding these difficulties and limitations much use has been made of this stereological approach. An example cited here will perhaps illustrate the over-all value of this approach. A detailed study has been made of grain growth in samples of polycrystalline aluminium containing a range of dispersions of fine alumina particles and with a range of initial grain size distributions.7-9 Metallic alloys are normally used in a polycrystalline form and the ability to modify and control the grain size distribution is of major technological interest. The boundaries between the grains are associated with an energy and thus holding samples at elevated temperatures tends to lead to a reduction in the total grain boundary area via grain boundary migration.10 A number of theories of grain growth have been proposed11-13 and the experimental programme, which involved extensive use of automatic image analysis, amongst other techniques, was designed to test the applicability of these theories. In addition there is a very considerable interest in the means by which migrating grain boundaries overcome the pinning forces due to fine dispersions of particles.
Results

Transmission electron microscopy has been used to investigate the interactions between migrating grain boundaries and individual alumina particles.8 It has been found that the degree of pinning depends on the boundary crystallographic parameters.8 This fundamental study of the fine scale interaction between particles and boundaries has then been linked with a detailed quantitative evaluation of grain size distributions upon grain growth.7 The experimental techniques and analytical philosophy adopted in this study have been described in detail.9 In that paper the method of selecting and preparing samples is described, together with details of the automatic data collection routine adopted. Statistical tests are described which allow a comparison of the two-dimensional and three-dimensional grain size distributions. Details of the stereological routines adopted are also given.9

This over-all study has established several important factors governing grain growth.7-9 In all instances shrinkage of the smallest grains was detected in accordance with recent ideas propounded by Gladman.13 Normal grain growth (where the grain size distribution remains the same shape but shifts to larger sizes) was observed in most instances and evidence in support of a limiting grain size largely matching the theoretical predictions of Hillert11 and Gladman12 was obtained. This limiting grain size might be seen to be where there is a balance between the driving pressure to reduce grain boundary area and the pinning forces from the alumina particle dispersion. Anomalous grain growth (where a limited fraction of the grains grow very much larger and thus the shape of the grain size distribution is radically altered) has also been established in those samples which contained the largest volume fraction of alumina.
It is suggested that this anomalous grain growth occurs where normal growth is inhibited and where the grain size distribution is initially rather broad.

Discussion

Only a brief outline of a study of grain growth has been given here, in a form which also gives some of the key steps that are used rather generally in assessing particle and grain size distributions. In addition, the early part of this presentation introduces a number of the key stereological references which may prove to be of value to those unfamiliar with this rather specialist literature. The author is grateful to the Science and Engineering Research Council and the Risø National Laboratory, Denmark, for financial support.

References
1. DeHoff, R. T., and Rhines, F. N., "Quantitative Metallography," McGraw-Hill, New York, 1968.
2. Underwood, E. E., "Quantitative Stereology," Addison-Wesley, Reading, MA, USA, 1970.
3. Weibel, E. R., Meek, G., Ralph, B., Echlin, P., and Ross, R., "Stereology 3," Blackwells, Oxford, 1972.
4. Weibel, E. R., "Stereological Methods," Academic Press, London, 1979.
5. Saltykov, S. A., in Elias, H., Editor, "Stereology," Springer, New York, 1967, p. 163.
6. Exner, H. E., Int. Met. Rev., 1972, 17, 25.
7. Tweed, C. J., Hansen, N., and Ralph, B., Metall. Trans., 1983, 14A, 2235.
8. Tweed, C. J., Ralph, B., and Hansen, N., Acta Metall., 1984, in the press.
9. Tweed, C. J., Hansen, N., and Ralph, B., Metallography, 1984, in the press.
10. Grant, E., Porter, A. J., and Ralph, B., J. Mater. Sci., 1984, 19, in the press.
11. Hillert, M., Acta Metall., 1965, 13, 227.
12. Gladman, T., Proc. R. Soc. London, 1966, 294A, 298.
13. Gladman, T., in Hansen, N., Jones, A. R., and Leffers, T., Editors, "Proceedings of the 1st Risø International Symposium on Recrystallisation," Risø Press, Roskilde, Denmark, 1980, p. 183.
ISSN:0144-557X
DOI:10.1039/AP9842100506
出版商:RSC
年代:1984
数据来源: RSC
|
|