CRC - Correction Techniques in Emission Tomography

Organization: CRC
Publication Date: 27 April 2012
Page Count: 301


In the past 40 years the field of tomography has evolved from simple X-ray projection imaging, autoradiography, fluorescence imaging and magnetic resonance imaging to the production of three-dimensional images of the accumulation of chemical tracers injected into animals, plants and human subjects. The first major breakthrough technology since Röntgen's discovery of X-ray projection imaging was X-ray computer-assisted tomography, known in the 1970s as CAT scanning or simply CT. The method allowed one to compute the internal distribution of tissue characteristics and injected contrast material from the amount of X-rays (photons) transmitted through an object from multiple angles. Though analog methods were available to represent what is inside an object from projections, by merely back-projecting the beams onto photographic material, it was the digital computer that enabled X-ray computed tomography to revolutionize medical radiography. X-rays aimed in multiple directions through an object (patient) provided measurements from which one could calculate the most likely distribution of tissue parameters that would produce the data observed by detectors positioned at multiple angles around the object.

The mathematical problem is as follows. Given the number of X-ray photons entering the object in a particular direction, and the number of photons that were able to exit the object, estimate the amount of material that attenuated the photons, and repeat this estimation for many paths through the object. This is an instance of the inverse problem, wherein an unknown is calculated from known observations that represent, or are a transformation of, the unknown. Mathematically the conversion turns out to be a simple manipulation: the logarithm of the ratio between the number of photons entering the body from the X-ray generator and the number of photons exiting the body along the line from generator to detector equals the sum of the attenuation contributed by the individual pieces of tissue along that line.
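As a minimal numerical sketch of this conversion (the attenuation coefficients below are illustrative, hypothetical values, not from the text), the Beer-Lambert law shows why the logarithm of the input/output ratio yields the line sum of attenuation:

```python
import math

# Hypothetical attenuation coefficients (per cm) of the tissue segments
# along one X-ray path, each segment 1 cm thick -- illustrative values only.
mu = [0.19, 0.25, 0.17, 0.21]
dx = 1.0                         # segment thickness in cm

I0 = 1_000_000.0                 # photons entering the body along this line
# Beer-Lambert law: each tissue segment attenuates the beam exponentially.
I = I0 * math.exp(-sum(m * dx for m in mu))

# The logarithm of the input/output ratio recovers the line sum of attenuation,
# i.e. sum(mu) * dx = 0.82 for these values.
line_sum = math.log(I0 / I)
print(line_sum)
```

Each measured ray thus delivers one linear equation in the unknown attenuation values, which is what makes the reconstruction problem tractable.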

By accumulating these ratios from many projections of X-rays, one can answer the question: what is the most likely distribution of tissue attenuation that could give the observed results? The X-ray CAT scan is the 3D distribution of attenuation, which is generally equivalent to the distribution of electron density.
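The "most likely distribution" question can be sketched as a least-squares inverse problem on a toy 2x2 object. The ray geometry and attenuation values below are hypothetical, chosen only so that the small linear system is solvable:

```python
import numpy as np

# A toy 2x2 "object": unknown attenuation values, flattened to a vector.
true_mu = np.array([0.1, 0.4, 0.3, 0.2])

# Hypothetical system matrix: each row is one ray, summing the pixels
# it crosses (two row rays, two column rays, and one diagonal ray).
A = np.array([
    [1, 1, 0, 0],   # top row
    [0, 0, 1, 1],   # bottom row
    [1, 0, 1, 0],   # left column
    [0, 1, 0, 1],   # right column
    [1, 0, 0, 1],   # main diagonal
], dtype=float)

# Measured line sums = log(I0/I) along each ray (noise-free here).
p = A @ true_mu

# The most likely distribution under Gaussian noise is the
# least-squares solution; here it recovers true_mu exactly.
mu_hat, *_ = np.linalg.lstsq(A, p, rcond=None)
print(np.round(mu_hat, 6))
```

Real scanners solve the same kind of system with millions of rays and pixels, using filtered back-projection or iterative solvers rather than a dense least-squares solve.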

So if this problem of tomography was solved 40 years ago, with the main progress since then being the perfection of computer methods to compute and display the information as well as methods to improve its collection, why this book? Indeed, even artifacts from motion, such as respiration and heart motion, and from changes in the photon energy spectrum within the object have been dealt with. So why not use the X-ray computed tomography methods for imaging injected radionuclides, radiopharmaceuticals and fluorescent molecules? The most concise answer is that emission tomography has four unknowns for a six-parameter problem, whereas transmission tomography has only one unknown for a five-parameter problem.

In X-ray computed tomography the mathematical problem is to compute the amount of attenuation given the known input X-ray photons and the known number of output photons detected from the object at every angle or projection. In emission tomography, however, the problem is to compute the distribution of photon sources inside the body without knowing the intensity of the sources, their positions, the amount of attenuation, or the contamination of the detected photons by scattering elsewhere in the object. What is known is the number of photons that exit the object along a particular trajectory (see Table 0.1). Thus we must estimate the source strengths, their positions, and the attenuation of the emitted photons, as well as how many of the detected photons were scattered from multiple sources. To a mathematician this is an intractable problem (cf. Phys. Med. Biol. 19:387-389, 1974); nevertheless, it is solved by methods discussed in this book, along with methods to compensate for motion, partial volume, registration and other factors that influence detector performance.

It is refreshing to have a text on emission molecular imaging relevant to animals and human beings with an emphasis on those factors that detract from resolution and quantification. This book implicitly distinguishes between molecular imaging of emitters and molecular imaging provided by magnetic resonance techniques such as magnetic resonance spectroscopy, magnetic resonance imaging of hyperpolarized and other contrast agents, and other magnetic resonance methods wherein the response to an injected pattern of the radiofrequency field is measured. The mathematics of reconstructing an intrinsic emitter that is not stimulated by a known external probing signal is more complicated, as in the light and gamma-ray photon emitter reconstruction problems. The exception to this statement is fluorescence molecular tomography (FMT), wherein the behavior of the stimulating photons and that of the excited photons from injected fluorophores must both be incorporated in the reconstruction strategy (Chapter 12). Yet the benefits of emission tomography for molecular imaging of radionuclide emissions relative to other modalities lie in the exquisite sensitivity of radionuclide detection and the broad scales in time, space and object size served by SPECT and PET techniques, as well as their role in hybrid imaging systems that combine emission tomography with CT, MRI and FMT.

In sum, this book shows how researchers have overcome limitations in emission tomography noted 40 years ago and have brought the methods to the goal of high spatial resolution and quantification. Most importantly, these advances have enabled clinically useful applications not available to other diagnostic methods.