Fundamentals of Computerized Tomography


Book Description

This revised and updated second edition, now with two new chapters, is the only book to give a comprehensive overview of computer algorithms for image reconstruction. It covers the fundamentals of computerized tomography, including all the computational and mathematical procedures underlying data collection, image reconstruction and image display. Among the new topics covered are spiral CT, fully 3D positron emission tomography, the linogram mode of backprojection, and state-of-the-art 3D imaging results. The two new chapters cover the comparative statistical evaluation of 2D reconstruction algorithms and alternative approaches to image reconstruction.




Statistical Image Reconstruction for Quantitative Computed Tomography


Book Description

Statistical iterative reconstruction (SIR) algorithms for x-ray computed tomography (CT) have the potential to reconstruct images with less noise and systematic error than the conventional filtered backprojection (FBP) algorithm. More accurate reconstruction algorithms are important for reducing imaging dose and for a wide range of quantitative CT applications. The work presented herein investigates some potential advantages of one such statistically motivated algorithm called Alternating Minimization (AM). A simulation study is used to compare the tradeoff between noise and resolution in images reconstructed with the AM and FBP algorithms. The AM algorithm is employed with an edge-preserving penalty function, which is shown to result in images with contrast-dependent resolution. The AM algorithm always reconstructed images with less image noise than the FBP algorithm. Compared to previous studies in the literature, this is the first work to clearly illustrate that the reported noise advantage when using edge-preserving penalty functions can be highly dependent on the contrast of the object used for quantifying resolution.

A polyenergetic version of the AM algorithm, which incorporates knowledge of the scanner's x-ray spectrum, is then commissioned from data acquired on a commercially available CT scanner. Homogeneous cylinders are used to assess the absolute accuracy of the polyenergetic AM algorithm and to compare systematic errors to conventional FBP reconstruction. Methods to estimate the x-ray spectrum, model the bowtie filter and measure scattered radiation are outlined which support AM reconstruction to within 0.5% of the expected ground truth. The polyenergetic AM algorithm reconstructs the cylinders with less systematic error than FBP, in terms of better image uniformity and less object-size dependence.
Finally, the accuracy of a post-processing dual-energy CT (pDECT) method to non-invasively measure a material's photon cross-section information is investigated. Data is acquired on a commercial scanner for materials of known composition. Since the pDECT method has been shown to be highly sensitive to reconstructed image errors, both FBP and polyenergetic AM reconstruction are employed. Linear attenuation coefficients are estimated with residual errors of around 1% for energies of 30 keV to 1 MeV with errors rising to 3%-6% at lower energies down to 10 keV. In the ideal phantom geometry used here, the main advantage of AM reconstruction is less random cross-section uncertainty due to the improved noise performance.
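The edge-preserving penalty functions discussed above are quadratic for small neighbor differences and grow more slowly for large ones, which is what preserves edges while smoothing noise. A minimal 1-D sketch of one common choice, the Huber penalty, follows; this is an illustrative assumption, not necessarily the exact penalty used with the AM algorithm:

```python
import numpy as np

def huber_penalty(x, delta):
    """Edge-preserving Huber penalty on neighboring-pixel differences (1-D sketch).

    Quadratic for differences up to delta (smooths noise), linear beyond
    delta (preserves edges), which is what makes the penalty edge-preserving.
    """
    d = np.diff(x)                      # differences between neighbors
    small = np.abs(d) <= delta
    return np.sum(np.where(small,
                           0.5 * d ** 2,
                           delta * (np.abs(d) - 0.5 * delta)))
```

Because a sharp edge is penalized only linearly rather than quadratically, the effective smoothing varies with local contrast, consistent with the contrast-dependent resolution reported in the text.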




Image Reconstruction from Projections


Book Description

Contents: image reconstruction from projections; probability and random variables; an overview of the process of CT; physical problems associated with data collection in CT; computer simulation of data collection in CT; data collection and reconstruction of the head phantom under various assumptions; basic concepts of reconstruction algorithms; backprojection; convolution method for parallel beams; other transform methods for parallel beams; convolution methods for divergent beams; the algebraic reconstruction techniques; quadratic optimization methods; noniterative series expansion methods; truly three-dimensional reconstruction; three-dimensional display of organs; mathematical background.
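The backprojection step listed among the chapter topics smears each measured projection back across the image along its acquisition angle. A minimal parallel-beam sketch (an illustrative unfiltered backprojection with nearest-neighbor detector lookup, not code from the book):

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Unfiltered backprojection for parallel-beam geometry.

    sinogram:   (n_angles, n_detectors) array of line integrals
    angles_deg: projection angle in degrees for each sinogram row
    size:       side length of the square output image
    """
    recon = np.zeros((size, size))
    xs = np.arange(size) - size // 2          # pixel coordinates about center
    X, Y = np.meshgrid(xs, xs)
    n_det = sinogram.shape[1]
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # detector coordinate sampled by each pixel at this angle
        t = X * np.cos(theta) + Y * np.sin(theta)
        idx = np.clip(np.round(t).astype(int) + n_det // 2, 0, n_det - 1)
        recon += proj[idx]                    # smear the projection back
    return recon / len(angles_deg)            # average over angles
```

The convolution (filtered backprojection) method covered in the book differs only in first convolving each sinogram row with a ramp kernel before this smearing step, which removes the characteristic 1/r blur of the unfiltered version.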




Statistical Modeling and Path-based Iterative Reconstruction for X-ray Computed Tomography


Book Description

X-ray computed tomography (CT) and tomosynthesis systems have proven to be indispensable components in medical diagnosis and treatment. My research develops advanced image reconstruction and processing algorithms for CT and tomosynthesis systems.

Streak artifacts caused by metal objects such as dental fillings, surgical instruments, and orthopedic hardware may obscure important diagnostic information in X-ray CT images. To improve image quality, we proposed completing the missing kilovoltage (kV) projection data with selectively acquired megavoltage (MV) data that do not suffer from photon starvation. We developed two statistical image reconstruction methods for combining kV and selective MV data: dual-energy penalized weighted least squares and polychromatic maximum likelihood. The Cramér-Rao lower bound for the compound Poisson distribution was studied to refine the statistical model and minimize radiation dose. Numerical simulations and phantom studies have shown that the combined kV/MV imaging system enables better delineation of structures of interest in CT images of patients with metal objects.

The x-ray tube of a CT system produces a wide x-ray spectrum, so polychromatic statistical CT reconstruction is desirable for more accurate quantitative measurement of the chemical composition and density of tissue. Polychromatic statistical reconstruction algorithms usually have very high computational demands due to their complicated optimization frameworks and the large number of spectrum bins. We proposed a spectrum-information compression method and a new optimization framework that significantly reduce the computational cost of reconstruction. The new algorithm applies to multi-material beam-hardening correction, adaptive exposure control, and spectral imaging.

Model-based iterative reconstruction (MBIR) techniques have demonstrated many advantages in X-ray CT image reconstruction.
The MBIR approach is often formulated as a convex optimization problem comprising a data-fitting term and a penalty term. The tuning parameter that regulates the strength of the penalty is critical for achieving good reconstruction results but is difficult to choose. We have developed two path-seeking algorithms capable of generating a path of MBIR images with different penalty strengths. The errors of the proposed path-seeking algorithms remain reasonably small throughout the entire reconstruction path. Building on the efficient path-seeking algorithm, we propose path-based iterative reconstruction (PBIR) to obtain complete information from the scanned data and the reconstruction model. Additionally, we have developed a convolution-based blur-and-add model for digital tomosynthesis systems that can be used for efficient system analysis, task-dependent optimization, and filter design. We also proposed a computationally practical algorithm that simulates and subtracts out-of-plane artifacts in tomosynthesis images using patient-specific prior CT volumes.
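The convex objective described above, a weighted data-fit term plus a penalty whose strength is set by a tuning parameter, can be illustrated with a toy dense-matrix penalized weighted least-squares solver. This sketch uses a simple quadratic penalty and plain gradient descent purely for illustration; the function name and setup are assumptions, not the thesis's implementation:

```python
import numpy as np

def pwls_gd(A, y, w, beta, n_iter=200):
    """Penalized weighted least squares by gradient descent (toy dense version).

    Minimizes (y - A x)^T W (y - A x) + beta * ||x||^2: a statistically
    weighted data-fit term plus a quadratic penalty whose strength is set
    by the tuning parameter beta.
    """
    W = np.diag(w)
    H = A.T @ W @ A + beta * np.eye(A.shape[1])   # Hessian of the objective
    b = A.T @ W @ y
    step = 1.0 / np.linalg.eigvalsh(H).max()      # step size small enough to converge
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= step * (H @ x - b)                   # gradient step toward H x = b
    return x
```

Sweeping `beta` over a grid, warm-starting each solve from the previous solution, gives the kind of regularization path that the path-seeking algorithms in the text generate far more cheaply.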




3D Image Reconstruction for CT and PET


Book Description

This is a practical guide to tomographic image reconstruction from projection data, with a strong focus on Computed Tomography (CT) and Positron Emission Tomography (PET). Classic methods such as FBP, ART, SIRT, MLEM and OSEM are presented in modern and compact notation, with the main goal of guiding the reader from comprehension of the mathematical background through a fast route to real practice and computer implementation of the algorithms. Accompanied by example data sets, ready-to-run Python toolsets and scripts, and an overview of the latest research in the field, this guide will be invaluable for graduate students and early-career researchers and scientists in medical physics and biomedical engineering who are beginners in the field of image reconstruction.

- A top-down guide from theory to practical implementation of PET and CT reconstruction methods, without sacrificing the rigor of the mathematical background
- Accompanied by Python source code snippets, suggested exercises, and supplementary ready-to-run examples for readers to download from the CRC Press website
- Ideal for those taking their first steps in the real practice of image reconstruction, with a modern scientific programming language and toolsets

Daniele Panetta is a researcher at the Institute of Clinical Physiology of the Italian National Research Council (CNR-IFC) in Pisa. He earned his MSc degree in Physics in 2004 and his specialisation diploma in Health Physics in 2008, both at the University of Pisa. From 2005 to 2007, he worked at the Department of Physics "E. Fermi" of the University of Pisa in the field of tomographic image reconstruction for small-animal micro-CT instrumentation. His current research at CNR-IFC aims to identify novel PET/CT imaging biomarkers for cardiovascular and metabolic diseases. In the field of micro-CT imaging, his interests cover applications of three-dimensional morphometry of biosamples and scaffolds for regenerative medicine.
He acts as a reviewer for scientific journals in the field of medical imaging, including Physics in Medicine and Biology, Medical Physics, and Physica Medica. Since 2012, he has been an adjunct professor in Medical Physics at the University of Pisa.

Niccolò Camarlinghi is a researcher at the University of Pisa. He obtained his MSc in Physics in 2007 and his PhD in Applied Physics in 2012. He has worked in the field of Medical Physics since 2008; his main research fields are medical image analysis and image reconstruction. He is involved in the development of clinical and pre-clinical PET scanners and hadron-therapy monitoring scanners. At the time of writing this book he was a lecturer at the University of Pisa, teaching life-sciences and medical-physics laboratory courses. He regularly acts as a referee for the following journals: Medical Physics, Physics in Medicine and Biology, IEEE Transactions on Medical Imaging, Computers in Biology and Medicine, Physica Medica, EURASIP Journal on Image and Video Processing, and the Journal of Biomedical and Health Informatics.
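The MLEM algorithm covered by the book admits a compact dense-matrix sketch of its multiplicative update. The toy `mlem` function below is an illustrative assumption written for this summary, not the book's own Python toolset:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation-maximization (MLEM) for emission data.

    A: (n_measurements, n_voxels) system matrix; y: measured counts.
    Applies the multiplicative update x <- x * A^T(y / Ax) / A^T 1,
    which keeps the estimate non-negative at every iteration.
    """
    x = np.ones(A.shape[1])                 # strictly positive initial estimate
    sens = A.sum(axis=0)                    # sensitivity image, A^T 1
    sens = np.where(sens > 0, sens, 1.0)    # guard unseen voxels
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = np.divide(y, proj,          # measured / estimated counts,
                          out=np.zeros_like(y, dtype=float),
                          where=proj > 0)   # with safe handling of empty bins
        x *= (A.T @ ratio) / sens           # multiplicative EM update
    return x
```

OSEM, also covered in the book, applies the same update over ordered subsets of the rows of `A`, which greatly accelerates convergence in practice.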




Statistical Iterative Reconstruction and Dose Reduction in Multi-Slice Computed Tomography


Book Description

Computed tomography is one of the most important imaging methods in medical technology. Although computed tomography examinations make up only a small proportion of X-ray examinations, they contribute a large share of the population's radiation exposure from man-made sources. Statistical iterative reconstruction methods make it possible to reduce the mean radiation dose per examination. While these methods enable the modeling of physical imaging properties, the user is free to choose among numerous free parameters, and every parameterization decision influences the final image quality. This work examines, among other things, how the forward projection is modeled, as well as the influence of statistical weights and data redundancies in combination with various iterative reconstruction techniques. Several extensive studies were assembled that challenge these different combinations and push the models to their limits. Image quality was assessed using both basic and task-based quantitative metrics. The investigation shows that choosing iterative reconstruction parameters is not always trivial and must be understood comprehensively to obtain optimal image quality. Finally, a novel reconstruction algorithm, called FINESSE, is presented, which addresses some of the weaknesses of other reconstruction techniques.




Bayesian Iterative Reconstruction Methods for 3D X-ray Computed Tomography


Book Description

In industry, 3D X-ray computed tomography aims at virtually imaging a volume in order to inspect its interior. The virtual volume is obtained by a reconstruction algorithm applied to projections of X-rays sent through the industrial part under inspection. To compensate for uncertainties in the projections, such as scattering and beam hardening, which cause many artifacts in conventional filtered backprojection methods, iterative reconstruction methods bring in further information by enforcing a prior model on the volume to be reconstructed, and thereby enhance reconstruction quality. In this context, this thesis proposes new iterative reconstruction methods for the inspection of aeronautical parts made by the SAFRAN group.

To alleviate the computational cost of the repeated projection and backprojection operations that model the acquisition process, iterative reconstruction methods can benefit from highly parallel computing on graphics processing units (GPUs). This thesis details the GPU implementation of several projector-backprojector pairs; in particular, a new GPU implementation of the matched Separable Footprint pair is proposed. Since many of SAFRAN's industrial parts are piecewise-constant volumes, a Gauss-Markov-Potts prior model is introduced, from which a joint reconstruction and segmentation algorithm is derived. This algorithm is based on a Bayesian approach that makes it possible to explain the role of each parameter. The polychromaticity of X-rays, which is responsible for scattering and beam hardening, is taken into account by proposing an error-splitting forward model. Combined with the Gauss-Markov-Potts prior on the volume, this new forward model is experimentally shown to bring more accuracy and robustness. Finally, the estimation of the uncertainties in the reconstruction is investigated via a variational Bayesian approach; to keep the computation time reasonable, the use of a matched projector-backprojector pair is shown to be necessary.
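A backprojector is "matched" to a projector exactly when it is the projector's adjoint (for real matrices, its transpose), i.e. when the inner-product identity <Px, y> = <x, By> holds for all x and y. A toy check with a random dense system matrix (purely illustrative; the real operators in the thesis are matrix-free GPU kernels):

```python
import numpy as np

# Toy dense "projector" standing in for the GPU projection operator.
rng = np.random.default_rng(0)
A = rng.random((5, 4))       # projector P as an explicit matrix
x = rng.random(4)            # a small "volume"
y = rng.random(5)            # some "projection data"

# The backprojector B = P^T is matched to P: the adjoint identity
# <P x, y> = <x, B y> then holds for every x and y.
lhs = (A @ x) @ y            # <P x, y>
rhs = x @ (A.T @ y)          # <x, P^T y>
assert np.isclose(lhs, rhs)  # matched pair: equal up to rounding error
```

With an unmatched pair (for example, a voxel-driven projector paired with a pixel-driven backprojector) this identity fails, which is why matched pairs matter for the convergence of the iterative schemes discussed above.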