Statistical Modeling and Path-based Iterative Reconstruction for X-ray Computed Tomography


Book Description

X-ray computed tomography (CT) and tomosynthesis systems have proven to be indispensable components in medical diagnosis and treatment. My research develops advanced image reconstruction and processing algorithms for CT and tomosynthesis systems.

Streak artifacts caused by metal objects such as dental fillings, surgical instruments, and orthopedic hardware may obscure important diagnostic information in CT images. To improve image quality, we proposed to complete the missing kilovoltage (kV) projection data with selectively acquired megavoltage (MV) data that do not suffer from photon starvation. We developed two statistical image reconstruction methods, dual-energy penalized weighted least squares and polychromatic maximum likelihood, for combining kV and selective MV data. The Cramér-Rao lower bound for compound Poisson statistics was studied to refine the statistical model and minimize radiation dose. Numerical simulations and phantom studies have shown that the combined kV/MV imaging system enables better delineation of structures of interest in CT images of patients with metal objects.

The x-ray tube of a CT system produces a wide x-ray spectrum, so polychromatic statistical CT reconstruction is desirable for more accurate quantitative measurement of the chemical composition and density of tissue. Polychromatic statistical reconstruction algorithms usually have very high computational demands due to their complicated optimization frameworks and the large number of spectrum bins. We proposed a spectrum-information compression method and a new optimization framework that significantly reduce the computational cost of reconstruction. The new algorithm applies to multi-material beam-hardening correction, adaptive exposure control, and spectral imaging.

Model-based iterative reconstruction (MBIR) techniques have demonstrated many advantages in X-ray CT image reconstruction.
The MBIR approach is often formulated as a convex optimization problem comprising a data-fitting function and a penalty function. The tuning parameter that regulates the strength of the penalty is critical for achieving good reconstruction results but is difficult to choose. We developed two path-seeking algorithms capable of generating a path of MBIR images with different penalty strengths; their errors remain reasonably small throughout the entire reconstruction path. Building on the efficient path-seeking algorithm, we propose a path-based iterative reconstruction (PBIR) that extracts complete information from the scanned data and the reconstruction model. Additionally, we developed a convolution-based blur-and-add model for digital tomosynthesis systems that can be used for efficient system analysis, task-dependent optimization, and filter design. We also proposed a computationally practical algorithm to simulate and subtract out-of-plane artifacts in tomosynthesis images using patient-specific prior CT volumes.
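The path-seeking idea above, sweeping the penalty strength of a convex MBIR objective, can be sketched on a toy problem. The following is an illustrative Python sketch, not the dissertation's algorithm: the system matrix, the quadratic penalty, and the solver (warm-started gradient descent) are all simplifying assumptions.

```python
import numpy as np

# Toy MBIR-style objective:  x(beta) = argmin_x ||A x - y||_W^2 + beta * R(x),
# with R(x) = ||x||^2 for simplicity. A "path" of reconstructions is obtained
# by sweeping beta with warm starts, so each solve begins from the previous one.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))            # toy system matrix
x_true = rng.normal(size=10)
y = A @ x_true + 0.05 * rng.normal(size=40)
W = np.eye(40)                           # statistical weights (identity here)

def solve_pwls(beta, x0, n_iter=500, step=1e-3):
    """Gradient descent on the penalized weighted least-squares objective."""
    x = x0.copy()
    for _ in range(n_iter):
        grad = 2 * A.T @ W @ (A @ x - y) + 2 * beta * x
        x -= step * grad
    return x

path = []
x = np.zeros(10)
for beta in [0.01, 0.1, 1.0, 10.0]:      # increasing penalty strength
    x = solve_pwls(beta, x)              # warm start from the previous beta
    path.append(x.copy())

# Stronger regularization shrinks the solution toward zero.
norms = [np.linalg.norm(xi) for xi in path]
```

The warm starts are what make traversing the whole path cheap: each solve starts close to its solution because neighboring penalty strengths yield neighboring reconstructions.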




Statistical Iterative Reconstruction and Dose Reduction in Multi-Slice Computed Tomography


Book Description

Computed tomography is one of the most important imaging methods in medical technology. Although CT examinations make up only a small proportion of X-ray examinations, they contribute a large share of the man-made radiation exposure of the population. Statistical iterative reconstruction methods make it possible to reduce the mean radiation dose per examination. While these methods enable the modeling of physical imaging properties, they also leave the user free to choose numerous parameters, and every parameterization decision influences the final image quality. This work examines, among other topics, the modeling of the forward projection as well as the influence of statistical weights and data redundancies in interaction with various iterative reconstruction techniques. Several extensive studies were conducted that test these different combinations and push the models to their limits. Image quality was assessed using both basic and task-based quantitative metrics. The investigation shows that choosing iterative reconstruction parameters is not always trivial, and the parameters must be understood comprehensively to obtain optimal image quality. Finally, a novel reconstruction algorithm, called FINESSE, is presented, which remedies some of the weaknesses of other reconstruction techniques.
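To illustrate what "statistical weights" typically mean in this setting, here is a short sketch of the standard Poisson-noise weighting used in weighted least-squares CT reconstruction. This is a textbook model with made-up numbers, not necessarily the parameterization studied in this work.

```python
import numpy as np

# For transmission data with measured photon counts N_i, the variance of the
# log-transformed line integral y_i = ln(N0 / N_i) is approximately 1/N_i,
# so a common statistical weight is w_i proportional to N_i.
N0 = 1e5                                   # unattenuated photon count
counts = np.array([9.0e4, 5.0e4, 1.0e3])   # measured counts per ray (toy values)
y = np.log(N0 / counts)                    # log line integrals
w = counts / counts.max()                  # normalized statistical weights

# The photon-starved third ray (strong attenuation, noisy measurement) gets a
# much smaller weight, so the data-fit term trusts it less.
```

The interaction of such weights with data redundancies (rays measured more than once) is exactly the kind of parameterization choice the studies in this work probe.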




Fundamentals of Computerized Tomography


Book Description

This revised and updated second edition, now with two new chapters, is the only book to give a comprehensive overview of computer algorithms for image reconstruction. It covers the fundamentals of computerized tomography, including all the computational and mathematical procedures underlying data collection, image reconstruction, and image display. Among the new topics covered are spiral CT, fully 3D positron emission tomography, the linogram mode of backprojection, and state-of-the-art 3D imaging results. It also includes two new chapters on comparative statistical evaluation of the 2D reconstruction algorithms and alternative approaches to image reconstruction.




Statistical Image Reconstruction for Quantitative Computed Tomography


Book Description

Statistical iterative reconstruction (SIR) algorithms for x-ray computed tomography (CT) have the potential to reconstruct images with less noise and systematic error than the conventional filtered backprojection (FBP) algorithm. More accurate reconstruction algorithms are important for reducing imaging dose and for a wide range of quantitative CT applications. The work presented herein investigates some potential advantages of one such statistically motivated algorithm called alternating minimization (AM). A simulation study is used to compare the tradeoff between noise and resolution in images reconstructed with the AM and FBP algorithms. The AM algorithm is employed with an edge-preserving penalty function, which is shown to result in images with contrast-dependent resolution. The AM algorithm always reconstructed images with less image noise than the FBP algorithm. Compared to previous studies in the literature, this is the first work to clearly illustrate that the reported noise advantage when using edge-preserving penalty functions can depend strongly on the contrast of the object used for quantifying resolution. A polyenergetic version of the AM algorithm, which incorporates knowledge of the scanner's x-ray spectrum, is then commissioned using data acquired on a commercially available CT scanner. Homogeneous cylinders are used to assess the absolute accuracy of the polyenergetic AM algorithm and to compare systematic errors to conventional FBP reconstruction. Methods to estimate the x-ray spectrum, model the bowtie filter, and measure scattered radiation are outlined; together they support AM reconstruction to within 0.5% of the expected ground truth. The polyenergetic AM algorithm reconstructs the cylinders with less systematic error than FBP, in terms of better image uniformity and less object-size dependence.
Finally, the accuracy of a post-processing dual-energy CT (pDECT) method to non-invasively measure a material's photon cross-section information is investigated. Data is acquired on a commercial scanner for materials of known composition. Since the pDECT method has been shown to be highly sensitive to reconstructed image errors, both FBP and polyenergetic AM reconstruction are employed. Linear attenuation coefficients are estimated with residual errors of around 1% for energies of 30 keV to 1 MeV with errors rising to 3%-6% at lower energies down to 10 keV. In the ideal phantom geometry used here, the main advantage of AM reconstruction is less random cross-section uncertainty due to the improved noise performance.
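The dual-energy idea above rests on modeling a material's attenuation as a combination of basis materials. The following Python sketch shows the textbook two-material formulation with illustrative attenuation values; it is not the dissertation's pDECT method or its calibration data.

```python
import numpy as np

# Basis-material model: mu(E) = a1*mu_w(E) + a2*mu_b(E) for water and a
# bone-like basis. Measurements at two energies give a 2x2 system for
# (a1, a2); mu can then be synthesized at other energies.
mu_w = {50: 0.227, 80: 0.184, 150: 0.151}   # water, cm^-1 (approximate)
mu_b = {50: 0.720, 80: 0.428, 150: 0.286}   # bone-like basis, cm^-1 (illustrative)

mu_meas = np.array([0.40, 0.27])            # "measured" mu at 50 and 80 keV (toy)

B = np.array([[mu_w[50], mu_b[50]],
              [mu_w[80], mu_b[80]]])
a1, a2 = np.linalg.solve(B, mu_meas)        # basis coefficients

# Extrapolate the cross-section information to a third energy.
mu_150 = a1 * mu_w[150] + a2 * mu_b[150]
```

Because the 2x2 system amplifies measurement noise, the decomposition is highly sensitive to errors in the reconstructed attenuation values, which is why the text compares FBP against the polyenergetic AM reconstruction here.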




3D Image Reconstruction for CT and PET


Book Description

This is a practical guide to tomographic image reconstruction with projection data, with a strong focus on Computed Tomography (CT) and Positron Emission Tomography (PET). Classic methods such as FBP, ART, SIRT, MLEM and OSEM are presented in modern and compact notation, with the main goal of guiding the reader from the comprehension of the mathematical background through a fast route to real practice and computer implementation of the algorithms. Accompanied by example data sets, ready-to-run Python toolsets and scripts, and an overview of the latest research in the field, this guide will be invaluable for graduate students and early-career researchers and scientists in medical physics and biomedical engineering who are beginners in the field of image reconstruction.

- A top-down guide from theory to practical implementation of PET and CT reconstruction methods, without sacrificing the rigor of the mathematical background
- Accompanied by Python source code snippets, suggested exercises, and supplementary ready-to-run examples for readers to download from the CRC Press website
- Ideal for those willing to take their first steps in the real practice of image reconstruction, with a modern scientific programming language and toolsets

Daniele Panetta is a researcher at the Institute of Clinical Physiology of the Italian National Research Council (CNR-IFC) in Pisa. He earned his MSc degree in Physics in 2004 and his specialisation diploma in Health Physics in 2008, both at the University of Pisa. From 2005 to 2007, he worked at the Department of Physics "E. Fermi" of the University of Pisa in the field of tomographic image reconstruction for small-animal micro-CT instrumentation. His current research at CNR-IFC aims to identify novel PET/CT imaging biomarkers for cardiovascular and metabolic diseases. In the field of micro-CT imaging, his interests cover applications of three-dimensional morphometry of biosamples and scaffolds for regenerative medicine.
He acts as a reviewer for scientific journals in the field of medical imaging: Physics in Medicine and Biology, Medical Physics, Physica Medica, and others. Since 2012, he has been an adjunct professor in Medical Physics at the University of Pisa.

Niccolò Camarlinghi is a researcher at the University of Pisa. He obtained his MSc in Physics in 2007 and his PhD in Applied Physics in 2012. He has been working in the field of Medical Physics since 2008, and his main research fields are medical image analysis and image reconstruction. He is involved in the development of clinical and pre-clinical PET scanners and hadron-therapy monitoring scanners. At the time of writing this book he was a lecturer at the University of Pisa, teaching life-sciences and medical-physics laboratory courses. He regularly acts as a referee for the following journals: Medical Physics, Physics in Medicine and Biology, IEEE Transactions on Medical Imaging, Computers in Biology and Medicine, Physica Medica, EURASIP Journal on Image and Video Processing, and Journal of Biomedical and Health Informatics.
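As a taste of the classic algorithms listed in the description above, here is a generic MLEM iteration in NumPy. This is a textbook implementation on a toy problem, under assumed dimensions and data, not the authors' accompanying Python toolset.

```python
import numpy as np

# MLEM update for Poisson-distributed projection data y with system matrix A:
#   x  <-  x / (A^T 1) * A^T ( y / (A x) )
# Each iteration is multiplicative, keeps x nonnegative, and does not
# decrease the Poisson log-likelihood.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(60, 16))    # toy nonnegative system matrix
x_true = rng.uniform(0.5, 2.0, size=16)     # toy activity distribution
y = rng.poisson(A @ x_true).astype(float)   # noisy projection data

x = np.ones(16)                             # strictly positive initial image
sens = A.T @ np.ones(60)                    # sensitivity image A^T 1
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)    # guard against division by zero
    x *= (A.T @ ratio) / sens               # multiplicative MLEM update
```

OSEM follows the same pattern but applies the update using ordered subsets of the rows of A, trading a little stability for a large speedup per pass over the data.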




Modeling and Development of Iterative Reconstruction Algorithms in Emerging X-ray Imaging Technologies


Book Description

Many promising new X-ray-based biomedical imaging technologies have emerged over the last two decades. Five novel X-ray-based imaging technologies are discussed in this dissertation: differential phase-contrast tomography (DPCT), grating-based phase-contrast tomography (GB-PCT), spectral CT (K-edge imaging), cone-beam computed tomography (CBCT), and in-line X-ray phase-contrast (XPC) tomosynthesis. For each imaging modality, one or more specific problems that prevent it from being effectively or efficiently employed in clinical applications are discussed. Firstly, to mitigate the long data-acquisition times and large radiation doses associated with the use of analytic reconstruction methods in DPCT, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction. Secondly, to improve image quality in grating-based phase-contrast tomography, we incorporate second-order statistical properties of the object-property sinograms, including the correlations between them, into an advanced multi-channel (MC) image reconstruction algorithm that reconstructs three object properties simultaneously. We developed an algorithm based on the proximal point algorithm and the augmented Lagrangian method to rapidly solve the MC reconstruction problem. Thirdly, to mitigate image artifacts that arise from reduced-view and/or noisy decomposed sinogram data in K-edge imaging, we exploited the inherent sparseness of typical K-edge objects and incorporated the statistical properties of the decomposed sinograms to formulate two penalized weighted least-squares (PWLS) problems: one with a total variation (TV) penalty, and one with a weighted sum of a TV penalty and an l1-norm penalty with a wavelet sparsifying transform. We employed a fast iterative shrinkage/thresholding algorithm (FISTA) and a splitting-based FISTA to solve these two PWLS problems.
Fourthly, to enable advanced iterative algorithms to deliver better diagnostic images and accurate patient-positioning information for CBCT in image-guided radiation therapy within a few minutes, two accelerated variants of FISTA for PLS-based image reconstruction are proposed. The acceleration is obtained by replacing the original gradient-descent step with a sub-problem solved using the ordered-subset concept (OS-SART). In addition, we present efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units (GPUs). Finally, we employed our accelerated version of FISTA to deal with the incomplete (and often noisy) data inherent to in-line XPC tomosynthesis, which combines the concepts of tomosynthesis and in-line XPC imaging to exploit the advantages of both for biological imaging applications. We also investigate the depth-resolution properties of XPC tomosynthesis and demonstrate that its z-resolution is superior to that of conventional absorption-based tomosynthesis. To validate all these proposed strategies and new algorithms across the different imaging modalities, we conducted computer-simulation studies and studies with real experimental data. The proposed reconstruction methods will facilitate the clinical or preclinical translation of these emerging imaging methods.
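The FISTA scheme that recurs throughout the description can be sketched generically for an l1-penalized least-squares problem. This is the standard Beck-Teboulle algorithm on a toy sparse-recovery problem, not the accelerated multi-GPU variants developed in the dissertation.

```python
import numpy as np

# FISTA for:  min_x 0.5*||A x - y||^2 + lam*||x||_1
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 4.0]    # sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=50)
lam = 0.5
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(100)
z = x.copy()
t = 1.0
for _ in range(300):
    x_new = soft(z - (A.T @ (A @ z - y)) / L, lam / L)   # proximal gradient step
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = x_new + ((t - 1) / t_new) * (x_new - x)          # momentum extrapolation
    x, t = x_new, t_new
```

The momentum extrapolation on `z` is what distinguishes FISTA from plain ISTA; the OS-SART-based variants described above replace the gradient step with an ordered-subset sub-problem to accelerate convergence further.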




Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine


Book Description

This book contains a selection of communications presented at the Third International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, held 4-6 July 1995 at the Domaine d'Aix-Marlioz, Aix-les-Bains, France. This pleasant resort provided an inspiring environment for discussions and presentations on new and developing issues. Roentgen discovered X-ray radiation in 1895 and Becquerel found natural radioactivity in 1896: a hundred years later, this conference focused on the applications of such radiations to explore the human body. While the physics is now fully understood, 3D imaging techniques based on ionising radiations are still progressing. These techniques include 3D Radiology, 3D X-ray Computed Tomography (3D-CT), Single Photon Emission Computed Tomography (SPECT), and Positron Emission Tomography (PET). Radiology is dedicated to morphological imaging, using radiation transmitted from an external X-ray source, and nuclear medicine to functional imaging, using radiation emitted from an internal radioactive tracer. In both cases, new 3D tomographic systems tend to use 2D detectors in order to improve radiation-detection efficiency. Taking a set of 2D acquisitions around the patient yields 3D acquisitions; fully 3D image reconstruction algorithms are then required to recover the 3D image of the body from these projection measurements.