Development and Implementation of Fully 3D Statistical Image Reconstruction Algorithms for Helical CT and Half-ring PET Insert System


Book Description

X-ray computed tomography (CT) and positron emission tomography (PET) have become widely used imaging modalities for screening, diagnosis, and image-guided treatment planning. Along with the increased clinical use come increased demands for high image quality with reduced ionizing radiation dose to the patient. Despite their high computational cost, statistical iterative reconstruction algorithms are known to reconstruct high-quality images from noisy tomographic datasets. The overall goal of this work is to design statistical reconstruction software for clinical x-ray CT scanners, and for a novel PET system that utilizes high-resolution detectors within the field of view of a whole-body PET scanner. The complex choices involved in the development and implementation of image reconstruction algorithms are fundamentally linked to the ways in which the data are acquired, and they require detailed knowledge of the various sources of signal degradation. Each of the imaging modalities investigated in this work presents its own set of challenges. However, by utilizing an underlying statistical model for the measured data, we are able to use a common framework for this class of tomographic problems. We first present the details of a new fully 3D regularized statistical reconstruction algorithm for multislice helical CT. To reduce the computation time, the algorithm was carefully parallelized by identifying and exploiting the specific symmetries found in helical CT. Basic image quality measures were evaluated using measured phantom and clinical datasets, and they indicate that our algorithm achieves performance comparable or superior to that of the fast analytical methods considered in this work. Next, we present our fully 3D reconstruction efforts for a high-resolution half-ring PET insert. We found that this unusual geometry requires extensive redevelopment of existing reconstruction methods in PET. 
We redesigned the major components of the data modeling process and incorporated them into our reconstruction algorithms. The algorithms were tested using simulated Monte Carlo data and phantom data acquired with a prototype PET insert system. Overall, we have developed new, computationally efficient methods for performing fully 3D statistical reconstructions on clinically sized datasets.
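The common statistical framework described above is typically posed as a penalized weighted least-squares (PWLS) problem. As a minimal sketch only (the dense system matrix, uniform weights, and simple quadratic penalty below are toy stand-ins, not this work's regularized helical-CT algorithm), a gradient-descent PWLS solver might look like:

```python
import numpy as np

def pwls(A, y, w, beta=0.1, n_iters=500):
    """Penalized weighted least squares by gradient descent.

    Minimizes 0.5*(y - A@x)' W (y - A@x) + 0.5*beta*||x||^2,
    where W = diag(w) carries per-measurement statistical weights.
    """
    x = np.zeros(A.shape[1])
    # Safe step size: reciprocal of a Lipschitz bound on the gradient.
    L = np.linalg.norm(A.T @ (w[:, None] * A), 2) + beta
    step = 1.0 / L
    for _ in range(n_iters):
        grad = A.T @ (w * (A @ x - y)) + beta * x
        x = x - step * grad
    return x
```

In practice the weights come from the measurement noise model, and the quadratic penalty would be replaced by an edge-preserving regularizer over neighboring voxels.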




3D Image Reconstruction for CT and PET


Book Description

This is a practical guide to tomographic image reconstruction from projection data, with a strong focus on computed tomography (CT) and positron emission tomography (PET). Classic methods such as FBP, ART, SIRT, MLEM, and OSEM are presented in modern and compact notation, with the main goal of guiding the reader from an understanding of the mathematical background along a fast route to real practice and computer implementation of the algorithms. Accompanied by example data sets, ready-to-run Python toolsets and scripts, and an overview of the latest research in the field, this guide will be invaluable for graduate students and early-career researchers and scientists in medical physics and biomedical engineering who are beginners in the field of image reconstruction. It is a top-down guide from theory to practical implementation of PET and CT reconstruction methods that does not sacrifice the rigor of the mathematical background; it is accompanied by Python source code snippets, suggested exercises, and supplementary ready-to-run examples for readers to download from the CRC Press website; and it is ideal for those taking their first steps in the real practice of image reconstruction with a modern scientific programming language and toolsets. Daniele Panetta is a researcher at the Institute of Clinical Physiology of the Italian National Research Council (CNR-IFC) in Pisa. He earned his MSc degree in Physics in 2004 and his specialisation diploma in Health Physics in 2008, both at the University of Pisa. From 2005 to 2007, he worked at the Department of Physics "E. Fermi" of the University of Pisa in the field of tomographic image reconstruction for small-animal micro-CT instrumentation. His current research at CNR-IFC aims to identify novel PET/CT imaging biomarkers for cardiovascular and metabolic diseases. In the field of micro-CT imaging, his interests cover applications of three-dimensional morphometry of biosamples and scaffolds for regenerative medicine. 
He acts as a reviewer for scientific journals in the field of medical imaging: Physics in Medicine and Biology, Medical Physics, Physica Medica, and others. Since 2012, he has been an adjunct professor of Medical Physics at the University of Pisa. Niccolò Camarlinghi is a researcher at the University of Pisa. He obtained his MSc in Physics in 2007 and his PhD in Applied Physics in 2012. He has been working in the field of Medical Physics since 2008, and his main research fields are medical image analysis and image reconstruction. He is involved in the development of clinical and pre-clinical PET scanners and of hadron-therapy monitoring scanners. At the time of writing this book, he was a lecturer at the University of Pisa, teaching courses in life sciences and the medical physics laboratory. He regularly acts as a referee for the following journals: Medical Physics, Physics in Medicine and Biology, Transactions on Medical Imaging, Computers in Biology and Medicine, Physica Medica, EURASIP Journal on Image and Video Processing, and Journal of Biomedical and Health Informatics.
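Of the classic algorithms the book covers, MLEM is the most compact to state: each voxel is multiplicatively updated by the backprojected ratio of measured to estimated projections. A minimal NumPy sketch (an illustration under simplified assumptions, not the book's accompanying toolset; the dense system matrix is a placeholder):

```python
import numpy as np

def mlem(A, y, n_iters=200, eps=1e-12):
    """Maximum-Likelihood Expectation-Maximization for emission data.

    A: system matrix (bins x voxels), y: measured counts (nonnegative).
    """
    sens = A.sum(axis=0)             # sensitivity image, A^T 1
    x = np.ones(A.shape[1])          # flat nonnegative initial estimate
    for _ in range(n_iters):
        proj = A @ x                 # forward projection
        ratio = y / np.maximum(proj, eps)
        x = x * (A.T @ ratio) / np.maximum(sens, eps)
    return x
```

OSEM applies the same update restricted to subsets of the projection bins in turn, trading strict monotonicity of the likelihood for a substantial speedup.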




Statistical Modeling and Path-based Iterative Reconstruction for X-ray Computed Tomography


Book Description

X-ray computed tomography (CT) and tomosynthesis systems have proven to be indispensable components of medical diagnosis and treatment. My research develops advanced image reconstruction and processing algorithms for CT and tomosynthesis systems. Streak artifacts caused by metal objects such as dental fillings, surgical instruments, and orthopedic hardware may obscure important diagnostic information in CT images. To improve image quality, we proposed to complete the missing kilovoltage (kV) projection data with selectively acquired megavoltage (MV) data that do not suffer from photon starvation. We developed two statistical image reconstruction methods, dual-energy penalized weighted least squares and polychromatic maximum likelihood, for combining kV and selective MV data. The Cramér-Rao lower bound for compound Poisson statistics was studied to revise the statistical model and minimize radiation dose. Numerical simulations and phantom studies have shown that the combined kV/MV imaging systems enable better delineation of structures of interest in CT images of patients with metal objects. The x-ray tube of a CT system produces a wide x-ray spectrum, so polychromatic statistical CT reconstruction is desirable for more accurate quantitative measurement of the chemical composition and density of tissue. Polychromatic statistical reconstruction algorithms usually have very high computational demands due to complicated optimization frameworks and the large number of spectrum bins. We proposed a spectrum information compression method and a new optimization framework that significantly reduce the computational cost of reconstruction. The new algorithm applies to multi-material beam-hardening correction, adaptive exposure control, and spectral imaging. Model-based iterative reconstruction (MBIR) techniques have demonstrated many advantages in X-ray CT image reconstruction. 
The MBIR approach is often formulated as a convex optimization problem comprising a data-fitting term and a penalty term. The tuning parameter that regulates the strength of the penalty is critical for achieving good reconstruction results but is difficult to choose. We have developed two path seeking algorithms capable of generating a path of MBIR images with different penalty strengths. The errors of the proposed path seeking algorithms are reasonably small throughout the entire reconstruction path. Building on the efficient path seeking algorithm, we proposed path-based iterative reconstruction (PBIR) to obtain complete information from the scanned data and the reconstruction model. Additionally, we have developed a convolution-based blur-and-add model for digital tomosynthesis systems that can be used for efficient system analysis, task-dependent optimization, and filter design. We also proposed a computationally practical algorithm to simulate and subtract out-of-plane artifacts in tomosynthesis images using patient-specific prior CT volumes.
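The idea of a penalty-strength path can be pictured with a simple warm-started sweep. This is a naive stand-in for the dissertation's path seeking algorithms (quadratic penalty, toy dense data, plain gradient descent), but it shows the payoff: because each solution seeds the next, the whole path costs little more than a single cold-start reconstruction.

```python
import numpy as np

def penalty_path(A, y, betas, n_iters=2000):
    """Warm-started sweep of quadratic-penalty least-squares solutions.

    Minimizes 0.5*||A@x - y||^2 + 0.5*beta*||x||^2 for each beta,
    starting every run from the previous solution.
    """
    L0 = np.linalg.norm(A.T @ A, 2)   # Lipschitz bound for the data term
    x = np.zeros(A.shape[1])
    path = []
    for beta in betas:
        step = 1.0 / (L0 + beta)
        for _ in range(n_iters):
            grad = A.T @ (A @ x - y) + beta * x
            x = x - step * grad
        path.append(x.copy())
    return path
```

As the penalty strength grows, the reconstructions are pulled toward the regularizer's favored solutions; here, with a quadratic penalty, the norm of each successive image shrinks.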



Programs for Evaluation of 3D PET Reconstruction Algorithms


Book Description

Evaluation of a reconstruction algorithm should be done using a sample set that is large enough to provide a statistically significant result. One way to carry out such an evaluation is to use a set of computer-simulated phantoms that takes parameter variability into account. This technical report describes in detail programs that generate a set of 3D phantoms and projection data, reconstruct images, evaluate them, and compare the results. The main characteristics are:
(1) Phantom and projection data generation: (a) phantoms with many (69) ellipsoidal features, ranging from small (4 mm) to large (40 mm); (b) phantoms drawn as random samples from a statistically described ensemble of 3D images resembling those to which PET would be applied in a medical situation (features with random size, orientation, and activity); (c) features placed inside spheres that provide the background value for important clinical tasks such as detectability; (d) feature types: hot, cold, and normal spots; and (e) emulation of a 3D PET scanner for projection data generation, with detector field-of-view (FOV) blurring and a realistic 3D PET noise model.
(2) Reconstruction algorithms: (a) the Algebraic Reconstruction Technique using blobs as basis functions (ARTblob); (b) ART using voxels (ARTvox); (c) EM-ML using blobs (EMblob); and (d) EM-ML using voxels (EMvox).
(3) Evaluation for the following tasks: (a) training figure of merit (FOM); (b) structural accuracy; (c) hot spot detectability; and (d) cold spot detectability.
(4) Statistical comparison using the paired t-test.
Justifications for the models used are given in the report, together with an application evaluating some reconstruction methods.
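The paired comparison in item (4) reduces to a standard paired t statistic over per-phantom figures of merit. A small NumPy illustration (not the report's actual programs; the FOM values in the usage below are synthetic):

```python
import numpy as np

def paired_t_statistic(fom_a, fom_b):
    """Paired t statistic for per-phantom FOMs of two algorithms.

    A positive value favors algorithm A; compare |t| against the
    critical value of Student's t with (n - 1) degrees of freedom.
    """
    d = np.asarray(fom_a, float) - np.asarray(fom_b, float)
    n = d.size
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))
```

For example, with n = 10 phantoms the two-sided 5% critical value is about 2.262, so |t| above that threshold indicates a statistically significant difference between the two algorithms on that task.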




Evaluation of State-of-the-Art Hardware Architectures for Fast Cone-Beam CT Reconstruction


Book Description

Holger Scherl introduces the reader to the reconstruction problem in computed tomography and its major scientific challenges, which range from computational efficiency to the fulfillment of Tuy's sufficiency condition. The assessed hardware architectures include multi- and many-core systems, the Cell Broadband Engine architecture, graphics processing units, and field-programmable gate arrays.




Residual Correction Algorithms for Statistical Image Reconstruction in Positron Emission Tomography


Book Description

Positron emission tomography (PET) is a radionuclide imaging modality that plays important roles in visualizing, targeting, and quantifying functional processes in vivo. High-resolution, quantitative PET images are reconstructed by solving large-scale inverse problems with iterative methods that incorporate accurate physics and noise modeling of the imaging process. The computational demands of PET image reconstruction are rapidly increasing as higher-resolution detectors, larger imaging fields of view, and dynamic or adaptive data acquisition modes are adopted by modern PET scanners. This growth in computational demand is even faster than Moore's law, which describes the exponential growth in the number of transistors placed on an integrated circuit. In this project, a residual correction mechanism is introduced into PET image reconstruction to create computationally efficient yet accurate tomographic reconstruction algorithms. With residual correction, reconstruction methods can adopt a simplified physical model for fast computation while retaining the accuracy of the final solution. Residual correction can accelerate existing image reconstruction packages, and it allows iterative reconstruction with more accurate physical models that are currently impractical due to their high computational cost. Two illustrative applications of the residual correction approach are provided: image reconstruction with an object-dependent Monte Carlo based physics model, and image reconstruction using an ultra-fast GPU-accelerated simplified geometric model.
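One simple reading of this idea, in the spirit of classical defect correction (a sketch under stated assumptions, not the project's actual algorithm), is: apply the accurate, expensive model only once per outer iteration to form a residual, and do all inner solves with the cheap approximate model. Here small dense matrices stand in for the Monte Carlo and simplified geometric models mentioned above.

```python
import numpy as np

def residual_corrected_solve(A_acc, A_fast, y, n_outer=50):
    """Defect-correction iteration: x += A_fast^+ (y - A_acc @ x).

    A_acc: accurate but expensive forward model, applied once per
    outer iteration to form the residual.
    A_fast: cheap approximate model used for every inner solve.
    Converges when A_fast approximates A_acc well enough.
    """
    x = np.zeros(A_acc.shape[1])
    for _ in range(n_outer):
        r = y - A_acc @ x                                # accurate residual
        dx, *_ = np.linalg.lstsq(A_fast, r, rcond=None)  # cheap inner solve
        x = x + dx
    return x
```

The fixed point satisfies the accurate model's equations even though the inner solver never inverts the accurate model, which is why the scheme can keep the accuracy of the final solution while doing most of its work with the simplified model.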