Segmentation of Lesions from Breast Ultrasound Images Using Deep Convolutional Neural Network


Book Description

To diagnose breast cancer, a radiologist currently uses a computer-aided diagnosis system that requires preselecting a region of interest (ROI) as input for analysis. The Breast Imaging Reporting and Data System (BI-RADS) is a standardized reporting process that categorizes breast lesions based on several of their features. Because the BI-RADS assessment is based on ultrasound images, the quality of the diagnosis depends heavily on the physician's experience. To minimize human error, we propose solutions based on densely connected deep convolutional neural networks. This thesis discusses networks based on the U-Net architecture, DenseNet, attention gates, and Mask R-CNN for semantic segmentation of lesions in breast ultrasound (BUS) images, which are usually noisy and contaminated with speckle. First, the regular convolution blocks inside the U-Net are replaced by dense blocks (U-DenseNet) to support the learning of the intricate patterns of BUS images; this yields better performance than the plain U-Net, with an F-score of 0.63. Second, attention gates are used in conjunction with U-DenseNet (Attention U-DenseNet) to eliminate the need for an explicit localization module, improving the F-score substantially to 0.75. Third, the Attention U-DenseNet is used as the backbone of a Mask R-CNN architecture, which achieves an F-score of 0.76. Finally, a per-image weighted binary cross-entropy loss function is employed, since the region of interest is usually small relative to the image.
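The per-image weighting idea can be sketched as follows: weight the positive (lesion) class by the inverse of the lesion's fractional area in each image, so that images with small ROIs still contribute a strong loss signal. This is an illustrative NumPy formulation under assumed conventions, not the thesis's exact loss; the function name and weighting scheme are choices made for the sketch.

```python
import numpy as np

def weighted_bce_per_image(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy with a per-image positive-class weight.

    Lesion pixels are weighted by the inverse of the lesion's fractional
    area, so images with small ROIs are not drowned out by background.
    Illustrative formulation only, not the thesis's exact loss.
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    losses = []
    for t, p in zip(y_true, y_pred):        # iterate over the batch
        pos_frac = max(t.mean(), eps)       # fraction of lesion pixels
        w_pos = 1.0 / pos_frac              # rarer lesion -> larger weight
        loss = -(w_pos * t * np.log(p) + (1 - t) * np.log(1 - p))
        losses.append(loss.mean())
    return float(np.mean(losses))

# Toy example: a 4x4 mask with a single lesion pixel, uniform 0.5 predictions.
mask = np.zeros((1, 4, 4)); mask[0, 1, 1] = 1.0
pred = np.full((1, 4, 4), 0.5)
print(weighted_bce_per_image(mask, pred))
```

In a training framework the same weighting would typically be folded into the framework's built-in weighted cross-entropy rather than computed by hand as here.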




Ultrasound Image Classification and Segmentation Using Deep Learning Applications


Book Description

Breast cancer is one of the most common diseases with a high mortality rate. Early detection and diagnosis using computer-aided methods is considered one of the most efficient ways to control the mortality rate. Various classical methods have been applied to segment the region of interest from breast ultrasound images. In recent years, deep learning (DL) based implementations have achieved state-of-the-art results for various diseases, in both accuracy and inference speed, on large datasets. We propose two supervised learning-based approaches with adaptive optimization methods to segment breast cancer tumours from ultrasound images. The first approach switches from Adam to Stochastic Gradient Descent (SGD) partway through training. The second employs an adaptive learning-rate technique with element-wise scaling of the learning rates to achieve rapid training. We implement our algorithms on four state-of-the-art architectures (AlexNet, VGG19, ResNet50, and U-Net++) for segmentation of cancer lesions in breast ultrasound images, and evaluate the Intersection over Union (IoU) of each architecture under three training regimes: 1) unchanged, i.e., the SGD optimizer throughout; 2) substituting SGD for Adam after three quarters of the total epochs; and 3) the adaptive optimization technique. Despite the superior training performance of recent DL-based applications on medical ultrasound images, most models lack generalization and fail to achieve high accuracy on new datasets. To overcome the generalization problem, we introduce semi-supervised learning methods using transformers, which are designed for sequence-to-sequence prediction. With their innate global self-attention mechanisms, transformers have recently emerged as a viable alternative architecture; however, because they capture little low-level detail, their localization ability may be limited.
To overcome this problem, we designed a network that takes advantage of both the transformer and UNet++ architectures. The transformer uses tokenized image patches from a Convolutional Neural Network (CNN) feature map as its input sequence for extracting global context. To achieve precise localization, the decoder upsamples the encoded features, which are then combined with the high-resolution CNN feature maps. As an extension of our implementation, we also apply the adaptive optimization approach to this architecture to further enhance its ability to segment breast cancer tumours from ultrasound images. The proposed method achieves better results than the supervised learning-based image segmentation algorithms.
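The first approach described above, replacing Adam with SGD after three quarters of the total epochs, can be illustrated on a toy quadratic objective. The function below is a minimal sketch, not the thesis code: it runs hand-written Adam updates for the first 75% of the epochs and plain SGD for the remainder; the function name, objective, and hyperparameters are all illustrative.

```python
import numpy as np

def train_with_optimizer_switch(grad, x0, epochs=200, switch_frac=0.75,
                                lr_adam=0.1, lr_sgd=0.02,
                                beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimise a function by gradient steps, switching from Adam to
    plain SGD after `switch_frac` of the epochs (toy sketch only)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x); v = np.zeros_like(x)
    switch_at = int(epochs * switch_frac)
    for t in range(1, epochs + 1):
        g = grad(x)
        if t <= switch_at:                  # Adam phase: adaptive steps
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g * g
            m_hat = m / (1 - beta1 ** t)    # bias-corrected first moment
            v_hat = v / (1 - beta2 ** t)    # bias-corrected second moment
            x -= lr_adam * m_hat / (np.sqrt(v_hat) + eps)
        else:                               # SGD phase for the last quarter
            x -= lr_sgd * g
    return x

# Toy objective f(x) = ||x - 3||^2 with gradient 2(x - 3); optimum at x = 3.
x_final = train_with_optimizer_switch(lambda x: 2 * (x - 3.0), x0=[0.0, 10.0])
print(x_final)
```

The intuition matching the blurb: Adam makes fast early progress, while the final SGD phase damps Adam's oscillations near the optimum and often generalizes better.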




Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support


Book Description

This book constitutes the refereed joint proceedings of the Third International Workshop on Deep Learning in Medical Image Analysis, DLMIA 2017, and the 6th International Workshop on Multimodal Learning for Clinical Decision Support, ML-CDS 2017, held in conjunction with the 20th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2017, in Québec City, QC, Canada, in September 2017. The 38 full papers presented at DLMIA 2017 and the 5 full papers presented at ML-CDS 2017 were carefully reviewed and selected. The DLMIA papers focus on the design and use of deep learning methods in medical imaging. The ML-CDS papers discuss new techniques of multimodal mining/retrieval and their use in clinical decision support.




Automatic Breast Ultrasound Image Segmentation: A Survey


Book Description

Breast cancer is one of the leading causes of cancer death among women worldwide. In clinical routine, automatic breast ultrasound (BUS) image segmentation is essential for cancer diagnosis and treatment planning, yet it remains very challenging.




Automated breast cancer detection and classification using ultrasound images: A survey


Book Description

Breast cancer is the second leading cause of death for women all over the world. Since the cause of the disease remains unknown, early detection and diagnosis are the key to breast cancer control: they can increase the success of treatment, save lives, and reduce costs. Ultrasound imaging is one of the most frequently used diagnostic tools to detect and classify abnormalities of the breast.




Brain Tumor MRI Image Segmentation Using Deep Learning Techniques


Book Description

Brain Tumor MRI Image Segmentation Using Deep Learning Techniques offers a description of deep learning approaches used for the segmentation of brain tumors. The book demonstrates core concepts of deep learning algorithms by using diagrams, data tables and examples to illustrate brain tumor segmentation. After introducing basic concepts of deep learning-based brain tumor segmentation, sections cover techniques for modeling, segmentation and properties. A focus is placed on the application of different types of convolutional neural networks, such as single-path, multi-path, fully convolutional networks, cascaded convolutional neural networks, Long Short-Term Memory - Recurrent Neural Networks and Gated Recurrent Units, and more. The book also highlights how the use of deep neural networks can address new questions and protocols, as well as improve upon existing challenges in brain tumor segmentation.

- Provides readers with an understanding of deep learning-based approaches in the field of brain tumor segmentation, including preprocessing techniques
- Integrates recent advancements in the field, including the transformation of low-resolution brain tumor images into super-resolution images using deep learning-based methods, single-path Convolutional Neural Network based brain tumor segmentation, and much more
- Includes coverage of Long Short-Term Memory (LSTM) based Recurrent Neural Networks (RNN), Gated Recurrent Unit (GRU) based Recurrent Neural Networks (RNN), Generative Adversarial Networks (GAN), Auto Encoder based brain tumor segmentation, and Ensemble deep learning model based brain tumor segmentation
- Covers research issues and the future of deep learning-based brain tumor segmentation




An Adaptive Region Growing based on Neutrosophic Set in Ultrasound Domain for Image Segmentation


Book Description

Breast tumor segmentation in ultrasound is important for breast ultrasound (BUS) quantitative analysis and clinical diagnosis. Although this topic has been studied for a long time, accurately segmenting tumors in BUS images remains challenging due to speckle noise and inconsistent tissue background. To overcome these difficulties, we formulate breast tumor segmentation as a classification problem in the neutrosophic set (NS) domain, which has previously been used to remove speckle noise and enhance contrast in BUS images. A similarity score and a homogeneity value are calculated in the NS domain to characterize each pixel of the BUS image. Based on these, seed regions are selected by an adaptive Otsu-based thresholding method and morphological operations, and an adaptive region growing approach is then applied to obtain candidate tumor regions in the NS domain.
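The seed-selection-plus-growing pipeline can be sketched outside the NS domain using plain pixel intensities: Otsu's method picks a threshold separating lesion from background, a below-threshold pixel seeds the region, and the region grows over 4-connected neighbours of similar intensity. This is a simplified stand-in for the paper's NS-domain similarity and homogeneity criteria; the toy image, tolerance, and function names are illustrative.

```python
import numpy as np
from collections import deque

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the threshold maximising between-class variance."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    centers = (edges[:-1] + edges[1:]) / 2.0
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to each bin
    mu = np.cumsum(p * centers)             # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # empty classes contribute zero
    return centers[np.argmax(sigma_b)]

def region_grow(img, seed, tol=0.2):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    intensity is within `tol` of the seed (a stand-in for the NS-domain
    similarity/homogeneity criterion described above)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed]); mask[seed] = True
    ref = img[seed]
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(img[nr, nc] - ref) <= tol:
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# Toy "BUS" image: dark 6x6 lesion (0.2) on a brighter background (0.8).
img = np.full((16, 16), 0.8)
img[5:11, 5:11] = 0.2
t = otsu_threshold(img)
seed = tuple(np.argwhere(img < t)[0])       # any pixel below the threshold
mask = region_grow(img, seed)
print(mask.sum())
```

In the paper the thresholding is applied adaptively in the NS domain and followed by morphological cleanup; this sketch only shows the control flow of seeding and growing.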




Deep Learning and Convolutional Neural Networks for Medical Image Computing


Book Description

This book presents a detailed review of the state of the art in deep learning approaches for semantic object detection and segmentation in medical image computing, and large-scale radiology database mining. A particular focus is placed on the application of convolutional neural networks, with the theory supported by practical examples. Features: highlights how the use of deep neural networks can address new questions and protocols, as well as improve upon existing challenges in medical image computing; discusses the insightful research experience of Dr. Ronald M. Summers; presents a comprehensive review of the latest research and literature; describes a range of different methods that make use of deep learning for object or landmark detection tasks in 2D and 3D medical imaging; examines a varied selection of techniques for semantic segmentation using deep learning principles in medical imaging; introduces a novel approach to interleaved text and image deep mining on a large-scale radiology image database.




A Guide to Convolutional Neural Networks for Computer Vision


Book Description

Computer vision has become increasingly important and effective in recent years due to its wide-ranging applications in areas as diverse as smart surveillance and monitoring, health and medicine, sports and recreation, robotics, drones, and self-driving cars. Visual recognition tasks, such as image classification, localization, and detection, are the core building blocks of many of these applications, and recent developments in Convolutional Neural Networks (CNNs) have led to outstanding performance in these state-of-the-art visual recognition tasks and systems. As a result, CNNs now form the crux of deep learning algorithms in computer vision. This self-contained guide will benefit those who seek both to understand the theory behind CNNs and to gain hands-on experience in applying CNNs in computer vision. It provides a comprehensive introduction to CNNs, starting with the essential concepts behind neural networks: training, regularization, and optimization of CNNs. The book also discusses a wide range of loss functions, network layers, and popular CNN architectures, reviews the different techniques for the evaluation of CNNs, and presents some popular CNN tools and libraries that are commonly used in computer vision. Further, this text describes and discusses case studies related to the application of CNNs in computer vision, including image classification, object detection, semantic segmentation, scene understanding, and image generation. This book is ideal for undergraduate and graduate students, as no prior background knowledge in the field is required to follow the material, as well as new researchers, developers, engineers, and practitioners who are interested in gaining a quick understanding of CNN models.




Computer Aided Detection for Breast Lesion in Ultrasound and Mammography


Book Description

In the field of breast cancer imaging, traditional Computer Aided Detection (CAD) systems were designed with limited computing resources and used scanned films of poor image quality, resulting in a less robust application process. With current advances in technology, it is possible to perform 3D imaging and to acquire high-quality Full-Field Digital Mammograms (FFDM). Automated Breast Ultrasound (ABUS) has been proposed to produce a full 3D scan of the breast automatically, with reduced operator dependency. With ABUS, lesion segmentation and tracking changes over time are challenging tasks, as the 3D nature of the images makes the analysis difficult and tedious for radiologists. One goal of this thesis is to develop a framework for breast lesion segmentation in ABUS volumes. The 3D lesion volume, in combination with texture and contour analysis, could provide valuable information to assist radiologists in diagnosis. Although ABUS volumes are of great interest, x-ray mammography is still the gold-standard imaging modality for breast cancer screening, owing to its fast acquisition and cost-effectiveness. Moreover, with the advent of deep learning methods based on Convolutional Neural Networks (CNNs), modern CAD systems are able to learn automatically which imaging features are most relevant to a diagnosis, boosting their usefulness. One limitation of CNNs is that they require large training datasets, which are very scarce in medical imaging. In this thesis, the issue of limited data is addressed with two strategies: (i) using image patches as inputs rather than the full-sized image, and (ii) using transfer learning, in which the knowledge obtained by training on one task is reused for another related task (also known as domain adaptation).
In this regard, a CNN trained on a very large dataset of natural images is first adapted to classify mass versus non-mass image patches in Screen-Film Mammograms (SFM), and the newly trained CNN model is then adapted to detect masses in FFDM. The prospect of transfer learning directly between natural images and FFDM is also investigated. Two public datasets, CBIS-DDSM and INbreast, are used for this purpose. In the final phase of the research, a fully automatic mass detection framework is proposed that takes the whole mammogram as input (instead of image patches) and outputs the localisation of the lesion within the mammogram. For this purpose, the OPTIMAM Mammography Image Database (OMI-DB) is used. The results obtained in this thesis show higher performance than state-of-the-art methods, indicating that the proposed methods and frameworks have the potential to be incorporated into advanced CAD systems for use by radiologists in breast cancer screening.
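Strategy (i), patch-based training, can be sketched as a simple sliding-window extraction: a single full-sized image yields a grid of overlapping patches, multiplying the number of training samples available from a small dataset. The function name and parameters below are illustrative, not the thesis implementation.

```python
import numpy as np

def extract_patches(image, patch=64, stride=32):
    """Slide a window over a 2D image and return overlapping patches.

    Patch-based training (strategy (i) above) turns each image into many
    training samples, easing the limited-data problem in medical imaging.
    """
    h, w = image.shape
    patches = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            patches.append(image[r:r + patch, c:c + patch])
    return np.stack(patches)

# A toy 256x256 "mammogram" yields a 7x7 grid of 64x64 patches.
img = np.random.rand(256, 256)
p = extract_patches(img)
print(p.shape)
```

In practice the patches would be labeled mass or non-mass from the ground-truth annotations before being fed to the classifier.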