Segmentation of Brain Tumors Using Three-Dimensional Convolutional Neural Network on MRI Images 3D MedImg-CNN

Ahmed Kharrat, Mahmoud Neji
DOI: 10.4018/IJCINI.20211001.oa4

Abstract

We consider the problem of fully automatic brain tumor segmentation in MR images containing glioblastomas. We propose a three-dimensional convolutional neural network (3D MedImg-CNN) approach that achieves high performance while being extremely efficient, a balance that existing methods have struggled to achieve. Our 3D MedImg-CNN is trained directly on the raw image modalities and thus learns a feature representation directly from the data. We propose a new cascaded architecture with two pathways that each model details in tumors. Fully exploiting the convolutional nature of our model also allows us to segment a complete cerebral image in one minute. The performance of the proposed 3D MedImg-CNN segmentation method is measured using the Dice similarity coefficient (DSC). In experiments on the 2013, 2015 and 2017 BraTS challenge datasets, we show that our approach is among the top-performing methods in the literature, while also being very efficient.
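The Dice similarity coefficient used for evaluation measures the overlap between a predicted segmentation mask and the ground truth. The following is a minimal, illustrative NumPy sketch; the function name and the convention of returning 1.0 when both masks are empty are our assumptions, not taken from the paper:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement (a convention)
    return 2.0 * intersection / total
```

For example, a prediction and a ground truth mask that each contain 3 voxels and agree on 2 of them give a DSC of 2·2/(3+3) ≈ 0.67.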

1. Introduction

The brain, the control center of the central nervous system, is responsible for executing all activities of the human body. A mass or growth of abnormal cells in the brain is called a brain tumor; some brain tumors are noncancerous (benign) and some are cancerous (malignant). Brain tumors have recently become the second leading cause of death among children and young adults suffering from cancer. The Central Brain Tumor Registry of the United States (CBTRUS) reported 64,530 newly diagnosed cases of primary brain and central nervous system tumors since 2011, and the number of people living with the disease has exceeded 600,000 (Kharrat et al., 2015; Abraham et al., 2017). Magnetic resonance imaging (MRI) is a widely used technique for assessing these tumors, but it produces a large amount of data, which prevents manual segmentation in a reasonable time and leads to significant variation between experts when the global 3D brain structure is not taken into account (Kharrat & Néji, 2019). Automatic and reliable segmentation methods are therefore required, which has led to a growing number of machine learning studies based on neuroimaging data that aim both to develop diagnostic tools for brain MRI classification and automatic volume segmentation, and to understand the mechanisms of diseases, including neurodegenerative ones. The goal of brain tumor segmentation is to detect the tumorous area of the brain based on texture information in MRI images. Segmentation methods typically look for active tumor tissue (vascularized or not), necrotic tissue and edema (swelling near a tumor) by exploiting multiple MRI modalities, such as T1, T2, T1-Contrasted (T1C) and FLAIR. Convolutional neural networks (CNNs) (Schmidhuber, 2015) are a type of deep artificial neural network that has recently become widely used in the field of computer vision.
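Since segmentation methods exploit several MRI modalities at once, a common input representation stacks the co-registered T1, T2, T1C and FLAIR volumes into a single multi-channel array. The sketch below is illustrative only; the volume size, channel ordering and function name are our assumptions:

```python
import numpy as np

def stack_modalities(t1, t2, t1c, flair):
    """Stack four co-registered MRI volumes into one multi-channel input
    of shape (4, D, H, W), a common layout for feeding a 3D CNN."""
    return np.stack([t1, t2, t1c, flair], axis=0)

# Example with zero-filled stand-in volumes (real data would be loaded
# from the scanner files of each modality).
vols = [np.zeros((64, 64, 64), dtype=np.float32) for _ in range(4)]
x = stack_modalities(*vols)  # x.shape == (4, 64, 64, 64)
```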
They have been applied to many tasks, including image classification (Kharrat & Néji, 2018; Krizhevsky et al., 2012; Paul Justin et al., 2017; Schmidhuber, 2015; Simonyan & Zisserman, 2016), super-resolution (Kim et al., 2016) and semantic segmentation (Shelhamer et al., 2017). Recent publications also report their use in medical image segmentation and classification. For instance, Deng et al. (2019) developed a novel brain tumor segmentation method by integrating fully convolutional neural networks (FCNN) and dense micro-block difference features (DMDF) into a unified framework, so as to obtain segmentation results with appearance and spatial consistency. These authors first proposed a local feature describing the rotation-invariant property of texture. To deal with changes of rotation and scale in texture images, Fisher vector encoding is used to analyze the texture feature; it can incorporate scale information without increasing the dimension of the local feature. The obtained local features are strongly robust to rotation and gray-intensity variation. The non-quantifiable local feature is then fused into the FCNN to perform fine boundary segmentation. Since brain tumors occupy a small portion of the image, deconvolutional layers are designed with skip connections to obtain a high-quality feature map. Tong et al. (2019) introduced an automatic brain tumor segmentation method using kernel sparse coding and texture features from Fluid-Attenuated Inversion Recovery (FLAIR) images. Initially, the MRI images are pre-processed to reduce noise and enhance contrast. Sparse coding is then carried out on the first- and second-order statistical eigenvectors extracted from the raw MRIs. Kernel dictionary learning is used to extract non-linear features and construct two adaptive dictionaries, for healthy and pathological tissues respectively.
After that, a kernel-clustering algorithm based on dictionary learning is developed to code the voxels, and the linear discrimination method is used to classify the target pixels. Finally, a flood-fill operation is used to improve the quality of the segmentation. In other research, Sajjad et al. (2019) developed a convolutional neural network (CNN) based multi-grade brain tumor classification system. First, tumor regions in an MR image are segmented using a deep learning technique. Second, extensive data augmentation is used to effectively train the proposed system. Finally, a pre-trained CNN model is fine-tuned for brain tumor grade classification. An automated brain tumor segmentation algorithm using a deep convolutional neural network (DCNN) is presented by Hussain et al. (2017). A patch-based approach with an inception module is used to train the deep network by extracting two co-centric patches of different sizes from the input images. Recent developments in deep neural networks, such as dropout, batch normalization, non-linear activations and the inception module, are used to build a new linear nexus architecture. The model overcomes the over-fitting problem arising from the scarcity of data by using dropout regularization. Images are normalized and bias-field corrected in a pre-processing step, and the extracted patches are then passed through the DCNN, which assigns an output label to the central pixel of each patch. A two-phase weighted training method is introduced and evaluated on the BRATS 2013 and BRATS 2015 datasets. A similar approach was used by Wang et al. (2018), who proposed a novel deep learning-based framework for interactive segmentation that incorporates CNNs into a bounding-box and scribble-based segmentation pipeline. They proposed image-specific fine-tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised or supervised.
They also proposed a weighted loss function that considers network- and interaction-based uncertainty during fine-tuning. They applied this framework to two applications: 2D segmentation of multiple organs from fetal magnetic resonance (MR) slices, where only two of the organ types were annotated for training, and 3D segmentation of the brain tumor core (excluding edema) and the whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training. Kamnitsas et al. (2017), on the other hand, introduce a 3D CNN architecture designed for various segmentation tasks involving MR images of the brain. The authors benchmark their approach on the BraTS (Menze et al., 2015) and ISLES (Maier et al., 2017) challenges. Their approach comprises a CNN with 3D filters and a conditional random field that smooths the output of the CNN. To address the high memory demand of 3D CNNs, the authors propose dividing the input images into regions. Notable in their work is the use of an architecture with two pathways: the first receives the subregion of the original image that is to be segmented, while the second receives a larger region that is downsampled to a lower resolution before being fed to the network. This enables the network to learn global features of the images as well.
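The two-pathway idea described above, like the co-centric patches of Hussain et al., can be sketched as follows: for each voxel to classify, extract a small full-resolution patch and a larger context patch centered on the same voxel, downsampling the latter so both pathways see inputs of the same spatial size. This is an illustrative NumPy sketch; the patch sizes (9 and 27) and the downsampling factor (3) are our assumptions, not the values used in the cited papers:

```python
import numpy as np

def cocentric_patches(volume, center, small=9, large=27, factor=3):
    """Extract two co-centric cubic patches around a voxel: a small
    full-resolution patch for fine detail, and a larger patch downsampled
    by `factor` for global context (here large // factor == small, so
    both patches end up with the same spatial size)."""
    def crop(size):
        half = size // 2
        slices = tuple(slice(c - half, c + half + 1) for c in center)
        return volume[slices]

    local = crop(small)                                   # detail pathway
    context = crop(large)[::factor, ::factor, ::factor]   # context pathway
    return local, context

# Example on a synthetic 41x41x41 volume; both patches come out 9x9x9.
vol = np.arange(41 ** 3, dtype=np.float32).reshape(41, 41, 41)
local, context = cocentric_patches(vol, (20, 20, 20))
```

The local patch keeps the voxel to segment at its exact center, while the context patch covers a region three times as wide at a third of the resolution, mirroring the low-resolution second pathway of Kamnitsas et al.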
