A Hybrid Multimodal Medical Image Fusion Technique for CT and MRI Brain Images

Leena Chandrashekar, Sreedevi A.
Copyright: © 2018 | Pages: 15
DOI: 10.4018/IJCVIP.2018070101

Abstract

Estimating the type, size, location and spread of a brain tumor is vital in the diagnosis and treatment of brain cancer. Fused CT and MRI brain images assist in faster detection and diagnosis of brain tumors and provide superior results compared to individual CT or MRI images. Multiscale transforms (MSTs) are widely used for fusing multimodal images such as CT and MRI. However, they have a few drawbacks, including reduced contrast, poor edge detection, redundancy and high computation time. This article describes how MSTs coupled with sparse representation (SR) aim to overcome these drawbacks. The Non-Subsampled Contourlet Transform (NSCT) is an MST widely used for fusing multifocal images. Therefore, a novel technique combining NSCT and SR is proposed to obtain better-quality fused CT and MRI images. The experimental results show superior performance.

Introduction

Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET) and Single-Photon Emission Computed Tomography (SPECT) are widely accepted imaging techniques for identifying glioblastomas and other brain tumors. CT images detect the intense, irregular, thick margin of tumor cells, while MRI images indicate the hypo- or isointense mass within the white matter of the brain. PET and SPECT images, in turn, show the growth or spread of the tumor cells and a 3D view of the tumor, respectively (Gaillard et al.). Glioblastomas are highly aggressive tumors found in the adult brain; they are generally resistant to therapy and have a very poor prognosis. A single imaging technique is therefore never sufficient to assess them. Moreover, each of these imaging techniques captures complementary information about the tumor and the surrounding tissue, which makes it challenging for radiologists to analyze the images and provide a diagnosis. Although MRI is the gold-standard technique for detecting brain tumors, the task can become demanding when tumors look similar to normal brain tissue. This calls for multimodal medical image fusion (Hies et al., 2011). Recent developments indicate that fusing multimodal images such as CT and MRI enhances the accuracy of detection and diagnosis of brain tumors. In general, multimodal medical image fusion is the process of combining two or more complementary images, obtained from different imaging modalities, usually with different resolutions and acquired at different times, into a single comprehensive image (James and Dasarathy, 2014).

Multimodal medical image fusion is implemented either in the spatial domain or in the transform domain. In the spatial domain, corresponding pixels from images such as CT, MRI or PET are combined directly by means of fusion rules. However, the quality of the resulting fused images is low due to poor edge and contour detection and blurring, which is the main drawback of spatial-domain fusion. Edge and contour representation is vital in analyzing medical images, as it characterizes the size and outline of a tumor. Various multiscale transforms (MSTs), such as the Discrete Wavelet Transform (DWT) (Naidu and Raol, 2008; Rahman et al., 2010), the Laplacian Pyramid (LP) (Burt and Adelson, 1983; Tan et al., 2013; Sahu et al., 2014) and the Contourlet Transform (CTT) (Do and Vetterli, 2005), fuse multimodal images in the transform domain. These transforms consist of low-pass and high-pass filters that decompose the source images into low-frequency and high-frequency images. These images are down-sampled by a factor of two and are therefore called sub-images. Different fusion rules, ranging from simple addition to averaging or weighted averaging, combine the corresponding high-frequency and low-frequency sub-images. Finally, a matching filtering and up-sampling process generates the fused image. This process is known as 1-level decomposition; for multilevel decomposition, the low-pass sub-image is further decomposed by the same process to obtain 2-level or 3-level fusion, and so on. In fact, the number of MST decomposition levels greatly influences the quality of the fused images (Serikawa et al., 2012): higher levels provide better results than single-level decomposition. A minimal sketch of this transform-domain pipeline is given below.
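To make the transform-domain pipeline concrete, the following is a minimal illustrative sketch of DWT-based fusion using the PyWavelets library, with an averaging rule for the low-frequency sub-images and a max-absolute rule for the high-frequency sub-images. It is not the NSCT-plus-SR method proposed in the article; the function name, fusion rules and parameter choices (db2 wavelet, 2-level decomposition) are assumptions made for illustration, and the inputs are assumed to be pre-registered, same-size grayscale CT and MRI arrays.

```python
# Illustrative sketch only: simple 2-level DWT fusion, not the NSCT + SR method of the article.
# Assumes two pre-registered grayscale images of identical size, given as NumPy arrays.
import numpy as np
import pywt


def dwt_fuse(ct_img: np.ndarray, mri_img: np.ndarray,
             wavelet: str = "db2", levels: int = 2) -> np.ndarray:
    """Fuse two registered grayscale images with simple DWT fusion rules."""
    # Multilevel decomposition: each image becomes one low-frequency approximation
    # plus (horizontal, vertical, diagonal) high-frequency detail sub-images per level.
    coeffs_ct = pywt.wavedec2(ct_img.astype(float), wavelet, level=levels)
    coeffs_mri = pywt.wavedec2(mri_img.astype(float), wavelet, level=levels)

    # Low-frequency sub-image: average the two approximations (simple fusion rule).
    fused = [(coeffs_ct[0] + coeffs_mri[0]) / 2.0]

    # High-frequency sub-images: keep the coefficient with the larger magnitude
    # (max-absolute rule), which tends to preserve edges and contours.
    for details_ct, details_mri in zip(coeffs_ct[1:], coeffs_mri[1:]):
        fused.append(tuple(
            np.where(np.abs(c) >= np.abs(m), c, m)
            for c, m in zip(details_ct, details_mri)
        ))

    # Inverse transform reconstructs the fused image from the fused sub-images.
    return pywt.waverec2(fused, wavelet)
```

The max-absolute rule for the detail sub-images is a common heuristic for retaining strong edges and contours from either source image, which is precisely where spatial-domain fusion tends to degrade; the article's proposed NSCT and SR combination targets the same weakness with a more sophisticated representation.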
