PMID: 19965249 DOI: 10.1109/IEMBS.2009.5335180

Abstract

We present an efficient method for the fusion of medical images captured with different modalities that enhances the original images and combines the complementary information of the various modalities. The contourlet transform has mainly been employed as a fusion technique for images obtained from the same or different modalities. The limited directional information of the dual-tree complex wavelet transform (DT-CWT) is rectified in the dual-tree complex contourlet transform (DT-CCT) by incorporating directional filter banks (DFB) into the DT-CWT. The DT-CCT produces images with improved contours and textures, while the property of shift invariance is retained. To improve the quality of the fused image, we propose a new fusion rule based on principal component analysis (PCA) that depends on the frequency components of the DT-CCT coefficients (contourlet domain). For the low-frequency components, the PCA method is adopted, and for the high-frequency components, the salient features are selected based on local energy. The final fused image is obtained by directly applying the inverse dual-tree complex contourlet transform (IDT-CCT) to the fused low- and high-frequency components. The experimental results show that the proposed method produces a fused image with extensive features across the input modalities.
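
The abstract describes a two-part fusion rule: a PCA-derived weighted average for the low-frequency subband and a local-energy maximum selection for the high-frequency subbands, followed by the inverse transform. The sketch below illustrates that rule under stated assumptions: PyWavelets' standard 2-D DWT (wavedec2/waverec2) stands in for the DT-CCT, which has no widely available library implementation, and the PCA weights are taken as the normalized components of the principal eigenvector of the subbands' covariance matrix, one common reading of PCA-based fusion. The paper's exact coefficients and transform may differ.

```python
import numpy as np
import pywt                               # PyWavelets: standard 2-D DWT stands in for the DT-CCT
from scipy.ndimage import uniform_filter


def pca_weights(low_a, low_b):
    """Fusion weights for the low-frequency subbands: normalized components
    of the principal eigenvector of the 2x2 covariance matrix of the bands
    (assumed interpretation of the PCA-based rule)."""
    cov = np.cov(np.vstack([low_a.ravel(), low_b.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    return v / v.sum()


def local_energy(band, win=3):
    """Local energy of a detail subband: windowed mean of squared coefficients."""
    return uniform_filter(band ** 2, size=win)


def fuse(img_a, img_b, wavelet="db2", level=3):
    """Fuse two registered, same-size grayscale images (float arrays).
    Low-frequency subband: PCA-weighted average of the two inputs.
    High-frequency subbands: keep the coefficient with larger local energy.
    The fused image is recovered with the inverse transform."""
    coeffs_a = pywt.wavedec2(img_a, wavelet, level=level)
    coeffs_b = pywt.wavedec2(img_b, wavelet, level=level)

    # Low-frequency fusion via PCA-derived weights
    w = pca_weights(coeffs_a[0], coeffs_b[0])
    fused = [w[0] * coeffs_a[0] + w[1] * coeffs_b[0]]

    # High-frequency fusion via local-energy selection, per level and orientation
    for details_a, details_b in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(
            np.where(local_energy(ba) >= local_energy(bb), ba, bb)
            for ba, bb in zip(details_a, details_b)
        ))

    return pywt.waverec2(fused, wavelet)
```

In this sketch the coarsest approximation band carries most of the intensity information, so it is blended rather than selected, while the detail bands (edges, contours, textures) are chosen coefficient-wise from whichever modality is locally more energetic, mirroring the low-/high-frequency split described in the abstract.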

* Title and MeSH Headings from MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.