Title: Multimodal Medical Image Fusion using Rolling Guidance Filter with CNN and Nuclear Norm Minimization
VOLUME: 16 ISSUE: 10
Author(s):Shuaiqi Liu, Lu Yin, Siyu Miao, Jian Ma, Shuai Cong* and Shaohai Hu
Affiliation: College of Electronic and Information Engineering, Hebei University, Baoding, Hebei; Industrial and Commercial College, Hebei University, Baoding, Hebei; College of Computer and Information, Beijing Jiaotong University, Beijing
Keywords: Medical image fusion, rolling guidance filter, nuclear norm minimization, shared similarity patches, convolutional neural network, deep learning.
Abstract:
Background: Medical image fusion is of great importance for the diagnosis and treatment of diseases. In recent years, a number of multi-modal medical image fusion algorithms have been proposed to present detailed context for disease diagnosis more clearly and conveniently. Recently, nuclear norm minimization and deep learning have been applied effectively in image processing.
Methods: A multi-modal medical image fusion method using a rolling guidance filter (RGF) together with convolutional neural network (CNN)-based feature mapping and nuclear norm minimization (NNM) is proposed. First, the medical images are decomposed into base layer and detail layer components using RGF. Next, the base fused image is obtained through a pre-trained CNN model: the pre-trained CNN extracts the salient features of the base layer components, and the activity level measurement is computed from the regional energy of the CNN-based fusion maps. Then, the detail fused image is obtained by fusing the detail layer components with NNM. Finally, the base and detail fused images are combined into the final fused result.
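The pipeline described above can be outlined in a short Python sketch. The code below is a minimal illustration under several assumptions: it relies on cv2.ximgproc.rollingGuidanceFilter from opencv-contrib-python for the RGF decomposition, substitutes a plain regional-energy activity measure for the paper's pre-trained CNN feature mapping, and applies singular value soft-thresholding (the proximal operator of the nuclear norm) to the two stacked detail layers instead of to groups of shared similarity patches; all parameter values are illustrative rather than the authors' settings.

import cv2
import numpy as np
from scipy.ndimage import uniform_filter


def rgf_decompose(img, sigma_s=3.0, sigma_r=25.5, iters=4):
    """Split an image into base and detail layers with a rolling guidance filter."""
    img = img.astype(np.float32)
    base = cv2.ximgproc.rollingGuidanceFilter(
        img, d=-1, sigmaColor=sigma_r, sigmaSpace=sigma_s, numOfIter=iters)
    return base, img - base


def regional_energy(layer, win=7):
    """Regional energy (local mean of squared intensities) as the activity measure."""
    return uniform_filter(layer ** 2, size=win)


def fuse_base(base_a, base_b, win=7):
    """Choose-max fusion of the base layers driven by regional energy."""
    mask = regional_energy(base_a, win) >= regional_energy(base_b, win)
    return np.where(mask, base_a, base_b)


def svt_fuse_detail(det_a, det_b, tau_ratio=0.1):
    """NNM-style fusion of detail layers via singular value soft-thresholding."""
    stacked = np.stack([det_a.ravel(), det_b.ravel()], axis=1)
    u, s, vt = np.linalg.svd(stacked, full_matrices=False)
    s = np.maximum(s - tau_ratio * s.max(), 0.0)   # proximal step for the nuclear norm
    low_rank = (u * s) @ vt                        # low-rank (shrunk) detail pair
    # Keep the larger-magnitude detail at each pixel after shrinkage.
    fused = np.where(np.abs(low_rank[:, 0]) >= np.abs(low_rank[:, 1]),
                     low_rank[:, 0], low_rank[:, 1])
    return fused.reshape(det_a.shape)


def fuse(img_a, img_b):
    """Fuse two co-registered single-channel medical images (uint8 arrays)."""
    base_a, det_a = rgf_decompose(img_a)
    base_b, det_b = rgf_decompose(img_b)
    fused = fuse_base(base_a, base_b) + svt_fuse_detail(det_a, det_b)
    return np.clip(fused, 0, 255).astype(np.uint8)

Calling fuse(ct, mri) on two co-registered uint8 slices returns a single fused uint8 image; replacing the regional-energy stand-in with feature maps from an actual pre-trained CNN would bring the sketch closer to the method described here.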
Results: Comparisons with state-of-the-art fusion algorithms show that the proposed algorithm achieves the best results in both visual evaluation and objective metrics.
Conclusion: The fusion algorithm using RGF and CNN-based feature mapping, combined with NNM, can improve the fusion quality and suppress artifacts and blocking effects in the fused results.