IFCNN: A general image fusion framework based on convolutional neural network
Abstract
In this paper, we propose a general image fusion framework based on the convolutional neural network, named IFCNN. Inspired by transform-domain image fusion algorithms, we first utilize two convolutional layers to extract salient image features from multiple input images. Afterwards, the convolutional features of the input images are fused by an appropriate fusion rule (elementwise-max, elementwise-min, or elementwise-mean), which is selected according to the type of input images. Finally, the fused features are reconstructed by two convolutional layers to produce the informative fused image. The proposed model is fully convolutional, so it can be trained end-to-end without any post-processing procedures. To fully train the model, we have generated a large-scale multi-focus image dataset based on the large-scale RGB-D dataset NYU-D2; it provides ground-truth fusion images and contains more diverse and larger images than the existing datasets for image fusion. Without fine-tuning on other types of image datasets, the proposed model demonstrates better generalization ability than existing image fusion models when fusing various types of images, such as multi-focus, infrared-visual, multi-modal medical, and multi-exposure images. Moreover, the experimental results verify that our model achieves comparable or even better results than the state-of-the-art image fusion algorithms on four types of image datasets.
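The abstract describes a three-stage pipeline: convolutional feature extraction from each input image, an elementwise fusion rule, and convolutional reconstruction of the fused image. The following is a minimal PyTorch sketch of that structure; the class name `IFCNNSketch`, the channel width, and the kernel sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class IFCNNSketch(nn.Module):
    """Minimal sketch of the pipeline described in the abstract.

    Assumptions (not from the paper): 3-channel inputs, 64 feature
    channels, 3x3 kernels, and a feature-extraction branch shared
    across all inputs.
    """

    def __init__(self, fuse_rule="max", channels=64):
        super().__init__()
        # Two convolutional layers extract salient features per input.
        self.extract = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Two convolutional layers reconstruct the fused image.
        self.reconstruct = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )
        self.fuse_rule = fuse_rule

    def forward(self, inputs):
        # Stack per-input feature maps: shape (K, N, C, H, W).
        feats = torch.stack([self.extract(x) for x in inputs], dim=0)
        # Elementwise fusion rule, chosen according to the input type.
        if self.fuse_rule == "max":
            fused = feats.max(dim=0).values
        elif self.fuse_rule == "min":
            fused = feats.min(dim=0).values
        else:  # "mean"
            fused = feats.mean(dim=0)
        return self.reconstruct(fused)

# Usage: fuse two multi-focus images with the elementwise-max rule.
model = IFCNNSketch(fuse_rule="max")
a, b = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
fused = model([a, b])
```

Because every stage is convolutional and the fusion rule is a fixed elementwise operation, the whole network is differentiable end-to-end, which is what allows training without post-processing as the abstract states.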