Image fusion meets deep learning: A survey and perspective

H Zhang, H Xu, X Tian, J Jiang, J Ma - Information Fusion, 2021 - Elsevier
Image fusion, which refers to extracting and then combining the most meaningful information
from different source images, aims to generate a single image that is more informative and …

Deep learning with radiomics for disease diagnosis and treatment: challenges and potential

X Zhang, Y Zhang, G Zhang, X Qiu, W Tan, X Yin… - Frontiers in …, 2022 - frontiersin.org
The high-throughput extraction of quantitative imaging features from medical images for the
purpose of radiomic analysis, i.e., radiomics in a broad sense, is a rapidly developing and …

A multiscale framework with unsupervised learning for remote sensing image registration

Y Ye, T Tang, B Zhu, C Yang, B Li… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Registration for multisensor or multimodal image pairs with a large degree of distortions is a
fundamental task for many remote sensing applications. To achieve accurate and low-cost …

Murf: Mutually reinforcing multi-modal image registration and fusion

H Xu, J Yuan, J Ma - IEEE transactions on pattern analysis and …, 2023 - ieeexplore.ieee.org
Existing image fusion methods are typically limited to aligned source images and have to
“tolerate” parallaxes when images are unaligned. Simultaneously, the large variances …

Deep fusion transformer network with weighted vector-wise keypoints voting for robust 6d object pose estimation

J Zhou, K Chen, L Xu, Q Dou… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
One critical challenge in 6D object pose estimation from a single RGBD image is the efficient
integration of two different modalities, i.e., color and depth. In this work, we tackle this problem …

A robust multimodal remote sensing image registration method and system using steerable filters with first- and second-order gradients

Y Ye, B Zhu, T Tang, C Yang, Q Xu, G Zhang - ISPRS Journal of …, 2022 - Elsevier
Co-registration of multimodal remote sensing (RS) images (e.g., optical, infrared, LiDAR, and
SAR) is still an ongoing challenge because of nonlinear radiometric differences (NRD) and …

Causal knowledge fusion for 3D cross-modality cardiac image segmentation

S Guo, X Liu, H Zhang, Q Lin, L Xu, C Shi, Z Gao… - Information …, 2023 - Elsevier
Three-dimensional (3D) cross-modality cardiac image segmentation is critical for
cardiac disease diagnosis and treatment. However, it confronts the challenge of modality …

Shape-Former: Bridging CNN and Transformer via ShapeConv for multimodal image matching

J Chen, X Chen, S Chen, Y Liu, Y Rao, Y Yang… - Information …, 2023 - Elsevier
As with any data fusion task, the front-end of the pipeline for image fusion, aiming to collect
multitudinous physical properties from multimodal images taken by different types of …

Omnivec: Learning robust representations with cross modal sharing

S Srivastava, G Sharma - Proceedings of the IEEE/CVF …, 2024 - openaccess.thecvf.com
The majority of research in learning-based methods has been directed toward designing and training
networks for specific tasks. However, many of the learning-based tasks, across modalities …

CUFD: An encoder–decoder network for visible and infrared image fusion based on common and unique feature decomposition

H Xu, M Gong, X Tian, J Huang, J Ma - Computer Vision and Image …, 2022 - Elsevier
In this paper, we propose a novel method for visible and infrared image fusion by
decomposing feature information, which is termed CUFD. It adopts two pairs of encoder …