Image fusion meets deep learning: A survey and perspective
Image fusion, which refers to extracting and then combining the most meaningful information
from different source images, aims to generate a single image that is more informative and …
Deep learning with radiomics for disease diagnosis and treatment: challenges and potential
The high-throughput extraction of quantitative imaging features from medical images for the
purpose of radiomic analysis, i.e., radiomics in a broad sense, is a rapidly developing and …
A multiscale framework with unsupervised learning for remote sensing image registration
Registration for multisensor or multimodal image pairs with a large degree of distortions is a
fundamental task for many remote sensing applications. To achieve accurate and low-cost …
Murf: Mutually reinforcing multi-modal image registration and fusion
Existing image fusion methods are typically limited to aligned source images and have to
“tolerate” parallaxes when images are unaligned. Simultaneously, the large variances …
Deep fusion transformer network with weighted vector-wise keypoints voting for robust 6d object pose estimation
One critical challenge in 6D object pose estimation from a single RGBD image is efficient
integration of two different modalities, i.e., color and depth. In this work, we tackle this problem …
A robust multimodal remote sensing image registration method and system using steerable filters with first- and second-order gradients
Co-registration of multimodal remote sensing (RS) images (e.g., optical, infrared, LiDAR, and
SAR) is still an ongoing challenge because of nonlinear radiometric differences (NRD) and …
Causal knowledge fusion for 3D cross-modality cardiac image segmentation
Three-dimensional (3D) cross-modality cardiac image segmentation is critical for
cardiac disease diagnosis and treatment. However, it confronts the challenge of modality …
Shape-Former: Bridging CNN and Transformer via ShapeConv for multimodal image matching
As with any data fusion task, the front-end of the pipeline for image fusion, aiming to collect
multitudinous physical properties from multimodal images taken by different types of …
Omnivec: Learning robust representations with cross modal sharing
S Srivastava, G Sharma - Proceedings of the IEEE/CVF …, 2024 - openaccess.thecvf.com
The majority of research in learning-based methods has been towards designing and training
networks for specific tasks. However, many of the learning-based tasks, across modalities …
CUFD: An encoder–decoder network for visible and infrared image fusion based on common and unique feature decomposition
In this paper, we propose a novel method for visible and infrared image fusion by
decomposing feature information, termed CUFD. It adopts two pairs of encoder …