Equivariant multi-modality image fusion

Z Zhao, H Bai, J Zhang, Y Zhang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Multi-modality image fusion is a technique that combines information from different sensors
or modalities, enabling the fused image to retain complementary features from each modality …

CDDFuse: Correlation-driven dual-branch feature decomposition for multi-modality image fusion

Z Zhao, H Bai, J Zhang, Y Zhang, S Xu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Multi-modality (MM) image fusion aims to render fused images that maintain the merits of
different modalities, e.g., functional highlights and detailed textures. To tackle the challenge in …

Bi-level dynamic learning for jointly multi-modality image fusion and beyond

Z Liu, J Liu, G Wu, L Ma, X Fan, R Liu - arXiv preprint arXiv:2305.06720, 2023 - arxiv.org
Recently, multi-modality scene perception tasks, e.g., image fusion and scene understanding,
have attracted widespread attention for intelligent vision systems. However, early efforts …

Multi-interactive feature learning and a full-time multi-modality benchmark for image fusion and segmentation

J Liu, Z Liu, G Wu, L Ma, R Liu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Multi-modality image fusion and segmentation play a vital role in autonomous driving and
robotic operation. Early efforts focus on boosting the performance for only one task, e.g., …

Multi-modal gated mixture of local-to-global experts for dynamic image fusion

B Cao, Y Sun, P Zhu, Q Hu - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Infrared and visible image fusion aims to integrate comprehensive information from multiple
sources to achieve superior performance on various practical tasks, such as detection, over …

Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and …

L Tang, H Zhang, H Xu, J Ma - Information Fusion, 2023 - Elsevier
Image fusion aims to integrate complementary characteristics of source images into a single
fused image that better serves human visual observation and machine vision perception …

Self-supervised fusion for multi-modal medical images via contrastive auto-encoding and convolutional information exchange

Y Zhang, R Nie, J Cao, C Ma - IEEE Computational Intelligence …, 2023 - ieeexplore.ieee.org
This paper proposes a self-supervised framework based on contrastive auto-encoding and
convolutional information exchange for multi-modal medical image fusion tasks. It is well known …

U2Fusion: A unified unsupervised image fusion network

H Xu, J Ma, J Jiang, X Guo… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
This study proposes a novel unified and unsupervised end-to-end image fusion network,
termed U2Fusion, which is capable of solving different fusion problems, including multi …

CoCoNet: Coupled contrastive learning network with multi-level feature ensemble for multi-modality image fusion

J Liu, R Lin, G Wu, R Liu, Z Luo, X Fan - International Journal of Computer …, 2024 - Springer
Infrared and visible image fusion aims to provide an informative image by combining
complementary information from different sensors. Existing learning-based fusion …

Probing Synergistic High-Order Interaction in Infrared and Visible Image Fusion

N Zheng, M Zhou, J Huang, J Hou… - Proceedings of the …, 2024 - openaccess.thecvf.com
Infrared and visible image fusion aims to generate a fused image by integrating and
distinguishing complementary information from multiple sources. While the cross-attention …