Multi-interactive feature learning and a full-time multi-modality benchmark for image fusion and segmentation
Multi-modality image fusion and segmentation play a vital role in autonomous driving and
robotic operation. Early efforts focus on boosting the performance for only one task, e.g., …
Coconet: Coupled contrastive learning network with multi-level feature ensemble for multi-modality image fusion
Infrared and visible image fusion aims to provide an informative image by combining
complementary information from different sensors. Existing learning-based fusion …
M3net: multi-view encoding, matching, and fusion for few-shot fine-grained action recognition
Due to the scarcity of manually annotated data required for fine-grained video
understanding, few-shot fine-grained (FS-FG) action recognition has gained significant …
A task-guided, implicitly-searched and meta-initialized deep model for image fusion
Image fusion plays a key role in a variety of multi-sensor-based vision systems, especially
for enhancing visual quality and/or extracting aggregated features for perception. However …
PAIF: Perception-aware infrared-visible image fusion for attack-tolerant semantic segmentation
Infrared and visible image fusion is a powerful technique that combines complementary
information from different modalities for downstream semantic perception tasks. Existing …
Holistic Dynamic Frequency Transformer for image fusion and exposure correction
The correction of exposure-related issues is a pivotal component in enhancing the quality of
images, offering substantial implications for various computer vision tasks. Historically, most …
A semantic-driven coupled network for infrared and visible image fusion
X Liu, H Huo, J Li, S Pang, B Zheng - Information Fusion, 2024 - Elsevier
To adapt to high-level vision tasks, several infrared and visible image fusion
methods cascade with the downstream network to enhance the semantic information of …
Probing Synergistic High-Order Interaction in Infrared and Visible Image Fusion
Infrared and visible image fusion aims to generate a fused image by integrating and
distinguishing complementary information from multiple sources. While the cross-attention …
DPACFuse: Dual-Branch Progressive Learning for Infrared and Visible Image Fusion with Complementary Self-Attention and Convolution
Infrared and visible image fusion aims to generate a single fused image that not only
contains rich texture details and salient objects, but also facilitates downstream tasks …
MFHOD: Multi-modal image fusion method based on the higher-order degradation model
J Guo, W Zhan, Y Jiang, W Ge, Y Chen, X Xu… - Expert Systems with …, 2024 - Elsevier
The task of multimodal image fusion aims to preserve the respective advantages of each
modality, such as the detailed texture information from visible light images and the salient …