Collaborative compensative transformer network for salient object detection

J Chen, H Zhang, M Gong, Z Gao - Pattern Recognition, 2024 - Elsevier
Salient object detection (SOD) is of high significance for various computer vision
applications but is a challenging task due to the complicated scenes in real-world images …

MMA: Multi-Modal Adapter for Vision-Language Models

L Yang, RY Zhang, Y Wang… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Pre-trained Vision-Language Models (VLMs) have served as excellent foundation
models for transfer learning in diverse downstream tasks. However, tuning VLMs for few-shot …

WBNet: Weakly-supervised salient object detection via scribble and pseudo-background priors

Y Wang, R Wang, X He, C Lin, T Wang, Q Jia, X Fan - Pattern Recognition, 2024 - Elsevier
Weakly supervised salient object detection (WSOD) methods endeavor to leverage sparse
labels to obtain richer saliency cues in various ways. Among them, an effective approach is using …

A symmetric fusion learning model for detecting visual relations and scene parsing

X Liu, X Jing, Z Zheng, W Du, X Ding… - Scientific …, 2022 - Wiley Online Library
Visual relationship detection (VRD) aims to locate objects and recognize their pairwise
relationships for parsing scene graphs. To enable a higher understanding of the visual …

A temporal Human Activity Recognition Based on Stacked Auto Encoder and Extreme Learning Machine

M Gnouma, R Ejbali, M Zaied - 2023 9th International …, 2023 - ieeexplore.ieee.org
Human Activity Recognition (HAR) is one of the most important research areas in the fields
of health and human-machine interaction. The creation of several artificial intelligence …

A Comparative Study on Performance Improvement for Camouflaged Object Detection

A Ramani, M Naik, S Shah… - … on Sustainable Computing …, 2022 - ieeexplore.ieee.org
Due to the high intrinsic similarity between the foreground and background of images
containing camouflaged objects, it is hard even for humans to distinguish them from the background, let …