SeD: Semantic-aware discriminator for image super-resolution
Abstract: Generative Adversarial Networks (GANs) have been widely used to recover vivid
textures in image super-resolution (SR) tasks. In particular, one discriminator is utilized to …
BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP
Abstract: Contrastive Vision-Language Pre-training, known as CLIP, has shown promising
effectiveness in addressing downstream image recognition tasks. However, recent works …
Not all prompts are secure: A switchable backdoor attack against pre-trained vision transformers
Given the power of vision transformers, a new learning paradigm, pre-training and then
prompting, makes it more efficient and effective to address downstream visual recognition …
Parameter-efficient and memory-efficient tuning for vision transformer: a disentangled approach
Recent works on parameter-efficient transfer learning (PETL) show the potential to adapt a
pre-trained Vision Transformer to downstream recognition tasks with only a few learnable …
Few-Shot Image Classification of Crop Diseases Based on Vision–Language Models
Y Zhou, H Yan, K Ding, T Cai, Y Zhang - Sensors, 2024 - mdpi.com
Accurate crop disease classification is crucial for ensuring food security and enhancing
agricultural productivity. However, the existing crop disease classification algorithms …
Boostadapter: Improving test-time adaptation via regional bootstrapping
Adaptation of pretrained vision-language models such as CLIP to various downstream tasks
has raised great interest in recent research. Previous works have proposed a variety of …
MePT: Multi-Representation Guided Prompt Tuning for Vision-Language Model
Recent advancements in pre-trained Vision-Language Models (VLMs) have highlighted the
significant potential of prompt tuning for adapting these models to a wide range of …
BoostAdapter: Improving Vision-Language Test-Time Adaptation via Regional Bootstrapping
Adaptation of pretrained vision-language models such as CLIP to various downstream tasks
has raised great interest in recent research. Previous works have proposed a variety of …