Attention mechanisms in computer vision: A survey
Humans can naturally and effectively find salient regions in complex scenes. Motivated by
this observation, attention mechanisms were introduced into computer vision with the aim of …
Transformers in vision: A survey
Astounding results from Transformer models on natural language tasks have intrigued the
vision community to study their application to computer vision problems. Among their salient …
Exploring plain vision transformer backbones for object detection
We explore the plain, non-hierarchical Vision Transformer (ViT) as a backbone network for
object detection. This design enables the original ViT architecture to be fine-tuned for object …
Scaling up your kernels to 31x31: Revisiting large kernel design in CNNs
We revisit large kernel design in modern convolutional neural networks (CNNs). Inspired by
recent advances in vision transformers (ViTs), in this paper, we demonstrate that using a few …
Vision GNN: An image is worth graph of nodes
Network architecture plays a key role in the deep learning-based computer vision system.
The widely-used convolutional neural network and transformer treat the image as a grid or …
EfficientFormer: Vision transformers at MobileNet speed
Abstract Vision Transformers (ViT) have shown rapid progress in computer vision tasks,
achieving promising results on various benchmarks. However, due to the massive number of …
MAXIM: Multi-axis MLP for image processing
Recent progress on Transformers and multi-layer perceptron (MLP) models provide new
network architectural designs for computer vision tasks. Although these models proved to be …
MetaFormer is actually what you need for vision
Transformers have shown great potential in computer vision tasks. A common belief is their
attention-based token mixer module contributes most to their competence. However, recent …
UNeXt: MLP-based rapid medical image segmentation network
JMJ Valanarasu, VM Patel - … conference on medical image computing and …, 2022 - Springer
UNet and its latest extensions like TransUNet have been the leading medical image
segmentation methods in recent years. However, these networks cannot be effectively …
DaViT: Dual attention vision transformers
In this work, we introduce Dual Attention Vision Transformers (DaViT), a simple yet effective
vision transformer architecture that is able to capture global context while maintaining …