Patch-based separable transformer for visual recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
The computational complexity of transformers limits their wide deployment in frameworks for visual recognition. Recent work (Dosovitskiy et al., 2021) significantly accelerates network processing by reducing the resolution at the beginning of the network; however, it remains hard to generalize directly to downstream tasks such as object detection and segmentation, as CNNs do. In this paper, we present a transformer-based architecture that retains both local and global interactions within the network and is transferable to other downstream tasks. The proposed architecture reforms the original full spatial self-attention into pixel-wise local attention and patch-wise global attention. This factorization saves computational cost while retaining information at different granularities, which helps generate the multi-scale features required by different tasks. By exploiting the factorized attention, we construct a Separable Transformer (SeT) for visual modeling. Experimental results show that SeT outperforms previous state-of-the-art transformer-based approaches and its CNN counterparts on three major tasks: image classification, object detection, and instance segmentation.
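The factorization described above can be illustrated with a minimal sketch: pixel tokens attend only within their own patch (local), while one summary token per patch attends across patches (global). This is an assumption-laden toy version, not SeT's actual implementation; the function names, the single-head unprojected attention, and the mean-pooled patch summaries are all illustrative choices made here, whereas the real architecture would use learned projections and multi-head attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention without learned projections
    # (a simplification for illustration).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def separable_attention(x, num_patches):
    # x: (num_patches * patch_len, d) sequence of pixel tokens.
    n, d = x.shape
    patch_len = n // num_patches
    patches = x.reshape(num_patches, patch_len, d)

    # Pixel-wise local attention: each token attends only within its patch,
    # costing O(num_patches * patch_len^2) instead of O(n^2) overall.
    local = np.stack([attention(p, p, p) for p in patches])

    # Patch-wise global attention: one mean-pooled summary per patch
    # attends across all patches, costing only O(num_patches^2).
    summaries = patches.mean(axis=1)                   # (num_patches, d)
    glob = attention(summaries, summaries, summaries)  # (num_patches, d)

    # Broadcast the global context back to every pixel in each patch.
    out = local + glob[:, None, :]
    return out.reshape(n, d)

x = np.random.randn(16, 8)  # 4 patches of 4 tokens, dimension 8
y = separable_attention(x, num_patches=4)
print(y.shape)  # (16, 8)
```

With a sequence of n tokens split into p patches of n/p tokens each, the dominant cost drops from n^2 to p * (n/p)^2 + p^2 score computations, which is the source of the savings the abstract refers to.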