Deep learning in electron microscopy

JM Ede - Machine Learning: Science and Technology, 2021 - iopscience.iop.org
Deep learning is transforming most areas of science and technology, including electron
microscopy. This review paper offers a practical perspective aimed at developers with …

Optimization for deep learning: An overview

RY Sun - Journal of the Operations Research Society of China, 2020 - Springer
Optimization is a critical component in deep learning. We think optimization for neural
networks is an interesting topic for theoretical research for several reasons. First, its …
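
For orientation, a minimal sketch of the kind of update rule such surveys analyze: SGD with momentum on a toy quadratic objective. The objective, learning rate, and momentum value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sgd_momentum_step(theta, grad, velocity, lr=0.1, beta=0.9):
    """One update: v <- beta*v + grad, theta <- theta - lr*v."""
    velocity = beta * velocity + grad
    theta = theta - lr * velocity
    return theta, velocity

# Toy problem f(theta) = 0.5 * ||theta||^2, whose gradient is theta itself.
theta, velocity = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(100):
    theta, velocity = sgd_momentum_step(theta, theta, velocity)
print(theta)  # approaches the minimizer at the origin
```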

MViTv2: Improved multiscale vision transformers for classification and detection

Y Li, CY Wu, H Fan, K Mangalam… - Proceedings of the …, 2022 - openaccess.thecvf.com
In this paper, we study Multiscale Vision Transformers (MViTv2) as a unified architecture for
image and video classification, as well as object detection. We present an improved version …
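
A hedged sketch of one mechanism this line of work is associated with: attention in which the query sequence is pooled to a lower resolution and then added back to the attention output as a residual. The pooling operator, head count, and shapes below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PooledSelfAttention(nn.Module):
    def __init__(self, dim, num_heads=4, q_stride=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.q_stride = q_stride

    def forward(self, x):                      # x: (batch, seq_len, dim)
        # Pool the query sequence to a coarser resolution.
        q = F.avg_pool1d(x.transpose(1, 2), self.q_stride).transpose(1, 2)
        out, _ = self.attn(q, x, x)            # keys/values stay at full length
        return out + q                         # residual to the pooled queries

x = torch.randn(2, 16, 64)
print(PooledSelfAttention(64)(x).shape)        # torch.Size([2, 8, 64])
```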

Segmenter: Transformer for semantic segmentation

R Strudel, R Garcia, I Laptev… - Proceedings of the …, 2021 - openaccess.thecvf.com
Image segmentation is often ambiguous at the level of individual image patches and
requires contextual information to reach label consensus. In this paper we introduce …
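
One way to picture a transformer-style decoder for segmentation, sketched under assumptions (no decoder blocks, arbitrary dimensions): per-class masks are obtained as scalar products between patch embeddings and learned class embeddings, then upsampled to the image grid.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassEmbeddingMaskHead(nn.Module):
    def __init__(self, dim=256, num_classes=21):
        super().__init__()
        self.cls_emb = nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, patch_tokens, hw, out_size):
        # patch_tokens: (B, H*W, dim) from a ViT-style encoder.
        masks = patch_tokens @ self.cls_emb.t()              # (B, H*W, K)
        b, _, k = masks.shape
        masks = masks.transpose(1, 2).reshape(b, k, *hw)     # (B, K, H, W)
        return F.interpolate(masks, out_size, mode="bilinear", align_corners=False)

head = ClassEmbeddingMaskHead()
logits = head(torch.randn(2, 14 * 14, 256), (14, 14), (224, 224))
print(logits.shape)  # torch.Size([2, 21, 224, 224])
```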

Multiscale vision transformers

H Fan, B Xiong, K Mangalam, Y Li… - Proceedings of the …, 2021 - openaccess.thecvf.com
We present Multiscale Vision Transformers (MViT) for video and image recognition,
by connecting the seminal idea of multiscale feature hierarchies with transformer models …
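
A hedged sketch of the multiscale idea itself: token grids are progressively pooled between stages so spatial resolution shrinks while channel width grows, echoing CNN-style feature hierarchies. Stage widths, strides, and the 2D grid assumption are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def pool_stage(tokens, hw, proj, stride=2):
    """Downsample a (B, H*W, C) token grid by `stride` and expand channels."""
    b, _, c = tokens.shape
    h, w = hw
    grid = tokens.transpose(1, 2).reshape(b, c, h, w)
    grid = F.avg_pool2d(grid, stride)                 # coarser spatial resolution
    grid = proj(grid)                                 # 1x1 conv: wider channels
    return grid.flatten(2).transpose(1, 2), (h // stride, w // stride)

tokens, hw, dim = torch.randn(2, 56 * 56, 96), (56, 56), 96
for out_dim in (192, 384, 768):                       # deeper stage: fewer tokens, more channels
    proj = torch.nn.Conv2d(dim, out_dim, kernel_size=1)
    tokens, hw = pool_stage(tokens, hw, proj)
    dim = out_dim
    print(tokens.shape, hw)
```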

TransReID: Transformer-based object re-identification

S He, H Luo, P Wang, F Wang, H Li… - Proceedings of the …, 2021 - openaccess.thecvf.com
Extracting robust feature representations is one of the key challenges in object re-identification
(ReID). Although convolutional neural network (CNN)-based methods have …
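
For context, a hedged sketch of a generic transformer-based ReID pipeline, not the paper's specific modules: a ViT-style encoder's class token serves as the identity embedding and gallery images are ranked by cosine similarity. The tiny encoder configuration is an arbitrary assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyViTEmbedder(nn.Module):
    def __init__(self, dim=64, num_patches=49):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, patch_tokens):                   # (B, num_patches, dim)
        b = patch_tokens.size(0)
        x = torch.cat([self.cls.expand(b, -1, -1), patch_tokens], dim=1) + self.pos
        return self.encoder(x)[:, 0]                   # class token as the ReID feature

embedder = TinyViTEmbedder()
query = embedder(torch.randn(1, 49, 64))
gallery = embedder(torch.randn(10, 49, 64))
ranking = F.cosine_similarity(query, gallery).argsort(descending=True)
print(ranking[:5])                                     # top-5 gallery matches
```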

Training data-efficient image transformers & distillation through attention

H Touvron, M Cord, M Douze, F Massa… - International …, 2021 - proceedings.mlr.press
Recently, neural networks purely based on attention were shown to address image
understanding tasks such as image classification. These high-performing vision …
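
A hedged sketch of the distillation-through-attention idea in loss form: besides the class token, a learned distillation token is appended to the sequence, and its head is trained against a teacher's predictions while the class head is trained on ground truth. The hard-label variant and equal weighting below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_token_loss(cls_logits, dist_logits, teacher_logits, targets):
    loss_cls = F.cross_entropy(cls_logits, targets)                      # class-token head
    loss_dist = F.cross_entropy(dist_logits, teacher_logits.argmax(-1))  # distillation-token head
    return 0.5 * loss_cls + 0.5 * loss_dist

# Toy shapes: batch of 8, 10 classes; the two student logit tensors would come
# from separate heads on the class token and the distillation token.
cls_logits, dist_logits = torch.randn(8, 10), torch.randn(8, 10)
teacher_logits, targets = torch.randn(8, 10), torch.randint(0, 10, (8,))
print(distillation_token_loss(cls_logits, dist_logits, teacher_logits, targets))
```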

On lazy training in differentiable programming

L Chizat, E Oyallon, F Bach - Advances in neural …, 2019 - proceedings.neurips.cc
In a series of recent theoretical works, it was shown that strongly over-parameterized neural
networks trained with gradient-based methods could converge exponentially fast to zero …
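
The "lazy" regime referred to here is usually summarized by linearizing the network around its initialization; a sketch of that standard identity, with notation assumed rather than taken from the paper:

```latex
% First-order expansion around the initialization \theta_0:
\[
  f(x;\theta) \;\approx\; f(x;\theta_0)
  + \nabla_\theta f(x;\theta_0)^{\top}\,(\theta - \theta_0),
\]
% so gradient descent on a smooth convex loss effectively fits a linear model
% in the fixed features \nabla_\theta f(x;\theta_0), which can drive the
% training loss to zero exponentially fast while the parameters barely move.
```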

SeMask: Semantically masked transformers for semantic segmentation

J Jain, A Singh, N Orlov, Z Huang, J Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Finetuning a pretrained backbone in the encoder part of an image transformer network has
been the traditional approach for the semantic segmentation task. However, such an …
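
An illustrative interpretation only, not the paper's exact design: one way to add semantic context inside an encoder is to predict coarse per-class maps from intermediate features and use them to modulate those features, supervising the coarse maps with an auxiliary loss. Module names and shapes below are assumptions.

```python
import torch
import torch.nn as nn

class SemanticContextLayer(nn.Module):
    def __init__(self, dim, num_classes):
        super().__init__()
        self.to_classes = nn.Conv2d(dim, num_classes, kernel_size=1)
        self.from_classes = nn.Conv2d(num_classes, dim, kernel_size=1)

    def forward(self, feats):                          # feats: (B, C, H, W)
        sem = self.to_classes(feats)                   # coarse class maps (auxiliary supervision)
        feats = feats + self.from_classes(sem.softmax(dim=1))
        return feats, sem

layer = SemanticContextLayer(dim=96, num_classes=21)
feats, sem = layer(torch.randn(2, 96, 32, 32))
print(feats.shape, sem.shape)                          # (2, 96, 32, 32) (2, 21, 32, 32)
```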

Salvaging federated learning by local adaptation

T Yu, E Bagdasaryan, V Shmatikov - arXiv preprint arXiv:2002.04758, 2020 - arxiv.org
Federated learning (FL) is a heavily promoted approach for training ML models on sensitive
data, e.g., text typed by users on their smartphones. FL is expressly designed for training on …
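
A hedged sketch of the overall recipe the abstract points toward: train a global model with federated averaging, then let each client adapt its own copy locally (plain fine-tuning here; the paper discusses other adaptation mechanisms). The toy model, data, and hyperparameters are assumptions.

```python
import copy
import torch
import torch.nn as nn

def fedavg_round(global_model, client_loaders, lr=0.01, local_steps=5):
    states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _, (x, y) in zip(range(local_steps), loader):
            opt.zero_grad()
            nn.functional.cross_entropy(local(x), y).backward()
            opt.step()
        states.append(local.state_dict())
    # Average the clients' parameters into the global model.
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

def local_adaptation(global_model, loader, lr=0.001, steps=20):
    adapted = copy.deepcopy(global_model)              # per-client personalized copy
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _, (x, y) in zip(range(steps), loader):
        opt.zero_grad()
        nn.functional.cross_entropy(adapted(x), y).backward()
        opt.step()
    return adapted

# Toy usage: two clients, each with one small synthetic batch.
model = nn.Linear(10, 2)
clients = [[(torch.randn(8, 10), torch.randint(0, 2, (8,)))] for _ in range(2)]
model = fedavg_round(model, clients)
personalized = [local_adaptation(model, data) for data in clients]
```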