Neural architecture search: A contemporary literature review for computer vision applications
M Poyser, TP Breckon - Pattern Recognition, 2024 - Elsevier
Abstract: Deep Neural Networks have received considerable attention in recent years. As the
complexity of network architecture increases in relation to task complexity, it becomes …
ViTAS: Vision transformer architecture search
Vision transformers (ViTs) have inherited the success of NLP, but their structures have not been
sufficiently investigated or optimized for visual tasks. One of the simplest solutions is to …
Quantum circuit architecture search for variational quantum algorithms
Variational quantum algorithms (VQAs) are expected to be a path to quantum advantages
on noisy intermediate-scale quantum devices. However, both empirical and theoretical …
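To make the VQA setting concrete, here is a minimal, self-contained sketch (pure NumPy, no quantum SDK) of what such a search optimizes: each candidate circuit is a sequence of parameterized rotations, its parameters are trained with the parameter-shift rule, and the candidate reaching the lowest trained energy wins. The gate pool, depths, and scoring are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Single-qubit rotation gates as 2x2 unitaries.
def rx(t): return np.array([[np.cos(t/2), -1j*np.sin(t/2)],
                            [-1j*np.sin(t/2), np.cos(t/2)]])
def ry(t): return np.array([[np.cos(t/2), -np.sin(t/2)],
                            [np.sin(t/2),  np.cos(t/2)]])

Z = np.diag([1.0, -1.0])  # observable whose expectation we minimize

def energy(gates, thetas):
    """Apply a candidate circuit to |0> and return <Z>."""
    state = np.array([1.0, 0.0], dtype=complex)
    for gate, theta in zip(gates, thetas):
        state = gate(theta) @ state
    return float(np.real(state.conj() @ Z @ state))

def train(gates, steps=200, lr=0.2, seed=0):
    """Optimize circuit parameters with the parameter-shift rule."""
    thetas = np.random.default_rng(seed).uniform(0, 2*np.pi, len(gates))
    for _ in range(steps):
        for i in range(len(thetas)):
            plus, minus = thetas.copy(), thetas.copy()
            plus[i] += np.pi/2
            minus[i] -= np.pi/2
            thetas[i] -= lr * 0.5 * (energy(gates, plus) - energy(gates, minus))
    return energy(gates, thetas)

# Architecture search reduced to its core: score each candidate gate
# sequence by its trained energy and keep the best one.
candidates = {"ry": [ry], "rx-ry": [rx, ry], "ry-rx-ry": [ry, rx, ry]}
scores = {name: train(gates) for name, gates in candidates.items()}
print(min(scores, key=scores.get), scores)
```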
Agree to disagree: Adaptive ensemble knowledge distillation in gradient space
Distilling knowledge from an ensemble of teacher models is expected to yield better
performance than distilling from a single one. Current methods mainly adopt a vanilla …
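As a rough illustration of ensemble distillation with non-uniform teacher weights, the PyTorch sketch below replaces vanilla averaging with per-batch weights derived from each teacher's KL divergence to the student. The paper derives its weights from a multi-objective formulation in gradient space, so treat this weighting rule as a hypothetical stand-in.

```python
import torch
import torch.nn.functional as F

def ensemble_kd_loss(student_logits, teacher_logits_list, T=4.0):
    """Distill from several teachers with adaptive, per-batch weights
    instead of the vanilla uniform average over teachers."""
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    kls = torch.stack([
        F.kl_div(log_p_s, F.softmax(t / T, dim=-1), reduction="batchmean")
        for t in teacher_logits_list
    ])
    # Favor teachers whose soft labels the student already tracks well.
    weights = F.softmax(-kls.detach(), dim=0)
    return (weights * kls).sum() * T * T

# Toy usage: 3 teachers, batch of 8, 10 classes.
student = torch.randn(8, 10, requires_grad=True)
teachers = [torch.randn(8, 10) for _ in range(3)]
loss = ensemble_kd_loss(student, teachers)
loss.backward()
print(loss.item(), student.grad.shape)
```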
NAS-OOD: Neural architecture search for out-of-distribution generalization
Recent advances in Out-of-Distribution (OoD) generalization reveal the robustness of deep
learning models against distribution shifts. However, existing works focus on OoD …
Prioritized architecture sampling with monto-carlo tree search
One-shot neural architecture search (NAS) methods significantly reduce the search cost by
considering the whole search space as one network, which only needs to be trained once …
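A minimal sketch of Monte-Carlo tree search over a layer-wise operation space: each tree level fixes one layer's operation via the UCT rule, and the reward stands in for the one-shot supernet's validation score of the sampled path. The operation pool, depth, and reward function are placeholder assumptions.

```python
import math, random

OPS = ["conv3x3", "conv5x5", "skip", "maxpool"]
DEPTH = 4  # number of layers to decide

def reward(arch):
    # Hypothetical stand-in for scoring a sampled path on validation data
    # with the (already trained) one-shot supernet.
    random.seed(hash(tuple(arch)) % (2**32))
    return random.random()

class Node:
    def __init__(self):
        self.n, self.w = 0, 0.0
        self.children = {}  # op name -> Node

def uct(parent, child, c=1.4):
    if child.n == 0:
        return float("inf")  # always try unvisited operations first
    return child.w / child.n + c * math.sqrt(math.log(parent.n) / child.n)

def simulate(root):
    node, arch, path = root, [], [root]
    for _ in range(DEPTH):
        for op in OPS:
            node.children.setdefault(op, Node())
        op = max(OPS, key=lambda o: uct(node, node.children[o]))
        arch.append(op)
        node = node.children[op]
        path.append(node)
    r = reward(arch)
    for n in path:  # backpropagate visit counts and value
        n.n += 1
        n.w += r
    return arch, r

root = Node()
best = max((simulate(root) for _ in range(500)), key=lambda x: x[1])
print("best sampled architecture:", best)
```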
Towards improving the consistency, efficiency, and flexibility of differentiable neural architecture search
Most differentiable neural architecture search methods construct a super-net for search and
derive a target-net as its sub-graph for evaluation. There exists a significant gap between the …
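The super-net/target-net gap is easy to see in code. In the generic DARTS-style sketch below, a mixed edge sums candidate operations under softmax weights during search, then keeps only the argmax operation at derivation time; the difference between the two outputs is exactly the inconsistency such methods try to close. This illustrates the problem setup, not this paper's algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Super-net edge: a softmax-weighted sum of candidate operations."""
    def __init__(self, ch):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.Conv2d(ch, ch, 5, padding=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x, discrete=False):
        if discrete:
            # Target-net: keep only the strongest operation (argmax).
            return self.ops[int(self.alpha.argmax())](x)
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

edge = MixedOp(8)
x = torch.randn(2, 8, 16, 16)
# The gap in question: the evaluated sub-graph's output differs from the
# super-net output the architecture weights were optimized through.
gap = (edge(x) - edge(x, discrete=True)).abs().mean()
print(gap.item())
```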
Gradient descent effects on differential neural architecture search: A survey
Gradient Descent, an effective way to search for the local minimum of a function, can
minimize the training and validation loss of neural architectures and can also be applied in an …
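In differentiable NAS the gradient descent in question is bilevel: network weights descend the training loss while architecture parameters descend the validation loss. A toy first-order alternation on hypothetical quadratic losses conveys the mechanism:

```python
import torch

# First-order alternation in the style of DARTS: weights w follow the
# training loss, architecture parameters alpha follow the validation loss.
# Both losses are made-up quadratics chosen so the scheme converges.
w = torch.zeros(2, requires_grad=True)
alpha = torch.zeros(2, requires_grad=True)
target = torch.tensor([1.0, -1.0])

def train_loss():   # w is pulled toward the current architecture
    return ((w - alpha) ** 2).sum()

def valid_loss():   # alpha is steered by held-out performance
    return ((w + alpha - target) ** 2).sum()

w_opt = torch.optim.SGD([w], lr=0.1)
a_opt = torch.optim.SGD([alpha], lr=0.1)
for _ in range(500):
    w_opt.zero_grad(); train_loss().backward(); w_opt.step()
    a_opt.zero_grad(); valid_loss().backward(); a_opt.step()
print(w.detach(), alpha.detach())  # both settle near target / 2
```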
K-shot NAS: Learnable weight-sharing for NAS with k-shot supernets
In one-shot weight sharing for NAS, the weights of each operation (at each layer) are
supposed to be identical for all architectures (paths) in the supernet. However, this rules out …
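A sketch of the k-shot idea under simplifying assumptions: instead of one weight shared by every path, keep k copies per operation and let each candidate path mix them with its own learnable simplex code (a softmax here). The layer type, dimensions, k, and path indexing are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KShotLinear(nn.Module):
    """One operation with k weight copies; each sampled path combines
    them with a path-specific code instead of sharing a single weight."""
    def __init__(self, d_in, d_out, k=4, num_paths=8):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(k, d_out, d_in) * 0.02)
        # One learnable code per candidate path, kept on the simplex.
        self.codes = nn.Parameter(torch.zeros(num_paths, k))

    def forward(self, x, path_id):
        lam = F.softmax(self.codes[path_id], dim=0)        # (k,)
        w = torch.einsum("k,koi->oi", lam, self.weights)   # path-specific weight
        return x @ w.t()

layer = KShotLinear(16, 32, k=4, num_paths=8)
x = torch.randn(5, 16)
y0, y3 = layer(x, 0), layer(x, 3)
print(y0.shape, (y0 - y3).abs().mean().item())  # paths are no longer tied to one weight
```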
ZARTS: On zero-order optimization for neural architecture search
Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS
due to its high efficiency. It introduces trainable architecture parameters to represent the …
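The zero-order alternative treats the validation loss as a black box and estimates a descent direction for the architecture parameters from function evaluations alone, with no backward pass through the supernet. The sketch below uses a Gaussian-smoothing two-point estimator on a hypothetical quadratic; ZARTS itself uses more elaborate sampling strategies, so this only conveys the mechanism.

```python
import torch

def val_loss(alpha):
    # Hypothetical black box: in a ZARTS-style search this would be the
    # supernet's validation loss after adapting its weights to alpha.
    target = torch.tensor([2.0, -1.0, 0.5])
    return ((alpha - target) ** 2).sum()

def zo_grad(f, alpha, mu=1e-2, samples=16):
    """Two-point zero-order gradient estimate: no autograd through f."""
    g = torch.zeros_like(alpha)
    for _ in range(samples):
        u = torch.randn_like(alpha)
        g += (f(alpha + mu * u) - f(alpha - mu * u)) / (2 * mu) * u
    return g / samples

alpha = torch.zeros(3)
for step in range(200):
    alpha -= 0.05 * zo_grad(val_loss, alpha)
print(alpha)  # approaches the minimizer without any backward pass
```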