Weight-sharing neural architecture search: A battle to shrink the optimization gap
Neural architecture search (NAS) has attracted increasing attention. In recent years,
individual search methods have been replaced by weight-sharing search methods for higher …
Neural architecture transfer
Neural architecture search (NAS) has emerged as a promising avenue for automatically
designing task-specific neural networks. Existing NAS approaches require one complete …
muNet: Evolving pretrained deep neural networks into scalable auto-tuning multitask systems
A Gesmundo, J Dean - arXiv preprint arXiv:2205.10937, 2022 - arxiv.org
Most uses of machine learning today involve training a model from scratch for a particular
task, or sometimes starting with a model pretrained on a related task and then fine-tuning on …
Two-stage evolutionary neural architecture search for transfer learning
Convolutional neural networks (CNNs) have achieved state-of-the-art performance in many
image classification tasks. However, training a deep CNN requires a massive amount of …
A continual development methodology for large-scale multitask dynamic ML systems
A Gesmundo - arXiv preprint arXiv:2209.07326, 2022 - arxiv.org
The traditional Machine Learning (ML) methodology requires fragmenting the development
and experimental process into disconnected iterations whose feedback is used to guide …
HyperSTAR: Task-aware hyperparameters for deep networks
While deep neural networks excel in solving visual recognition tasks, they require significant
effort to find hyperparameters that make them work optimally. Hyperparameter Optimization …
AutoML using metadata language embeddings
As a human choosing a supervised learning algorithm, it is natural to begin by reading a text
description of the dataset and documentation for the algorithms you might use. We …
A framework for exploring and modeling neural architecture search methods
P Radiuk, N Hrypynska - 2020 - elar.khmnu.edu.ua
For the past few years, many researchers and engineers have been developing and
optimising deep neural networks (DNN). The process of neural architecture design and …
HyperSTAR: Task-Aware Hyperparameter Recommendation for Training and Compression
Hyperparameter optimization (HPO) methods alleviate the significant effort required to
obtain hyperparameters that perform optimally on visual learning problems. Existing …
Multipath agents for modular multitask ML systems
A Gesmundo - arXiv preprint arXiv:2302.02721, 2023 - arxiv.org
A standard ML model is commonly generated by a single method that specifies aspects such
as architecture, initialization, training data and hyperparameter configuration. The …