Authors
Alberto Marchisio, Muhammad Abdullah Hanif, Semeen Rehman, Maurizio Martina, Muhammad Shafique
Publication date
2018/10/27
Journal
arXiv preprint arXiv:1811.03980
Description
Activation functions influence the behavior and performance of DNNs. Nonlinear activation functions, such as Rectified Linear Units (ReLU), Exponential Linear Units (ELU) and Scaled Exponential Linear Units (SELU), outperform their linear counterparts. However, selecting an appropriate activation function is a challenging problem, as it affects both the accuracy and the complexity of the given DNN. In this paper, we propose a novel methodology to automatically select the best-possible activation function for each layer of a given DNN, such that the overall DNN accuracy is improved compared to using a single type of activation function for the whole DNN. Exhaustively exploring all the different configurations of activation functions, however, would be time- and resource-consuming. To address this, our methodology identifies Evaluation Points during learning, at which the accuracy is measured at an intermediate step of training and early termination is performed by checking the accuracy gradient of the learning curve. This significantly reduces the exploration time during training. Moreover, our methodology selects, for each layer, the dropout rate that optimizes the accuracy. Experiments show that, on average, we achieve a 7% to 15% Relative Error Reduction on the MNIST, CIFAR-10 and CIFAR-100 benchmarks, with a limited performance and power penalty on GPUs.
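The description outlines a per-layer search over activation functions with early termination at intermediate Evaluation Points. The sketch below illustrates the general idea only and is not the authors' implementation: it assumes a small Keras MLP on MNIST, a hypothetical evaluation epoch (EVAL_EPOCH), a hypothetical accuracy-gradient threshold (MIN_ACC_SLOPE), and a fixed dropout rate, whereas the paper also searches the per-layer dropout rate.

import itertools
import tensorflow as tf

CANDIDATES = ["relu", "elu", "selu"]  # activation functions named in the description
EVAL_EPOCH = 3                        # hypothetical Evaluation Point (epoch index)
MIN_ACC_SLOPE = 0.005                 # hypothetical accuracy-gradient threshold

class EarlyTermination(tf.keras.callbacks.Callback):
    # Stop training at the Evaluation Point if the validation-accuracy
    # slope (gradient of the learning curve) is below the threshold.
    def __init__(self):
        super().__init__()
        self.val_acc = []

    def on_epoch_end(self, epoch, logs=None):
        self.val_acc.append(logs.get("val_accuracy", 0.0))
        if epoch + 1 == EVAL_EPOCH and len(self.val_acc) >= 2:
            if self.val_acc[-1] - self.val_acc[-2] < MIN_ACC_SLOPE:
                self.model.stop_training = True

def build_model(activations, dropout_rate=0.2):
    # Two hidden layers, each with its own (possibly different) activation.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation=activations[0]),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(128, activation=activations[1]),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

def search(x_train, y_train, x_val, y_val, epochs=10):
    # Try one activation choice per hidden layer; keep the best validation accuracy.
    best_combo, best_acc = None, 0.0
    for combo in itertools.product(CANDIDATES, repeat=2):
        model = build_model(combo)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        hist = model.fit(x_train, y_train, epochs=epochs, batch_size=128,
                         validation_data=(x_val, y_val),
                         callbacks=[EarlyTermination()], verbose=0)
        acc = max(hist.history["val_accuracy"])
        if acc > best_acc:
            best_combo, best_acc = combo, acc
    return best_combo, best_acc

if __name__ == "__main__":
    (x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
    x_tr, x_te = x_tr / 255.0, x_te / 255.0
    print(search(x_tr, y_tr, x_te, y_te))

A full exploration would also vary the dropout rate per layer and tune the evaluation epoch and threshold, which is where the reported exploration-time savings come from.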
Total citations
[Citations-per-year chart, 2020–2023]