Hyper-parameter tuning of a decision tree induction algorithm

R. G. Mantovani, T. Horváth, R. Cerri, et al. - 2016 5th Brazilian Conference on Intelligent Systems (BRACIS), 2016 - ieeexplore.ieee.org
Supervised classification is the most studied task in Machine Learning. Among the many algorithms used for this task, Decision Tree algorithms are a popular choice, since they are robust and efficient to construct. Moreover, they have the advantage of producing comprehensible models and satisfactory accuracy levels in several application domains. Like most Machine Learning methods, these algorithms have some hyper-parameters whose values directly affect the performance of the induced models. Due to the large number of possible hyper-parameter values, several studies use optimization techniques to find a good set of solutions and produce classifiers with good predictive performance. This study investigates how sensitive decision trees are to a hyper-parameter optimization process. Four different tuning techniques were explored to adjust the hyper-parameters of the J48 Decision Tree algorithm. In total, experiments using 102 heterogeneous datasets analyzed the effect of tuning on the induced models. The experimental results show that, even though the average improvement over all datasets is low, in most cases the improvement is statistically significant.
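
The sketch below illustrates the kind of hyper-parameter tuning the abstract describes: searching over a decision tree's hyper-parameters and comparing the tuned model against the default configuration via cross-validation. It is a minimal example, not the paper's setup: the study tunes Weka's J48 (C4.5) with four tuning techniques over 102 datasets, whereas this sketch uses scikit-learn's CART-based DecisionTreeClassifier, a single random-search budget, and one toy dataset; the chosen hyper-parameter ranges are assumptions.

```python
# Minimal sketch: random-search tuning of a decision tree vs. its default
# hyper-parameters. Assumes scikit-learn's CART implementation, not the
# J48/C4.5 algorithm used in the paper; ranges below are illustrative.
from scipy.stats import randint, uniform
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Search space over hyper-parameters that control tree size and pruning.
param_distributions = {
    "min_samples_leaf": randint(1, 50),
    "max_depth": randint(2, 30),
    "ccp_alpha": uniform(0.0, 0.05),
    "criterion": ["gini", "entropy"],
}

search = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_distributions,
    n_iter=50,   # budget of evaluated configurations
    cv=5,        # cross-validation used to score each configuration
    random_state=0,
)
search.fit(X, y)

# Compare tuned vs. default hyper-parameters, analogous to the per-dataset
# comparison reported in the study.
default_score = cross_val_score(
    DecisionTreeClassifier(random_state=0), X, y, cv=5
).mean()
print("default CV accuracy:", round(default_score, 4))
print("tuned   CV accuracy:", round(search.best_score_, 4))
print("best hyper-parameters:", search.best_params_)
```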