A new fusion of grey wolf optimizer algorithm with a two-phase mutation for feature selection

M. Abdel-Basset, D. El-Shahat, I. El-Henawy, V. H. C. de Albuquerque, S. Mirjalili
Expert Systems with Applications, 2020, Elsevier
Abstract
Because of their high dimensionality, large datasets can hinder the data mining process. Feature selection is therefore a mandatory preprocessing phase that reduces the dimensionality of a dataset by retaining the most informative features while maximizing classification accuracy. This paper proposes a new Grey Wolf Optimizer algorithm integrated with a two-phase mutation to solve the feature selection problem for classification using wrapper methods. The sigmoid function is used to transform the continuous search space into a binary one, matching the binary nature of the feature selection problem. The two-phase mutation enhances the exploitation capability of the algorithm. The first mutation phase aims to reduce the number of selected features while preserving high classification accuracy; the second attempts to add more informative features that increase the classification accuracy. As the mutation phases can be time-consuming, the two-phase mutation is applied with a small probability. Because wrapper methods can give high-quality solutions, we use one of the most popular of them, the k-Nearest Neighbor (k-NN) classifier, with the Euclidean distance used to find the k nearest neighbors. Each dataset is split into training and testing data using K-fold cross-validation to mitigate overfitting. Comparisons are made with well-known and recent algorithms, including the flower algorithm, particle swarm optimization, the multi-verse optimizer, the whale optimization algorithm, and the bat algorithm, over 35 datasets. Statistical analyses confirm the effectiveness of the proposed algorithm and its superior performance.
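For concreteness, the sigmoid transfer step described above can be sketched as follows. This is a minimal illustration of the thresholding rule commonly used in binary swarm optimizers, not necessarily the paper's exact formulation; the function name `binarize` is ours.

```python
import numpy as np

def binarize(position, rng):
    """Map a continuous wolf position to a binary feature mask using
    the sigmoid transfer function S(x) = 1 / (1 + exp(-x))."""
    prob = 1.0 / (1.0 + np.exp(-position))      # per-feature selection probability
    return (rng.random(position.shape) < prob).astype(int)

# Example: binarize a 5-dimensional continuous position
rng = np.random.default_rng(0)
mask = binarize(rng.normal(size=5), rng)        # binary mask, one bit per feature
```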
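The two-phase mutation description suggests a greedy bit-flipping scheme; the sketch below is one plausible reading, assuming phase 1 keeps a flip only when the fitness does not degrade and phase 2 only on a strict improvement. The `fitness` callback and both acceptance rules are assumptions, not the authors' exact procedure.

```python
import numpy as np

def two_phase_mutation(mask, fitness, rng, p_mut=0.1):
    """Assumed greedy sketch of the two-phase mutation; `fitness`
    returns a higher-is-better classification score."""
    if rng.random() >= p_mut:                    # mutate with small probability only
        return mask
    best = fitness(mask)
    for j in np.flatnonzero(mask == 1):          # phase 1: try dropping features
        trial = mask.copy()
        trial[j] = 0
        score = fitness(trial)
        if score >= best:                        # fewer features, accuracy preserved
            mask, best = trial, score
    for j in np.flatnonzero(mask == 0):          # phase 2: try adding features
        trial = mask.copy()
        trial[j] = 1
        score = fitness(trial)
        if score > best:                         # keep only strict accuracy gains
            mask, best = trial, score
    return mask
```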
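A wrapper fitness of this kind could be evaluated as below, assuming scikit-learn's k-NN classifier with the Euclidean metric and K-fold cross-validation. Wrapper fitness functions in this literature often also penalize subset size; this sketch scores accuracy only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_fitness(mask, X, y, k=5, folds=10):
    """Wrapper fitness: mean K-fold accuracy of a Euclidean k-NN
    classifier restricted to the selected feature subset."""
    if mask.sum() == 0:                          # an empty subset cannot classify
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
    return cross_val_score(clf, X[:, mask == 1], y, cv=folds).mean()
```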