A Parallel Hybrid Feature Selection Approach Based on Multi-Correlation and Evolutionary Multitasking

Mohamed Amine Azaiz, Djamel Amar Bensaber
Copyright © 2023 | Pages: 23
DOI: 10.4018/IJGHPC.320475

Abstract

Particle swarm optimization (PSO) has been successfully applied to feature selection (FS) due to its efficiency and ease of implementation. Like most evolutionary algorithms, however, it still suffers from a high computational burden and poor generalization ability. Multifactorial optimization (MFO), an effective evolutionary multitasking paradigm, has been widely used to solve complex problems through implicit knowledge transfer between related tasks. Based on MFO, this study proposes a PSO-based FS method for high-dimensional classification that shares information between two related tasks generated from a dataset using two different measures of correlation. Specifically, two subsets of relevant features are generated using the symmetric uncertainty measure and the Pearson correlation coefficient, and each subset is assigned to one task. To improve runtime, the authors propose a parallel fitness evaluation of particles under Apache Spark. The results show that the proposed FS method can achieve higher classification accuracy with a smaller feature subset in reasonable time.
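As a minimal illustration of the parallel fitness evaluation under Apache Spark mentioned above, the sketch below scores each particle's feature mask in a separate Spark task. The binary-mask encoding and the k-NN wrapper fitness are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of parallel fitness evaluation under Apache Spark.
# The encoding (one binary mask per particle) and the k-NN wrapper
# fitness are illustrative assumptions, not the paper's exact method.
import numpy as np
from pyspark import SparkContext
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Wrapper fitness: cross-validated accuracy of k-NN on the selected features."""
    if mask.sum() == 0:          # an empty subset gets the worst score
        return 0.0
    X_sel = X[:, mask.astype(bool)]
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X_sel, y, cv=3).mean()

def evaluate_swarm(sc, swarm, X, y):
    """Evaluate all particles in parallel; one Spark task per particle."""
    X_b, y_b = sc.broadcast(X), sc.broadcast(y)   # ship the data to executors once
    return (sc.parallelize(swarm, numSlices=len(swarm))
              .map(lambda mask: fitness(mask, X_b.value, y_b.value))
              .collect())

if __name__ == "__main__":
    sc = SparkContext(appName="parallel-fs-fitness")
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))                # toy data: 200 samples, 50 features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    swarm = [rng.integers(0, 2, size=50) for _ in range(20)]  # 20 binary particles
    print(evaluate_swarm(sc, swarm, X, y))
    sc.stop()
```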

Introduction

Big data has contributed to the complexity of analysis algorithms by increasing the dimensionality of data. Feature selection is a pre-processing step in the classification process that aims to reduce the dimensionality of the data in order to improve learning performance (Rong et al., 2019). Researchers have presented different approaches to selecting features (Jović et al., 2015), which can be grouped into filter-based, wrapper-based, embedded-based, and hybrid-based approaches.

Filter methods use statistical tests to select features based on their individual contribution to the prediction task; examples include chi-squared tests, correlation-based feature selection, and mutual information-based feature selection. Many filter methods rely on concepts from mathematical statistics to measure the degree of correlation (or statistical independence) between different features, as well as between features and the target class. A feature X is redundant if it has a strong correlation with another feature Y (Yu et al., 2003); in this case, one of the two features can be dropped, since the information carried by X can be inferred from Y. An irrelevant feature is one that has weak or no correlation with the target class (Song et al., 2013); this type of feature can also be dropped, because it negatively affects predictive accuracy. Correlation-based filter methods are characterized by fast execution, but good results are not always guaranteed, because there is no single, comprehensive definition of statistical correlation. For example, two variables that are not linearly correlated are not necessarily independent: they may be non-linearly correlated. This is why our approach uses multi-correlation, to avoid discarding features that contain important information.

Wrapper-based approaches search through the feature space by using a learning algorithm to evaluate selected feature subsets; they usually produce high-quality results but are hampered by their complexity and execution time. Embedded methods integrate feature selection into the training process of a machine learning algorithm; examples include Lasso regression and random forest feature importance. Hybrid methods combine aspects of filter, wrapper, and embedded methods to obtain fast execution together with high-quality results; an example is feature selection combining recursive feature elimination with an SVM.
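To make the two notions of correlation concrete, the sketch below contrasts the Pearson coefficient, which captures only linear correlation, with symmetric uncertainty (SU), an entropy-based measure defined as SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)) that also detects nonlinear dependence. The quantile-based discretization applied before the entropy computation is an assumption of this sketch.

```python
# Illustrative computation of the two correlation measures discussed above.
# Binning continuous variables into quantiles before the entropy-based
# measure is an assumption made for this sketch.
import numpy as np

def entropy(values):
    """Shannon entropy of a discrete variable, in bits."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    h_x, h_y = entropy(x), entropy(y)
    # Joint entropy from the pairing of the two discrete variables
    joint = np.array([f"{a}|{b}" for a, b in zip(x, y)])
    mi = h_x + h_y - entropy(joint)   # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return 2.0 * mi / (h_x + h_y) if (h_x + h_y) > 0 else 0.0

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=1000)
y_linear = x + 0.1 * rng.normal(size=1000)
y_nonlin = x**2 + 0.1 * rng.normal(size=1000)   # dependent, but not linearly

# Discretize into 10 quantile bins for the entropy-based measure
bins = lambda v: np.digitize(v, np.quantile(v, np.linspace(0, 1, 11)[1:-1]))

print(np.corrcoef(x, y_linear)[0, 1])                  # high Pearson correlation
print(np.corrcoef(x, y_nonlin)[0, 1])                  # near-zero Pearson correlation
print(symmetric_uncertainty(bins(x), bins(y_nonlin)))  # clearly positive SU
```

The nonlinear case shows exactly the failure mode described above: a near-zero Pearson coefficient would discard the feature even though SU reveals a strong dependence.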

Feature selection is the process of finding a subset of features in a large search space, which makes it a combinatorial optimization problem. In such cases, evolutionary algorithms are among the most effective solutions. Particle swarm optimization (PSO) (Kennedy et al., 1995), a population-based search algorithm, has been widely applied to feature selection problems because it is easy to implement and has strong global search ability. The binary PSO algorithm (Kennedy et al., 1997) is a variant of PSO designed for discrete optimization problems. Despite their achievements, evolutionary algorithms require massive computational resources to guarantee convergence compared to other optimization methods. Multitasking evolutionary optimization (Gupta et al., 2017) is a recent approach based on sharing and disseminating knowledge between different but related problems (tasks) to help solve them during the search process. Multifactorial optimization (MFO) is the evolutionary multitasking paradigm introduced by Gupta et al. (Gupta et al., 2017). Multifactorial PSO (MFPSO) (Feng et al., 2017) was proposed under the MFO paradigm; it effectively shares knowledge across related tasks through the operators of assortative mating and vertical cultural transmission during the search process.
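As a reference point for the binary variant, here is a minimal single-task binary PSO sketch for feature selection, without the multifactorial knowledge-transfer operators of MFPSO: each bit of a particle's position is set to 1 with probability given by the sigmoid of its velocity. The hyperparameters and the toy fitness function are illustrative assumptions.

```python
# Minimal binary PSO sketch for feature selection (single task).
# Hyperparameters (w, c1, c2) and the toy fitness are assumptions;
# the paper's MFPSO additionally transfers knowledge across two tasks.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def binary_pso(fitness, n_features, n_particles=30, n_iters=100, seed=0,
               w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, size=(n_particles, n_features))   # binary positions
    vel = rng.normal(scale=0.1, size=(n_particles, n_features))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(n_iters):
        r1 = rng.random((n_particles, n_features))
        r2 = rng.random((n_particles, n_features))
        # Standard velocity update toward personal and global bests
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # Sigmoid transfer: each bit becomes 1 with probability sigmoid(v)
        pos = (rng.random((n_particles, n_features)) < sigmoid(vel)).astype(int)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest, pbest_fit.max()

# Toy fitness: reward selecting the first 5 features, penalize subset size.
best_mask, best_score = binary_pso(lambda m: m[:5].sum() - 0.1 * m.sum(),
                                   n_features=20)
print(best_mask, best_score)
```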
