Target Sentiment Analysis Ensemble for Product Review Classification

Rhoda Viviane Achieng Ogutu, Richard M. Rimiru, Calvins Otieno
Copyright: © 2022 | Pages: 13
DOI: 10.4018/JITR.299382

Abstract

Machine learning can be used to give systems the ability to automatically learn and improve from experience without being explicitly programmed. It is fundamentally a multidisciplinary field that draws on results from artificial intelligence, probability and statistics, and information theory, among other fields. Ensemble methods are techniques that can be used to improve the predictive ability of a machine learning model. An ensemble comprises individually trained classifiers whose predictions are combined when classifying new instances. Some of the currently popular ensemble methods include Boosting, Bagging, and Stacking. In this paper, we review these methods and demonstrate why ensembles can often perform better than single models. Additionally, some new experiments are presented to demonstrate the computational ability of the Stacking approach.

Introduction

There are various methods of constructing ensembles with different classifiers for data science and analysis. However, as Wolpert and Macready (1997) put it, different learning methods can be developed to solve different classification problems. Generally speaking, according to Seijo-Pardo, Porto-Díaz, Bolón-Canedo, and Alonso-Betanzos (2017), there are two kinds of ensembles: homogeneous and heterogeneous. Homogeneous ensembles are ensembles in which all classifiers are of the same type or family, while heterogeneous ensembles are ensembles in which the classifiers are of different types or families, that is, diverse (Abuassba, Zhang, Luo, Shaheryar, & Ali, 2017).
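
To make the distinction concrete, the following minimal sketch contrasts the two kinds of ensembles. The scikit-learn library and the synthetic dataset are our own illustrative assumptions and do not appear in the original paper; a homogeneous ensemble is represented by a random forest (all members are decision trees), a heterogeneous one by voting over diverse classifier families.

# Illustrative sketch only: scikit-learn and the synthetic data are assumed,
# not taken from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Homogeneous ensemble: every member is a decision tree.
homogeneous = RandomForestClassifier(n_estimators=100, random_state=42)

# Heterogeneous ensemble: members come from different model families.
heterogeneous = VotingClassifier(estimators=[
    ("tree", DecisionTreeClassifier(random_state=42)),
    ("nb", GaussianNB()),
    ("lr", LogisticRegression(max_iter=1000)),
])

for name, model in [("homogeneous", homogeneous), ("heterogeneous", heterogeneous)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())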

Rokach (2005) explains that several factors can be used to differentiate ensemble methods along different dimensions. These include the inter-classifier relationship, the classifier combining methods, the classifier diversity generator, and the ensemble size. However, classification can be based on two main dimensions: the inter-classifier relationship and the combining methods (Araque, Corcuera-Platas, Sánchez-Rada, & Iglesias, 2017). The inter-classifier relationship refers to the way the learning process is carried out, which may be Sequential or Concurrent (Araque et al., 2017).

In Sequential approaches, there is an interaction between the learning runs, so knowledge generated in previous iterations can be exploited to guide the learning in subsequent iterations. This approach includes techniques such as Model-guided Instance Selection, where the classifiers constructed in previous iterations are used to manipulate the training set for the next iteration; model-guided algorithms of this kind include Boosting and Uncertainty Sampling, among others. Sequential approaches also include Incremental Batch Learning, where the classifier produced in one iteration is given as "prior knowledge" to the learning algorithm in the following iteration (along with the subsample of that iteration). The learning algorithm uses the current subsample to evaluate the former classifier, and uses the former classifier when building the next one. The classifier constructed at the last iteration is chosen as the final classifier (Rokach, 2005).

In Concurrent ensemble approaches, on the other hand, the original dataset is partitioned into several subsets from which multiple classifiers are induced concurrently. The subsets may be disjoint (mutually exclusive) or overlapping, after which a combining procedure is applied in order to produce a single classification for a given instance. Since the method for combining the results of the induced classifiers is usually independent of the induction algorithms, it can be used with a different inducer for each subset. Concurrent methods aim either at improving the predictive power of the classifiers or at decreasing the total execution time. Algorithms that follow this approach include Bagging and Cross-validated Committees, among others (Rokach, 2005). The sketch below illustrates the contrast between the two relationship types.
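
The following sketch, again assuming scikit-learn and synthetic data (neither is part of the original paper), contrasts AdaBoost, a sequential, model-guided method that reweights the instances earlier classifiers misclassified, with Bagging, a concurrent method whose members are induced independently on bootstrap subsamples.

# Illustrative sketch only: scikit-learn and the synthetic data are assumed.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Sequential: each round upweights the training instances that the previous
# classifiers misclassified (model-guided instance selection).
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

# Concurrent: base classifiers are induced independently on bootstrap
# subsamples, so in principle they can be trained in parallel.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0)

print("boosting:", cross_val_score(boosting, X, y, cv=5).mean())
print("bagging: ", cross_val_score(bagging, X, y, cv=5).mean())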

The second dimension, Combining Methods, includes simple multiple-classifier combination and Meta-combining methods. The simple combining techniques are best suited for classification problems where the individual classifiers perform the same task and have comparable success; however, these combiners are more vulnerable to outliers and to unevenly performing classifiers. They include techniques such as Uniform Voting, where each classifier has the same weight and an unlabelled instance is assigned the class that obtains the highest number of votes (Rokach, 2005). Similarly, in Majority Voting every classifier makes a prediction (votes) for each test instance, and the final output is the prediction that receives more than half of the votes; if no prediction receives more than half of the votes, the ensemble method is considered unable to make a stable prediction for that instance (Demir, 2015). In Meta-combining methods, on the other hand, a meta-level model learns from the classifiers produced by the base learners (inducers) and from the classifications of these classifiers on the training data. These include techniques such as Stacking and Arbiter Trees (Rokach, 2005), as sketched below.
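
The sketch below (once more assuming scikit-learn and synthetic data, which are not from the paper) contrasts the two combining styles: hard majority voting over diverse base classifiers versus Stacking, where a meta-learner is trained on the base classifiers' cross-validated predictions.

# Illustrative sketch only: scikit-learn and the synthetic data are assumed.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

base = [("tree", DecisionTreeClassifier(random_state=1)),
        ("nb", GaussianNB()),
        ("lr", LogisticRegression(max_iter=1000))]

# Simple combining: each base classifier casts one equally weighted vote
# and the majority class wins ("hard" voting).
voting = VotingClassifier(estimators=base, voting="hard")

# Meta-combining: a meta-level learner (here logistic regression) is trained
# on the base classifiers' predictions over held-out folds of the training set.
stacking = StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression(max_iter=1000),
                              cv=5)

print("voting:  ", cross_val_score(voting, X, y, cv=5).mean())
print("stacking:", cross_val_score(stacking, X, y, cv=5).mean())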
