Ensemble Methods and Their Applications

Copyright © 2023 | Pages: 16
DOI: 10.4018/978-1-7998-9220-5.ch109
Abstract

One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. This article mainly focuses on distinguishing between non-generative and generative ensemble methods. Non-generative ensemble methods embrace a large set of different approaches to combining learning machines; their subdivisions are ensemble fusion and ensemble selection methods. Generative ensemble methods generate sets of base learners by acting on the base learning algorithm or on the structure of the data set, and try to actively improve the diversity and accuracy of the base learners; their subdivisions are resampling methods, feature selection methods, and output coding methods. The main aim of this article is to explain the detailed characteristics of each ensemble method and to provide an overview of the main application areas of ensemble methods.

Introduction

Considering the variety of ensemble techniques and the large number of combination schemes proposed in the literature, it is not surprising that a very large number of ensemble methods and algorithms are now available to the research community. To help researchers and practitioners get their bearings and develop new methods and techniques, several taxonomies of ensemble methods have been proposed. Indeed, ensemble methods are characterized by two basic features: 1) the algorithms by which different base learners are combined; and 2) the techniques by which different and diverse base learners are generated. This chapter distinguishes between non-generative ensemble methods, which mainly rely on the former feature, and generative ensemble methods, which mainly focus on the latter. It is worth noting that the “combination” and the “generation” of base learners are both present, to some degree, in all ensemble methods: the distinction between these two large classes depends on whether the combination or the generation component of the ensemble algorithm predominates. More precisely, non-generative ensemble methods confine themselves to combining a set of possibly well-designed base classifiers: they do not actively generate new base learners but try to combine a set of existing base classifiers in a suitable way. By contrast, generative ensemble methods generate sets of base learners by acting on the base learning algorithm or on the structure of the data set, trying to actively improve the diversity and accuracy of the base learners. In this case the emphasis is placed on the way diverse base learners are constructed, while the combination technique is not the main concern of the ensemble algorithm. The main aim of this chapter is to explain the detailed characteristics of each ensemble method and to provide an overview of the main application areas of ensemble methods.
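The generative strategy described above can be sketched in code. The following is a minimal illustration of one generative technique, bagging, which creates diverse base learners by bootstrap resampling of the training set and then fuses their outputs by majority vote; the decision-stump base learner and the toy one-dimensional data set are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch of a generative ensemble: bagging by bootstrap resampling.
# Base learner and toy data are illustrative assumptions.
import random
from collections import Counter

def train_stump(data):
    """Fit a 1-D threshold classifier: predict 1 when x >= threshold."""
    best_threshold, best_acc = None, -1.0
    for threshold, _ in data:  # candidate thresholds are the observed x values
        acc = sum((x >= threshold) == (y == 1) for x, y in data) / len(data)
        if acc > best_acc:
            best_threshold, best_acc = threshold, acc
    return best_threshold

def bagging_ensemble(data, n_learners=25, seed=0):
    """Generate diverse base learners by resampling the training set."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_learners):
        sample = [rng.choice(data) for _ in data]  # bootstrap replicate
        stumps.append(train_stump(sample))
    return stumps

def predict(stumps, x):
    """Combiner: majority vote over the base learners' outputs."""
    votes = Counter(1 if x >= t else 0 for t in stumps)
    return votes.most_common(1)[0][0]

# Toy 1-D data set: label 1 roughly when x > 3, with one noisy point (9, 0).
data = [(1, 0), (2, 0), (3, 0), (4, 1), (6, 1), (7, 1), (8, 1), (9, 0)]
stumps = bagging_ensemble(data)
print(predict(stumps, 2), predict(stumps, 8))
```

Because each bootstrap replicate omits or repeats some training points, the stumps learn slightly different thresholds; the vote over these diverse learners is more robust to the noisy point than any single stump.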
The rest of the chapter is organized as follows: the background section describes related work; the main-focus section briefly describes the ensemble methods reported in the literature, distinguishing between generative and non-generative methods, and presents the main application areas of ensemble methods; finally, the chapter concludes with future research directions.

Key Terms in this Chapter

Generalization Error: Generalization error measures how a learning model performs on out-of-sample data. It is measured as the difference between the predictions of the model and the actual results.
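A toy illustration of this definition: a threshold rule fit to a training set can score perfectly in-sample yet err on held-out data, and the gap between the two error rates is the generalization error. The data and the rule below are illustrative assumptions.

```python
# Toy illustration of generalization error: the gap between training-set
# error and error on held-out (out-of-sample) data.  Data and model are
# illustrative assumptions.
train = [(1, 0), (2, 0), (3, 1), (4, 1)]
test = [(1.5, 0), (2.5, 1), (3.5, 1), (0.5, 1)]

model = lambda x: 1 if x >= 3 else 0  # threshold rule fit to the training set

def error(data):
    """Fraction of points the model misclassifies."""
    return sum(model(x) != y for x, y in data) / len(data)

print(error(train))  # 0.0: the rule fits the training set perfectly
print(error(test))   # 0.5: half the out-of-sample points are misclassified
```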

Base Inducer(s) or Base Classifier(s): An inducer is a learning algorithm that is used to learn from a training set. A base inducer obtains a training set and constructs a classifier that generalizes the relationship between the input features and the target outcome.

Generative Ensemble Methods: Generative ensemble methods generate sets of base learners acting on the base learning algorithm or on the structure of the data set to try to actively improve diversity and accuracy of the base learners.

Combiner: The task of the combiner is to produce the final decision by combining all classification results of the various base inducers.
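The combiner's role can be sketched as follows, in the non-generative setting where the base inducers already exist and only their outputs are fused; the three stub classifiers are illustrative assumptions.

```python
# Sketch of a combiner fusing fixed, already-trained base classifiers
# by unweighted majority vote.  The stub rules are illustrative assumptions.
from collections import Counter

def majority_vote(classifiers, x):
    """Final decision: the most common label among the base outputs."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three stub base classifiers (non-generative setting: they are given,
# not generated by the ensemble algorithm).
base = [
    lambda x: 1 if x > 3 else 0,
    lambda x: 1 if x > 5 else 0,
    lambda x: 1 if x > 4 else 0,
]

print(majority_vote(base, 6))  # all three base classifiers vote 1
print(majority_vote(base, 4))  # votes are 1, 0, 0, so the combiner outputs 0
```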

Non-Generative Ensemble Methods: Non-generative ensemble methods confine themselves to combine a set of possibly well-designed base classifiers: they do not actively generate new base learners but try to combine in a suitable way a set of existing base classifiers.

Person Recognition: Person recognition is the problem of verifying the identity of a person using characteristics of that person, typically for security applications.
