Use of “Odds” in Bayesian Classifiers

Bhushan Kapoor, Sinjini Mitra
Copyright: © 2023 | Pages: 14
DOI: 10.4018/978-1-7998-9220-5.ch162

Abstract

Odds, log odds, and odds ratio concepts can be effectively applied in several machine learning algorithms and model evaluations. The use of these concepts has the potential to make the algorithms simpler, easier to interpret, and computationally more efficient. However, their use among machine learning professionals has been concentrated mainly in the context of logistic regression. In this article, the authors discuss how odds, odds ratio, and log odds can be used in Bayes' theorem and multinomial naïve Bayes classifiers. The authors reformulate Bayes' theorem and multinomial naïve Bayes classifiers in terms of “odds” and illustrate their applications with examples dealing with “loan application” approval.
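For context, the standard odds form of Bayes' theorem, which is the kind of reformulation the abstract refers to, updates the prior odds of an event E by multiplying them by the likelihood ratio of the observed data D:

O(E | D) = [P(D | E) / P(D | -E)] × O(E)

where O(E) is the prior odds, O(E | D) is the posterior odds, and the bracketed factor is the likelihood ratio (also called the Bayes factor).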

Background

Like probability, odds is a measure of the likelihood of an event, but the two are defined differently in statistics and have different properties (Lawrence, Francis, Nathaniel & Muzaffer, 2012; Martin, 2021; Ranganathan, Aggarwal & Pramesh, 2015). The probability of an event E, written here as P(E), is a real number that always lies between 0 and 1; it is estimated as the number of times the event occurs divided by the total number of random trials or examples. When there are only two outcomes, we can use the odds of an event instead of its probability. We denote the odds of an event E as O(E), defined as follows:

O(E) = P(E) / P(-E) = P(E) / (1 - P(E))  (1)

where -E is the complement event of E.

O(E) can assume any real value between 0 and infinity. O(E) = 1 means that the chances of event E happening or not happening are equal. When O(E) is greater than 1, the chance of occurrence of event E is higher than that of its non-occurrence; when O(E) is less than 1, the chance of occurrence of event E is lower than that of its non-occurrence.
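These definitions are easy to verify computationally. The following is a minimal Python sketch of equation (1) and of the odds form of Bayes' theorem shown above; the function names and the numbers are illustrative, not taken from the chapter.

def probability_to_odds(p):
    # Odds from probability, per equation (1): O(E) = P(E) / (1 - P(E)).
    return p / (1.0 - p)

def odds_to_probability(o):
    # Inverse mapping: P(E) = O(E) / (1 + O(E)).
    return o / (1.0 + o)

def posterior_odds(prior_odds, likelihood_ratio):
    # Odds form of Bayes' theorem: posterior odds = likelihood ratio * prior odds.
    return likelihood_ratio * prior_odds

# Illustrative numbers only: an event with probability 0.2 has prior odds
# 0.2 / 0.8 = 0.25, i.e., odds below 1, so the event is less likely to
# occur than not.
prior = probability_to_odds(0.2)

# If the observed data are three times as likely under the event as under
# its complement (likelihood ratio = 3), the posterior odds are 0.75,
# corresponding to a posterior probability of about 0.43.
post = posterior_odds(prior, 3.0)
print(prior, post, odds_to_probability(post))

Note that the Bayesian update operates directly on odds; a conversion back to probability is only needed at the end, which is one source of the computational simplicity the abstract mentions.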

Key Terms in this Chapter

Supervised Learning: In supervised learning, models are provided (at their development stage) with data/examples containing both input (predictor variables) and output (category) labels.

Posterior Odds: The posterior odds of an event are the odds we estimate for the event after we collect data/examples and make use of the relevant information contained in the data.

Odds: Odds is a measure of the likelihood of an event. It can assume any value between 0 and infinity.

Prior Odds: The prior odds of an event are the odds we estimate for the event before data/examples are collected.
