1. Introduction
Object detection is a fundamental area of study in computer vision, deep learning, and artificial intelligence. It is a necessary prerequisite for more difficult computer vision tasks such as target tracking, event detection, behavior analysis, and scene semantic understanding. To create an effective object recognition system (in this case, face recognition), an attempt was made to integrate the human cognition principle. Numerous researchers have utilized decision tree-based classification approaches in a variety of fields, such as facial expression recognition (Salmam et al., 2016), a group-based study to identify localized melanoma patients (Tsai et al., 2007), protein classification (Pepik et al., 2009), occlusion detection (Karthigayani & Sridhar, 2014), rule extraction for security analysis (Ren et al., 2006), gender classification (Khan et al., 2013), and hybrid classification based on decision tree and naïve Bayes methods (Muqasqas et al., 2014). To determine the most accurate class description for a given object, the decision tree concept takes the hierarchy of classes into account, just as humans do. This hierarchy starts at the top level of super-classes and moves down to the final level. At the base of the hierarchy, however, the number of classes is large, which makes it difficult for any classifier, including a human one, to handle; as a result, the classes at this level must be grouped into a few clusters. Each of these clusters can be regarded as a newly defined super-class at the previous level of the hierarchy, and the classifier at that level classifies data into these super-classes. Similar methods have been applied to optical character recognition (Wilson et al., 1996) and face image recognition (Ebrahimpour et al., 2005) to increase classification accuracy. Fig. 1 and the explanation that follows illustrate this idea.
The majority of decision tree building techniques are based on decision clustering. Typically, decision trees are designed to handle a large number of classes; this meets the needs of most classifiers, many of which are unable to handle tasks involving a large number of classes on their own. Figure 1 depicts a hypothetical example of ten real classes, say c1, c2, c3, ..., c10. At level 1, the original classes are merged to form super-classes χ1, χ2, and χ3, and classifier 1 is employed to reach decision d1, which selects super-class χ3. Next, with this decision as input, classifier 4 is invoked to reach the final decision d2 for class c3. The flow of decisions is as follows:
Figure 1. The process of making a decision using a "decision clustering tree" (common approach): input-1 is classified by the root-level classifier-1 into super-class χ3, where classifier-4 is employed, which determines its actual class, class-3.
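The two-level decision flow described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the grouping of c1–c10 into super-classes and the rule-based classifiers (`classifier_root`, `classifier_chi3`, and the placeholder sub-classifiers) are hypothetical stand-ins for whatever trained classifiers would sit at each node.

```python
# Hypothetical grouping of the ten leaf classes into three super-classes,
# chosen so that c3 falls under chi3 to match the example in Fig. 1.
SUPER_CLASSES = {
    "chi1": ["c1", "c2", "c4"],
    "chi2": ["c5", "c6", "c8"],
    "chi3": ["c3", "c7", "c9", "c10"],
}

def classifier_root(x):
    """Level-1 classifier: route the input to a super-class (dummy rule)."""
    if x < 0.3:
        return "chi1"
    if x < 0.6:
        return "chi2"
    return "chi3"

def classifier_chi3(x):
    """Classifier invoked inside super-class chi3 (dummy rule)."""
    return "c3" if x >= 0.9 else "c7"

# One sub-classifier per super-class; chi1/chi2 are trivial placeholders.
SUB_CLASSIFIERS = {
    "chi1": lambda x: "c1",
    "chi2": lambda x: "c5",
    "chi3": classifier_chi3,
}

def predict(x):
    """Two-stage decision: d1 picks the super-class, d2 the actual class."""
    d1 = classifier_root(x)       # decision d1 at level 1
    d2 = SUB_CLASSIFIERS[d1](x)   # decision d2 at the leaf level
    return d1, d2
```

With this sketch, an input routed to χ3 by the root classifier (e.g. `predict(0.95)`) is then handed to the χ3 sub-classifier, which produces the final class, mirroring the d1-then-d2 flow of Figure 1.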