Image Recognition and Extraction on Computerized Vision for Sign Language Decoding

M. Gandhi, C. Satheesh, Edwin Shalom Soji, M. Saranya, S. Suman Rajest, Sudheer Kumar Kothuru
Copyright: © 2024 | Pages: 17
DOI: 10.4018/979-8-3693-1355-8.ch010

Abstract

Image recognition plays a significant role in addressing contemporary global problems. Numerous image detection, analysis, and classification strategies are readily available, but the distinctions between these approaches remain somewhat obscure. It is therefore essential to clarify the differences between these techniques and subject them to rigorous analysis. This study utilizes a dataset comprising standard American Sign Language (ASL) and Indian Sign Language (ISL) hand gestures captured under various environmental conditions. The primary objective is to accurately recognize and classify these hand gestures based on their meanings, aiming for the highest achievable accuracy. A novel method for achieving this goal is proposed and compared with widely recognized models. Various pre-processing techniques are employed, including principal component analysis (PCA) and the histogram of oriented gradients (HOG). The principal model incorporates Canny edge detection, Oriented FAST and Rotated BRIEF (ORB), and the bag-of-visual-words technique. The dataset includes images of the 26 alphabetical signs captured from different angles. The collected data is classified using Support Vector Machines (SVM) to yield valid results. The results indicate that the proposed model is significantly more efficient than existing models.

Introduction

Sign language is a remarkable form of communication, serving as the primary natural language for the deaf and mute community (Angeline et al., 2023). Its unique attributes lie in its expressive and distinctive means of facilitating interaction in everyday life (Aravind et al., 2023). To truly appreciate the significance of sign language, it is essential to examine the profound role gestures play in human communication (Bose et al., 2023). One fundamental component of sign language is recognizing hand postures (Gomathy & Venkatasbramanian, 2023). Hand and arm gestures are the building blocks of sign language, where specific configurations of fingers and hands convey meaning (Rajest et al., 2023a; Regin et al., 2023c). The precise arrangement of fingers, including flexion and extension, can completely alter the meaning of a sign. For instance, the difference between the ASL signs for “mother” and “father” primarily relies on the positioning of the thumb relative to the chin (Rajest et al., 2023b). Sign language is not confined to hand and arm movements; it encompasses the entire body. Body gestures include full-body movements, such as tracking the motion of two people outdoors or analyzing the graceful steps of a dancer (Regin et al., 2023a). These gestures allow sign language users to convey spatial and directional information effectively (Regin et al., 2023b). For instance, describing a car accident might involve using body gestures to indicate the direction of impact and the vehicles’ positions (Joshi et al., 2023).

Moreover, the richness of sign language is further augmented by head and facial gestures. These subtleties include nodding or shaking the head, the direction of gaze, eyebrow movements, and non-vocal mouth expressions (Nallathambi et al., 2022). These cues provide additional context and emotional depth to the communication. For instance, a slight raise of the eyebrows can change a statement into a question in sign language, just as it does in spoken language (Nithyanantham, 2023). Sign language is not a random collection of gestures; it is a structured language with its own morphology, grammar, phonology, and syntax. Each sign carries a specific meaning, and the arrangement of signs in a sentence follows grammatical rules. This structure enables sign language users to convey complex thoughts and ideas effectively (Ogunmola et al., 2022). For instance, ASL has its own grammar rules, including subject-verb-object order, and uses facial expressions to indicate tense and mood (Saleh et al., 2022).

This structured nature of sign language allows for the precise and nuanced expression of ideas and emotions (Sharma et al., 2021a). It provides a means for deaf and mute individuals to engage in various conversations, from everyday interactions to deep philosophical discussions. Sign language is not merely a gestural communication system; it is a fully developed and rich language that rivals spoken languages in complexity and depth. Sign language has evolved within deaf and mute communities, adapting to its users’ changing needs and preferences (Sengupta et al., 2023). This evolution has made sign language a dynamic and living language capable of expressing the full spectrum of human experiences (Sharma et al., 2021b). New signs are created as new concepts emerge, and the language continues to adapt to contemporary developments (Obaid et al., 2023).
