Implementation of Dynamic Gesture Interpretation of Sign Language for Impact on Hearing and Speech Impairment

Judy Flavia B., Aarthi B., P. Charitharyan, Renuka Thanmai, Meghana Kesana
DOI: 10.4018/978-1-6684-6060-3.ch020

Abstract

Expression through language is a basic survival skill for making conversation in this world. To convey thoughts and express themselves vocally and nonverbally, humankind relies on a variety of languages. Hearing-impaired people, however, are unable to communicate verbally with others. Because sign language conveys one's message through signs made with the fingers, arms, head, and body, as well as mannerisms, it has become the major means of nonverbal communication for the deaf. Sign recognition is therefore a natural first step toward communicating in this nonverbal medium. Much research has been conducted on the languages that signers use and study, based on their respective regional sign languages such as American Sign Language (ASL) and Indian Sign Language (ISL). Identifying patterns in the nuances of these different nonverbal languages helps us understand what a person is trying to communicate through signs. Sign gestures are not universally standardized: some are single-handed, while others, in fact the majority, use both hands (or a mix of both).

Introduction

Sign language is a huge aid in effectively maintaining the social construct of communication. Learning to sign accelerates emotional development, improves attentiveness, and promotes language development (Chen and Zhang, 2016). Understanding the deaf and speech-impaired community is of massive importance (Zhang et al., 2016). Sharing and holding a meaningful conversation without knowing sign language is a hurdle to which we plan to provide a solution. In this chapter, we attempt to contribute a major tool for self-education by implementing efficient gesture recognition and pattern recognition software, built with artificial intelligence and Python in a camera-based application, to advocate the importance of accessibility in society (Zheng et al., 2017).
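As a rough illustration of the camera-based recognition pipeline described above, the sketch below captures frames from a webcam, extracts hand landmarks, and passes them to a classifier. It is a minimal sketch, assuming the OpenCV (cv2) and MediaPipe packages are installed; classify_sign is a hypothetical stand-in for the trained recognition model, not the chapter's actual implementation.

    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands

    def classify_sign(hand_landmarks):
        """Hypothetical placeholder: map the 21 detected hand landmarks
        to a sign label. In practice this would be a trained model."""
        return "HELLO"

    cap = cv2.VideoCapture(0)  # open the default camera
    with mp_hands.Hands(max_num_hands=2) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures frames in BGR
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    print(classify_sign(hand.landmark))
            cv2.imshow("gesture capture", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()

Landmark extraction is separated from classification here so that the same capture loop can serve single-handed and two-handed signs alike.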

Nowadays, we are blessed with various tools that help us solve complex problems with simple solutions. Including sign language in your language skill set will not only set you apart from most people, since it enhances your cognitive and spatial-transformation capabilities, but will also equip you to handle situations better, making you more socially aware and raising your EQ (M. M. et al., 2019).

Hearing loss affects almost 5% of the global population, or 466 million individuals. By 2050, it is anticipated that over 900 million individuals, or one out of every ten people, will suffer from hearing impairment (Shenoy et al., 2018). Given this apparent rise in disability statistics, we ought to build a bridge across the gaps that will otherwise continue to grow between educators and education-seeking individuals with hearing difficulties and hearing impairments (Bhagat et al., 2019). A model that takes any regional sign language of choice and supports real-time, two-way communication between a learner and a teacher, eliminating syntactic and semantic difficulties by generating sentence structure automatically and instantaneously, would be a huge step forward in the field of languages and AI-equipped knowledge representation (Nagendraswamy et al., 2016). A World Federation of the Deaf study conducted in December 2021 shows that seventy-one countries have officially recognised sign language (Gangadia et al., 2020). The WFD calls on other countries to recognise their sign languages to ensure the inclusion of the deaf and hearing-impaired community (Zhao et al., 2021).

Suppose verbal languages and sign languages could be translated bidirectionally (Safeel et al., 2020). In that case, comprehension barriers in perception and interpretation would be removed, which brings us to the idea of integrating auto-generated captions, specially fabricated from a sign language database, into a video communication platform. With the considerable rise in the use of telecommunication services during the COVID-19 pandemic, an update that generates complete sentences, rather than just word-for-word translation of signs, lets communicators converse with ease while normal captions are simultaneously generated for any party that enables that option (Hein et al., 2021). A competent gesture-interpreting application with an efficient compression codec acts as a mediator between the hearing-impaired population and the general public, narrowing the communication gap (Jain et al., 2000).
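To make the complete-sentence idea concrete, here is a toy, hypothetical sketch in Python: recognized sign glosses pass through a small rule step that restores dropped function words before being emitted as a caption. The single lexical rule below is an illustrative assumption, not the chapter's actual generation model.

    # Toy rule-based assembly of recognized sign glosses into a caption.
    # The one rule here (insert "to" after GO) is purely illustrative.
    def glosses_to_caption(glosses):
        words = []
        for i, g in enumerate(glosses):
            if g == "GO" and i + 1 < len(glosses):
                words.append("go to")  # restore the dropped preposition
            elif g == "I":
                words.append("I")
            else:
                words.append(g.lower())
        text = " ".join(words)
        return text[0].upper() + text[1:] + "."

    # Recognized sign sequence -> full-sentence caption
    print(glosses_to_caption(["I", "GO", "SCHOOL"]))  # prints: I go to school.

In a deployed system this rule step would be replaced by a learned sequence-to-sequence model, but the interface, gloss sequence in, sentence out, stays the same.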
