A Vision-Based Framework for Spotting and Segmentation of Gesture-Based Assamese Characters Written in the Air

Ananya Choudhury, Kandarpa Kumar Sarma
Copyright: © 2021 |Pages: 22
DOI: 10.4018/JITR.2021010105

Abstract

Automatic gesture spotting and segmentation — determining the meaningful gesture patterns within continuous gesture-based character sequences — is a challenging task. This paper proposes a vision-based automatic method that simultaneously handles hand gesture spotting and the segmentation of gestural characters embedded in a continuous character stream, by employing a hybrid geometrical and statistical feature set. This framework forms an important constituent of gesture-based character recognition (GBCR) systems, which have gained tremendous demand lately as assistive aids for overcoming the restraints faced by people with physical impairments. The performance of the proposed system is validated on the vowels and numerals of the Assamese vocabulary. A further strength of the proposed system is an effective hand segmentation module, which enables it to handle complex background settings.

Introduction

Gestural languages are visual-spatial languages that use different means of expression (hand gestures, facial expressions, etc.) for communication among differently-abled persons, most often people with hearing and speech impairments (Martins et al., 2015). Additionally, in human–computer interaction (HCI), gestures can provide an input modality beyond conventional devices such as the keyboard, mouse, touch pad and joystick. Among the various forms of gestures, hand gestures are the most natural and convenient form of communication and interaction. Motion hand gestures, which are discernible hand movements in free space, can also empower users to interact with computing devices in a more natural and intuitive way (Chen et al., 2013). The ability of a processing system to understand the meaning of these motion hand gestures is referred to as hand gesture recognition (HGR).

Gesture Based Character Recognition (GBCR) is a special class of HGR systems that has attained enormous importance in recent times as an assistive aid for people with special needs, enabling them to lead a life with greater ease. In particular, such GBCR systems serve as a powerful mediator for communication among people with hearing and speech impairments. They also serve as a rehabilitative aid for people with motor impairments that prevent them from writing with pen on paper or from using common human–machine interactive (HMI) interfaces (Leo et al., 2017). Such a system can also support the blind or people with reduced sight who face hindrances while typing on standard HMI devices. Further, GBCR systems can function as smart controlling aids for entertainment, gaming and related applications. Despite their popularity, certain limitations constrain their unrestricted use. Gesture spotting and character segmentation is accepted to be a crucial factor limiting the performance of automatic GBCR systems. It deals with determining the start and end points of the different character segments embedded in an overall character sequence, and hence extracting the valid character segments. This task is challenging due to the presence of unpredictable and ambiguous connecting links, called ligatures or movement epentheses (ME), which occur in between valid character segments (Yang et al., 2009). These ligatures also vary with different users and their speed of articulation. So, the main focus of this work is to develop an efficient framework for gesture spotting and character segmentation that is independent of users and their speed of articulation; this framework shall form a major part of the overall GBCR system. Another important aspect is that the design of GBCR systems in Indian languages has been reported in several studies. However, for Assamese, a major language of North East India, no reported attempt has achieved a reliably interpreting GBCR system. It is also observed that the disability prevalence in Assam is around 1.54% of India's total population ("Disabled Persons in India," 2016). This emphasizes the need to develop an Assamese GBCR system so that the differently-abled, especially the hearing impaired, can learn their native language and communicate in a more intuitive way. In the context of this work, we have therefore considered gesture spotting and character segmentation for the vowels and numerals of the Assamese vocabulary.

Thus, in the present work, a novel method for gesture spotting and character segmentation (GSCS) is formulated by employing a distinctive feature set that determines the terminal points of character segments, extracts these segments from the continuous sequence, and models their transient behavior to determine the legitimate character fragments. Here we have mainly focused on the variability of gestural behavior in the context of a regional language, which differs from that of English.
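To make the spotting step concrete, the sketch below segments a continuous fingertip trajectory by treating low-velocity runs as pauses standing in for the ligature / movement-epenthesis frames between characters. This is only an illustrative baseline, not the paper's hybrid geometrical–statistical feature set; the function name, velocity threshold and other parameters are assumptions introduced here for the example.

```python
import numpy as np

def spot_segments(trajectory, fps=30, vel_thresh=0.05, min_len=5):
    """Split a continuous fingertip trajectory into candidate character
    segments: frames whose speed exceeds vel_thresh are treated as writing,
    low-speed runs as inter-character pauses (ligature stand-ins).

    trajectory: (N, 2) array of normalised fingertip coordinates.
    Returns a list of (start, end) frame-index pairs for candidate segments.
    """
    # Frame-to-frame fingertip speed (units of coordinate space per second)
    speed = np.linalg.norm(np.diff(trajectory, axis=0), axis=1) * fps
    moving = speed > vel_thresh

    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                       # motion begins: open a segment
        elif not m and start is not None:
            if i - start >= min_len:        # discard spurious short bursts
                segments.append((start, i))
            start = None
    if start is not None and len(moving) - start >= min_len:
        segments.append((start, len(moving)))  # segment runs to the end
    return segments
```

A velocity threshold like this is user- and speed-dependent, which is exactly the limitation the paper's user-independent GSCS framework aims to overcome.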

The paper is organized as follows. The first section discusses some significant aspects of GBCR systems along with related research in this field. In the next section, we present the proposed GSCS system with an elaborate description of the individual processes involved. The following section is devoted to a detailed discussion of the experimental results obtained, and finally the last section concludes the work.
