Deep Neural Network for Electromyography Signal Classification via Wearable Sensors

Ying Chang, Lan Wang, Lingjie Lin, Ming Liu
Copyright: © 2022 | Pages: 11
DOI: 10.4018/IJDST.307988

Abstract

Human-computer interaction has been widely used in many fields, such as intelligent prosthetic control, sports medicine, rehabilitation medicine, and clinical medicine, and it has gradually become a research focus. In the field of intelligent prostheses, the sEMG signal has become the most widely used control signal source because it is easy to obtain. An offline sEMG-controlled intelligent prosthesis needs to recognize gestures in order to execute the associated actions. To solve this issue, this paper adopts a CNN plus BiLSTM network to automatically extract sEMG features and recognize gestures, overcoming the drawbacks of manual feature extraction methods. The experimental results show that the proposed gesture recognition framework can extract overall gesture features, which improves the recognition rate.
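The preview does not give the paper's exact network configuration, but a minimal sketch of a CNN plus BiLSTM classifier of this kind, written in PyTorch, might look as follows. The layer sizes, the 8 electrode channels, the 200-sample window length, and the 10 gesture classes are illustrative assumptions, not the authors' settings:

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Hypothetical CNN + BiLSTM sEMG gesture classifier.

    Input shape: (batch, channels, time) -- windows of multi-channel
    surface EMG. All hyperparameters are illustrative assumptions.
    """
    def __init__(self, n_channels=8, n_classes=10, hidden=64):
        super().__init__()
        # 1-D convolutions extract local features along the time axis,
        # replacing hand-crafted feature extraction.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The BiLSTM models temporal dependencies in both directions.
        self.bilstm = nn.LSTM(input_size=64, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        feats = self.cnn(x)               # (batch, 64, time/4)
        feats = feats.transpose(1, 2)     # (batch, time/4, 64)
        out, _ = self.bilstm(feats)       # (batch, time/4, 2*hidden)
        return self.fc(out[:, -1, :])     # classify from last time step

# Example: a batch of 16 windows, 8 electrodes, 200 samples each.
model = CNNBiLSTM()
logits = model(torch.randn(16, 8, 200))
print(logits.shape)                       # torch.Size([16, 10])
```

In this arrangement the convolutional front end plays the role of the manual feature extractor, while the bidirectional LSTM reads the resulting feature sequence forward and backward, which is consistent with the abstract's claim that the framework captures overall gesture features rather than isolated hand-crafted ones.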

Introduction

With the rapid development of intelligent computing technology, human-computer interaction (HCI) (Sun et al. 2020) plays an increasingly important role in daily life. HCI refers to the process of communication and conversation between people and computers (Klumpp et al. 2018). It is an observable, bidirectional information exchange in which both the person and the computer can act as sender or receiver. The interface through which this communication takes place is called the human-computer interface. A smart human-computer interface requires that the computer can perceive natural human abilities such as touch, language, pen gestures, posture, and emotion. Traditional human-computer interaction devices, such as the keyboard and mouse, have many limitations. Novel human-computer interaction technologies continue to emerge, among which human motion and gesture recognition (Chakraborty et al. 2018; Tsai et al. 2020) has become a challenging topic in the field of HCI.

Motions and gestures (Wu & Jafari 2018) are series of limb movements that carry rich information; they are a way to express people's behavioral intentions or to transmit information between people and the environment. Motion and gesture recognition is the process by which a computer automatically detects, analyzes, and understands various motions, gestures, or human behaviors in order to judge human intention and provide corresponding services. It involves multiple disciplines (Beddiar et al. 2020; Salah et al. 2010), such as pattern recognition, artificial intelligence, computer vision, and sensor technology, and it has a wide range of application prospects, including multimodal human-computer interaction, medical monitoring, and somatosensory games. Commonly recognized human motions include sign language and gesture movements, head and face movements, and body movements.

The hand is the most flexible part of the human limbs, and a large number of complex human operations are completed by the hand. Deaf people communicate through sign language, whose expressive ability is similar to that of natural language. Sign language and gesture recognition (Almasre & Al-Nuaim 2020; Halim & Abbas 2015) has long been a popular and difficult topic in HCI research.

Among head and face movements, nodding and shaking the head are the most direct and simple ways for people to express their views. Because they can be performed inconspicuously, blinking and eye movements have also attracted the interest of researchers in the field of human-computer interaction. Facial expressions such as surprise, happiness, depression, fear, anger, and satisfaction are important cues for determining human emotion.

Body movement involves the comprehensive movement state of the whole body and includes both static and dynamic postures. Static postures include standing, sitting, and lying. Dynamic postures refer to people's movement states, such as walking, running, dancing, and gait, which can be used in medical rehabilitation and athlete training.

Motion recognition for body movement involves both static and dynamic information. Static information refers to the position and shape of the limbs, as in standing, sitting, and lying. Dynamic information refers to the movement of the limbs, including the position and orientation of the limb movement, the timing of action changes, the shape of the action, and emotional expression. To effectively capture the information in human motion, the system needs to accurately determine the beginning and end of each action; sometimes the specific meaning of an action is also affected by the actions that precede or follow it.

To effectively identify human actions, the perception mode should be able to capture most of the information in the motion. Computer vision and motion measurement are the two main ways to perceive human motion; they differ in how they capture motion information, in user experience, and in applicability.
