Machine Learning Approach for Robot Navigation Using Motor Imagery Signals

Pratyay Das, Amit Kumar Shankar, Ahona Ghosh, Sriparna Saha
DOI: 10.4018/978-1-6684-9999-3.ch007

Abstract

Electroencephalography (EEG) signals have been used for different healthcare applications such as motor and cognitive rehabilitation. In this study, motor imagery data from multiple subjects, drawn from a publicly available dataset, is classified into rest vs. movement and into different movement types. The authors first apply a lowpass filter to the EEG signals to reduce noise and a fast Fourier transform analysis to extract features from the filtered data. Relevant features are then selected using principal component analysis. Rest vs. movement is classified with the k-nearest neighbor algorithm at an accuracy of 95.02%, and the various movement types are classified with the random forest algorithm at an accuracy of 96.45%. The success in differentiating between movement and rest suggests that EEG signals can reveal a user's intention to move. Accurately classifying different movement types opens the possibility of navigating robots accordingly in real-time scenarios, assisting people with motor disabilities through robotic arms and prosthetic limbs.
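
As a rough illustration only, the preprocessing and feature-extraction pipeline summarized above (lowpass filtering, fast Fourier transform, and principal component analysis) might look like the following Python sketch using SciPy and scikit-learn. The sampling rate, cut-off frequency, filter order, retained-variance threshold, and placeholder trial array are assumptions made for the example, not values reported by the authors.

```python
# Illustrative sketch of the described preprocessing and feature extraction.
# Sampling rate, cut-off, filter order, and PCA threshold are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA

FS = 160          # assumed sampling rate (Hz)
CUTOFF = 40.0     # assumed low-pass cut-off (Hz)

def lowpass(eeg, fs=FS, cutoff=CUTOFF, order=4):
    """Zero-phase Butterworth low-pass filter applied along the time axis."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, eeg, axis=-1)

def fft_features(eeg):
    """Magnitude spectrum of each channel, flattened to one vector per trial."""
    spectrum = np.abs(np.fft.rfft(eeg, axis=-1))
    return spectrum.reshape(eeg.shape[0], -1)   # (n_trials, n_channels * n_bins)

# Placeholder data standing in for real trials: (n_trials, n_channels, n_samples)
eeg_trials = np.random.randn(20, 64, FS * 4)

filtered = lowpass(eeg_trials)
features = fft_features(filtered)

pca = PCA(n_components=0.95)      # keep 95% of the variance (assumed threshold)
reduced = pca.fit_transform(features)
```

A zero-phase filter (filtfilt) is used here so the filtering step does not shift the timing of the EEG samples; the chapter itself only specifies that a lowpass filter is applied.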

Introduction

Motor imagery (MI) is the mental rehearsal of an action without physically performing it. MI signals recorded via electroencephalography (EEG) are the most convenient basis for designing brain-computer interfaces (BCIs), as they provide a high degree of freedom. MI-based BCIs help motor-disabled people interact with real-time BCI applications by performing a sequence of MI tasks. BCI also enables robotic movements to be controlled in an entirely effortless manner, using only the signals produced by the user's brain (Gul et al., 2019). This is the aim of research on robot navigation from MI signals: navigating robots to carry out complex tasks depends on the brain's capacity to produce electrical signals in response to imagined movements (Ghosh & Saha, 2020).

Robot navigation employing MI signals has many potential uses (Bag et al., 2022). For instance, this technology would allow people with physical limitations to direct a robot to carry out duties such as picking up and arranging goods or reaching specified areas. This strategy may also lessen the technical effort needed to control a robot, making it more straightforward for unskilled users to work with MI signals (Guillot & Debarnot, 2019). Experiments are being conducted in many fields, such as mining operations, surveillance, and exploration, all involving strenuous activities that require extensive data analysis and judgment. In such settings, Artificial Intelligence (AI) can significantly outperform human operators (Saha & Ghosh, 2019), executing tasks more accurately and efficiently and leading to more successful and economical missions (Saha et al., 2018). Using AI under challenging circumstances can also decrease the risk to human operators, making these operations safer and more practical. Supervised machine learning (ML) methods such as Support Vector Machines (SVM), k-Nearest Neighbors (kNN), and Random Forests (RF), as well as deep learning (DL) approaches such as Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks, have been observed to work efficiently in the recent literature of this domain. The world is facing important trends associated with an increase in disability across populations, especially a rise in noncommunicable diseases (NCDs), including mental health conditions, and the rapid ageing of the world population. Estimates from the WHO World Report on Disability show that 15% of the global population experience significant disability (World Health Organization, 2019). The motivation behind developing the current system is to assist persons with disabilities in communicating, operating computers, and using assistive technology such as wheelchairs or robotic arms.

The first stage of the proposed work is to remove artifacts and undesirable signals from the EEG recordings using filtering, since without appropriate cleaning the outcomes could be untrustworthy. The next step, feature engineering, extracts features pertinent to the specific task; these features are then used to build a classifier that distinguishes between the brain signals associated with MI. This research builds on earlier works by developing two models rather than just one to address existing limitations (de Klerk et al., 2019). The first model's goal is to predict whether the user is at rest or intends to move, whereas the second model's goal is to predict the specific movement the user intends to make (Ofner et al., 2019). By developing these models independently, we intend to increase prediction accuracy and make the system more usable in real-world circumstances. Creating two models also gives the system's overall architecture more design freedom (Schuster et al., 2011). The contributions of the current work are summarized as follows.
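
As a minimal sketch of this two-model design, assuming PCA-reduced features like those produced in the earlier sketch, the rest-vs-movement stage could use a k-nearest neighbor classifier and the movement-type stage a random forest. The placeholder feature matrix and label arrays, the 80/20 split, and the hyperparameters (k = 5, 100 trees) are illustrative assumptions, not the authors' reported configuration.

```python
# Hedged sketch of the two-model design: one classifier for rest vs. movement,
# a second for the specific movement type. Data, split, and hyperparameters
# are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Placeholder arrays standing in for PCA features and dataset labels:
# rest_labels is binary (0 = rest, 1 = movement); move_labels encodes
# the movement class for each trial.
reduced = np.random.randn(200, 30)
rest_labels = np.random.randint(0, 2, size=200)
move_labels = np.random.randint(0, 4, size=200)

# Model 1: rest vs. movement with k-nearest neighbors.
X_tr, X_te, y_tr, y_te = train_test_split(reduced, rest_labels, test_size=0.2)
knn = KNeighborsClassifier(n_neighbors=5)        # k is an assumption
knn.fit(X_tr, y_tr)
print("rest vs. movement accuracy:", knn.score(X_te, y_te))

# Model 2: movement-type classification with a random forest,
# trained only on trials labelled as movement.
is_move = rest_labels == 1
X_tr, X_te, y_tr, y_te = train_test_split(
    reduced[is_move], move_labels[is_move], test_size=0.2)
rf = RandomForestClassifier(n_estimators=100)
rf.fit(X_tr, y_tr)
print("movement-type accuracy:", rf.score(X_te, y_te))
```

One benefit of separating the two stages, in line with the design-freedom argument above, is that the binary detector can act as a gate: the movement-type classifier only needs to run once an intention to move has been detected.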
