The Role of Digital Twin Technology in Engagement Detection of Learners in Online Learning Platforms

T. Y. J. Naga Malleswari, S. Ushasukhanya
DOI: 10.4018/979-8-3693-1818-8.ch018

Abstract

During and after the pandemic, online learning has become a part of many educational activities. Online educators must detect learner engagement precisely in order to provide appropriate pedagogical support. "Student engagement" refers to how much students participate intellectually and emotionally in their classwork, and it must therefore be evaluated. Defining a straightforward procedure for assessing engagement and understanding patterns in its measurement can significantly improve these assessments. Digital twin technology has become a centre of attention in many sectors, including manufacturing and academia. This chapter presents a comprehensive analysis of previous approaches to quantifying the degree of user involvement and of the role of digital twin technology in online learner engagement. More concrete methods, such as multimodal approaches, are combined with simpler methods, such as facial expression identification on real-time datasets. The chapter also presents how digital twin models are used to improve the efficiency of models across various sectors of artificial intelligence applications.
Chapter Preview

II. Groundwork

Dataset Collection

Having a sizable dataset is necessary for face and emotion recognition. The dataset must be large enough to train a model that can identify every visually expressed emotion. Either a newly collected or a pre-existing corpus can serve as the basis for the dataset. Figure 1 is an illustration of an emotion dataset.
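As a minimal sketch of this step, the code below assembles an emotion dataset from a folder of labelled face images, with one sub-folder per emotion (an FER-2013-style layout). The directory name "emotion_dataset" and the 48x48 image size are illustrative assumptions, not part of the chapter.

```python
import os
import cv2
import numpy as np

def load_emotion_dataset(root="emotion_dataset", size=(48, 48)):
    """Load grayscale face images and their emotion labels from sub-folders."""
    images, labels = [], []
    for label in sorted(os.listdir(root)):          # each sub-folder = one emotion
        folder = os.path.join(root, label)
        if not os.path.isdir(folder):
            continue
        for name in os.listdir(folder):
            img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
            if img is None:                          # skip unreadable files
                continue
            images.append(cv2.resize(img, size))
            labels.append(label)
    return np.array(images), np.array(labels)
```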

Dataset Pre-Processing

Identifying the emotion conveyed by a face in an image using only the central facial features, such as the eyes, nose, and mouth, is a significant step in emotion classification, because these primary features carry all the information required for the task. A variety of methods and algorithms are therefore employed to locate faces within the image. Figure 1 shows faces displaying multiple human emotions.

Figure 1. Dataset with multiple emotions sample
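A minimal pre-processing sketch is shown below: it detects the face with OpenCV's bundled Haar cascade, crops the largest detection, and normalises it so that only the central facial region (eyes, nose, mouth) is passed on. The cascade choice, the 48x48 output size, and the histogram equalisation step are assumptions for illustration, not the chapter's prescribed pipeline.

```python
import cv2

# OpenCV ships this frontal-face cascade with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_face(image, size=(48, 48)):
    """Return a cropped, resized, lighting-normalised face, or None if no face is found."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                   # no face in this frame
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    crop = cv2.resize(gray[y:y + h, x:x + w], size)
    return cv2.equalizeHist(crop)                     # reduce lighting variation
```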

Feature Extraction

Extraction of feature points is required for face detection. There are numerous techniques for extracting feature points, including Linear Discriminant Analysis (LDA), Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), moments, and Gabor wavelets. Gabor wavelets yield feature vectors that are robust to photometric variation, while SURF concentrates on a small number of distinctive interest points.
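As one hedged example of the Gabor-wavelet approach mentioned above, the sketch below convolves a pre-processed face crop with a small bank of Gabor kernels at several orientations and keeps simple summary statistics as the feature vector. The kernel parameters (size, sigma, wavelength, four orientations) are illustrative choices, not values from the chapter.

```python
import cv2
import numpy as np

def gabor_features(face, ksize=9, sigma=2.0, lambd=4.0, gamma=0.5):
    """Compute mean/std responses of a 4-orientation Gabor filter bank."""
    features = []
    for theta in np.arange(0, np.pi, np.pi / 4):      # 0, 45, 90, 135 degrees
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    lambd, gamma, psi=0)
        response = cv2.filter2D(face, cv2.CV_32F, kernel)
        features.extend([response.mean(), response.std()])
    return np.array(features)
```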

Classification

When categorising emotions, both "natural" and "random" classifiers can be considered. The most widely used of these are K-Nearest Neighbours (KNN), Support Vector Machines (SVM), and Random Forest. Based on visual features, these techniques classify emotions including fear, anger, surprise, disgust, happiness, sadness, and neutrality.
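A minimal sketch of this classification step, assuming feature vectors X and emotion labels y produced by the earlier extraction stage, trains the three classifiers named above with scikit-learn and compares their held-out accuracy. The split ratio and hyperparameters are illustrative defaults.

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def compare_classifiers(X, y):
    """Train KNN, SVM, and Random Forest on emotion features and print test accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)
    models = {
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf", C=1.0),
        "Random Forest": RandomForestClassifier(n_estimators=200),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name, accuracy_score(y_test, model.predict(X_test)))
```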
