Introduction
Human Activity Recognition (HAR) using a smartphone has been widely applied in areas such as smart living (Hsu et al, 2020; Barhoun et al, 2019), intelligent health care (Yang et al, 2020; Azam et al, 2017), and security systems (Win & Thein, 2020), owing to the smartphone's processing capacity, context-aware information, communication capabilities, and portability. This has made the smartphone a ubiquitous platform for recognizing human activities through the sensors built into the device (Wang et al, 2016). In general, users' activity information is collected using different kinds of sensors: environmental sensors, body-worn or wearable sensors, and smartphones (Ramanujam and Padmavathi, 2019). For instance, environmental sensors (Roggen et al, 2012; Sisodia et al, 2019) track a person's motion and location. The major disadvantage of this approach is that it is expensive and feasible only in a controlled living environment. On the other hand, body-worn or wearable sensors (Attal et al, 2015) are placed at different positions on the body, such as the chest, wrists, waist, and thighs. This approach captures a wide spectrum of information, such as acceleration, movement speed, and orientation, and therefore achieves better recognition performance. However, users find it uncomfortable to wear the device throughout the day, and they may forget to charge it. In addition, the placement of the sensors on the body markedly affects the performance of the wearable approach, and the device must be attached tightly to the body for good performance. Hence, this approach does not provide a complete solution for HAR.
Recently, the smartphone has emerged as an alternative sensing device that addresses the limitations of environmental and body-worn sensors (Subasi et al, 2020). Users' information is collected through embedded inertial sensors such as the accelerometer, gyroscope, and magnetometer. This sensory information can be used to recognize multiple physical activities and postural transitions through machine learning algorithms (Bao et al, 2004). Using a smartphone for HAR offers several advantages over other sensing devices:
1. Users are free to roam anywhere; there is no restriction to remain in a controlled living environment.
2. There is no compulsion to wear a device with sensors tightly attached to the body throughout the day for monitoring.
3. Users are already familiar with smartphones and their installed applications.
4. The smartphone's computing capabilities make collecting and distributing data for monitoring an easier task.
5. The device is easily portable and lightweight.
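As a minimal sketch of the machine-learning pipeline described above, tri-axial accelerometer signals can be segmented into fixed-length windows, summarized by simple statistics, and fed to a classifier. All names, window sizes, and the choice of a random forest here are illustrative assumptions, not the method of any cited work:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window):
    """Per-axis summary statistics for one window of (n_samples, 3) accelerometer data."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def segment(signal, win=128, step=64):
    """Split a (T, 3) signal into overlapping fixed-length windows."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

# Synthetic stand-ins for recorded traces (labels and signal shapes are hypothetical):
# motion activities show high variance, static postures show low variance.
rng = np.random.default_rng(0)
walking = rng.normal(0.0, 1.0, size=(1024, 3))
sitting = rng.normal(0.0, 0.05, size=(1024, 3))

X = np.array([extract_features(w) for sig in (walking, sitting) for w in segment(sig)])
y = np.array([lbl for lbl, sig in (("walking", walking), ("sitting", sitting))
              for _ in segment(sig)])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In practice the same windowing and feature extraction would be applied to held-out recordings from other users, since evaluating on the training data, as above, only checks that the pipeline runs.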
However, one drawback of smartphone-based techniques is that they require the device to be placed in a specific axis and orientation while recording user activities (Vavoulas et al, 2016). This has been overcome by advanced smartphones, which record a person's activities even when the phone is simply kept in a trouser pocket (Asim et al, 2020).
Smartphone-based HAR has been widely explored in the state of the art, as described in the review paper by Hernandez et al (2020), with traditional machine learning algorithms used to classify the activities. The proposed techniques recognize activities such as walking, jumping, jogging, sitting, and sleeping from data collected by the inertial sensors. The classification models are validated and evaluated using several evaluation measures (described later in this paper), such as accuracy, precision, recall, and F-measure.
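These measures can be computed as follows; the ground-truth and predicted labels below are hypothetical, and macro averaging is only one of several conventions for multi-class problems:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical true and predicted activity labels for six test windows.
y_true = ["walking", "walking", "sitting", "sitting", "jogging", "jogging"]
y_pred = ["walking", "sitting", "sitting", "sitting", "jogging", "walking"]

print("Accuracy :", accuracy_score(y_true, y_pred))  # fraction of correct predictions
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F-measure:", f1_score(y_true, y_pred, average="macro"))
```

Macro averaging computes each metric per class and takes the unweighted mean, so rare activities count as much as frequent ones; weighted or micro averaging are common alternatives when class imbalance should be reflected.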