Gait Recognition Using Deep Learning

Chaoran Liu, Wei Qi Yan
Copyright: © 2020 | Pages: 13
DOI: 10.4018/978-1-7998-2701-6.ch011

Abstract

Gait recognition mainly uses the distinctive walking postures of each individual to perform identity authentication. In existing methods, full-cycle gait images are used for feature extraction, but problems such as occlusion and frame loss arise in real scenes, so a full-cycle gait image is not easy to obtain. Therefore, how to construct an efficient gait recognition framework from a small number of gait images, so as to improve recognition efficiency and accuracy, has become a focus of gait recognition research. In this chapter, a deep neural network, CRBM+FC, is created. Based on the fused characteristics of Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG), a method of learning gait recognition from GEI to output is proposed. A new gait recognition algorithm based on the layered fusion of LBP and HOG is proposed. This chapter also proposes a feature learning network, which uses an unsupervised convolutional restricted Boltzmann machine (CRBM) to train on Gait Energy Images (GEI).
Chapter Preview

In general, gait recognition refers to pedestrian recognition (Sarkar et al., 2005; Shiraga et al., 2016), which utilizes features extracted from the pedestrian silhouette map to identify a person (Lam et al., 2011; Murase & Sakai, 1996; Wang et al., 2004). In recent years, with the development of deep learning, such as the Mask Region-Based Convolutional Neural Network (Mask-RCNN), it has become possible to apply gait recognition to complex practical scenes (He et al., 2017; Lee & Grimson, 2002).

Different from directly feeding the gait silhouette into a deep neural network, Shiraga et al. applied Gait Energy Images (GEI) as the input feature (Tao & Maybank, 2007). A GEI is a gait template that mixes the static and dynamic information in a sequence of gait silhouettes. The energy of each pixel in the template is obtained by averaging the intensity of the corresponding silhouette pixels over one gait cycle.
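
To illustrate that per-pixel averaging step, the minimal NumPy sketch below computes a GEI from aligned binary silhouettes of one gait cycle; the gait_energy_image helper and the 128 x 88 frame size are illustrative assumptions, not details taken from the chapter.

    import numpy as np

    def gait_energy_image(silhouettes):
        """Compute a Gait Energy Image (GEI) from aligned binary silhouettes.

        `silhouettes` is an iterable of H x W arrays (0/1 or 0/255 masks)
        covering one gait cycle.  The GEI is the per-pixel mean, so static
        body parts appear bright and swinging limbs become intermediate greys.
        """
        stack = np.stack([np.asarray(s, dtype=np.float32) for s in silhouettes])
        stack /= max(stack.max(), 1.0)   # normalise 0/255 masks to [0, 1]
        return stack.mean(axis=0)        # H x W energy image

    # Toy usage: random binary frames stand in for a real, segmented cycle.
    frames = [np.random.randint(0, 2, (128, 88)) for _ in range(30)]
    gei = gait_energy_image(frames)
    print(gei.shape, gei.min(), gei.max())

In practice the silhouettes would first be segmented, size-normalized, and centered, and the cycle boundaries detected, before this averaging is applied.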

The LBP is based on the grayscale values of image pixels, while the HOG mainly uses the gradient magnitude and direction of the pixels (Dalal & Triggs, 2005). After the first round of operations (LBP coding), the result is therefore still an image with grayscale intensity variations on which gradients can be computed. In order to obtain rich texture information and edge-shape information from the grayscale images, a hierarchical fusion of LBP and HOG can be generated, as shown in Figure 2 and sketched in the code below.
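
The following scikit-image sketch shows one way to realize this layered operation: LBP is computed on the GEI, and HOG is then computed on the resulting LBP map. The layered_lbp_hog helper and the specific parameter values are assumptions for illustration, not the chapter's exact pipeline.

    import numpy as np
    from skimage.feature import local_binary_pattern, hog

    def layered_lbp_hog(gei, points=8, radius=1):
        """Layered LBP + HOG descriptor of a grayscale GEI.

        First layer: LBP encodes local texture; its output is still a
        grayscale map with intensity variation.  Second layer: HOG describes
        the gradient (edge/shape) structure of that LBP map.
        """
        lbp_map = local_binary_pattern(gei, points, radius, method="uniform")
        return hog(lbp_map,
                   orientations=9,
                   pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2),
                   block_norm="L2-Hys")

    # Example: describe a (hypothetical) 128 x 88 GEI.
    feature = layered_lbp_hog(np.random.rand(128, 88))
    print(feature.shape)

The resulting descriptor can then be fed to the CRBM+FC network or to a conventional classifier; reversing the layer order (HOG first) would not work as directly, since the HOG output is a feature vector rather than an image.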
