An Introduction to Principal Component Analysis and Its Applications

Vaibhav Chaudhari, Ankur Dumka
DOI: 10.4018/978-1-6684-5849-5.ch017

Abstract

Huge datasets are increasingly common and are often hard to interpret. Principal component analysis (PCA) reduces the dimensionality of such datasets, increasing interpretability while minimizing information loss. It does so by constructing uncorrelated variables that successively maximize variance. Because PCA finds eigenvalues and eigenvectors, and the new variables are defined by the dataset at hand, it is a versatile data analysis technique. This chapter explains the working of PCA in detail, with mathematical proofs and derivations, and discusses its advantages and disadvantages.
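The variance-maximizing projection the abstract describes can be sketched in a few lines of numpy. This is a minimal illustration on synthetic data, not code from the chapter; the dataset, seed, and the choice of two retained components are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy dataset: 200 samples of 3 features, with the third feature
# nearly a linear copy of the first (so one dimension is redundant).
X = rng.normal(size=(200, 3))
X[:, 2] = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)

# 1. Center the data so the covariance matrix is meaningful.
Xc = X - X.mean(axis=0)

# 2. Eigendecompose the covariance matrix: eigenvectors are the
#    principal components, eigenvalues the variance each captures.
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]            # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 3. Project onto the top-k components: 3 dimensions reduced to 2.
k = 2
scores = Xc @ eigvecs[:, :k]

explained = eigvals / eigvals.sum()
print(scores.shape)          # low-dimensional representation
print(explained[:k].sum())   # fraction of total variance retained
```

Because the third feature is almost redundant, two components retain nearly all of the variance, which is exactly the "minimize information loss" trade-off the abstract refers to.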

Literature Review

Principal component analysis reduces the dimensionality of a dataset by projecting each data point onto a small number of principal components, yielding low-dimensional data while preserving as much of the variation in the data as possible. PCA finds applications in many areas.

Neural Network

Bartecki K. (2012) uses PCA for neural networks. His research surveys neural network implementations and algorithms for PCA and its various extensions, including minor component analysis (MCA), generalized EVD, constrained PCA, two-dimensional methods, localized methods, complex-domain methods, and SVD. He explains the decompositions underlying PCA, namely eigenvalue decomposition (EVD) and singular value decomposition (SVD). Minor component analysis (MCA) is a variant of PCA for solving total least squares (TLS) problems.
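The two decompositions mentioned above, EVD and SVD, are interchangeable routes to the same principal components. A small numpy check (toy data; the random matrix and seed are assumptions for illustration) shows that the squared singular values of the centered data matrix, scaled by 1/(n-1), equal the eigenvalues of its covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
Xc = X - X.mean(axis=0)                      # center the data

# Route 1: EVD of the covariance matrix (eigh returns ascending order).
eigvals = np.linalg.eigh(np.cov(Xc, rowvar=False))[0][::-1]

# Route 2: SVD of the centered data matrix itself.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
svd_vals = s**2 / (len(Xc) - 1)              # singular values -> variances

# Both routes recover the same per-component variances.
print(np.allclose(eigvals, svd_vals))
```

In practice the SVD route is often preferred numerically, since it never forms the covariance matrix explicitly.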

Wang M. et al. (2015) discuss a nonlinear principal component analysis (NL-PCA) network designed to convert high-dimensional spatiotemporal data into a low-dimensional time domain, which represents the nonlinearity of the system better than linear time/space separation methods. Hybrid neural network models are then built to identify the low-dimensional temporal data.

Face Recognition

Dakui W. et al. (2013) discuss the application of PCA to face recognition by integrating data fields with PCA. In this approach, features are first extracted from facial images using a data field, and the faces are then recognized by means of PCA.

Zheng et al. (2005) develop genetic algorithm PCA (GA-PCA), which uses a genetic algorithm to select the eigenvectors to be used in linear discriminant analysis (LDA); the GA-PCA step reduces the dimensionality. They also design the GA-Fisher method, which optimizes the bases for dimensionality reduction derived from GA-PCA and improves the computational efficiency of LDA by adding a whitening step after dimension reduction.
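The whitening step mentioned above rescales each retained component by the inverse square root of its eigenvalue, so the reduced features have unit variance and identity covariance. A generic numpy sketch of PCA followed by whitening (toy data, not the GA-Fisher implementation; the seed, dimensions, and k are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# Correlated 5-dimensional toy data.
X = rng.normal(size=(150, 5)) @ rng.normal(size=(5, 5))
Xc = X - X.mean(axis=0)

# Ordinary PCA reduction to k components.
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
k = 3
scores = Xc @ eigvecs[:, :k]

# Whitening: divide each component by sqrt(eigenvalue) so the reduced
# features are decorrelated AND have unit variance (identity covariance).
white = scores / np.sqrt(eigvals[:k])

cov_white = np.cov(white, rowvar=False)
print(np.allclose(cov_white, np.eye(k), atol=1e-6))
```

Whitening of this kind is a common preprocessing step before LDA, because equalizing the component variances conditions the subsequent scatter-matrix computations.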

Liu C. (2004) discusses the application of PCA to face recognition. He proposes a Gabor-based kernel principal component analysis (kernel PCA) method that integrates the Gabor wavelet representation of face images with kernel PCA. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity, which cope with variations due to illumination and facial expression changes. The kernel PCA method is then extended with fractional-power polynomial models to enhance face recognition performance.
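The kernel PCA step used in such methods can be sketched in numpy. This is a minimal generic kernel PCA with an ordinary cubic polynomial kernel on random stand-in features, not Liu's Gabor pipeline or his fractional-power variant; the data, kernel degree, and number of components are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 4))          # stand-in for extracted image features
n = len(X)

# Polynomial kernel of degree d (Liu's variant raises this same form
# to a fractional power; here d is an ordinary integer).
d = 3
K = (X @ X.T + 1.0) ** d

# Center the kernel matrix in feature space.
one = np.ones((n, n)) / n
Kc = K - one @ K - K @ one + one @ K @ one

# Eigendecompose the centered kernel matrix and keep the top-k directions.
eigvals, eigvecs = np.linalg.eigh(Kc)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2
alphas = eigvecs[:, :k] / np.sqrt(eigvals[:k])   # normalize expansion coefficients
projections = Kc @ alphas                         # nonlinear principal components

print(projections.shape)
```

The key difference from linear PCA is that the eigenproblem is posed on the n-by-n kernel matrix rather than the covariance matrix, so nonlinear structure in the features can be captured without ever computing the high-dimensional feature map explicitly.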
