An Overview for Non-Negative Matrix Factorization

Yu-Jin Zhang
Copyright: © 2015 | Pages: 11
DOI: 10.4018/978-1-4666-5888-2.ch156

Introduction

A fundamentally important problem in Image Engineering (IE), Pattern Recognition (PR), and Computer Vision (CV) is how to find a suitable representation of multivariate data.

In many cases, the primitive data sets or observations are organized as data matrices (or tensors) and described by linear (or multi-linear) combination models. From the algebraic perspective, dimensionality reduction can be formulated as decomposing the original data matrix into two factor matrices. Canonical methods such as principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), and vector quantization (VQ) are exemplars of such low-rank approximations. They differ from one another in the statistical properties attributable to the different constraints imposed on the component matrices and their underlying structures; however, they share one common property: no constraint is placed on the sign of the elements in the factor matrices. In other words, negative components, and hence subtractive combinations, are allowed in the representation.
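
To make this shared low-rank view concrete, here is a minimal NumPy sketch (not from the chapter; the matrix sizes and target rank are illustrative assumptions) of decomposing a data matrix into two factor matrices via truncated SVD, the decomposition underlying PCA. Note that nothing constrains the sign of the factors.

```python
# A sketch of unconstrained low-rank factorization (truncated SVD, the
# decomposition underlying PCA). Matrix sizes and rank are illustrative.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((100, 50))      # data matrix: 100 variables x 50 observations
r = 5                          # reduced dimension (target rank)

# V ~ W @ H with W (100 x r) and H (r x 50); signs are unconstrained.
U, s, Vt = np.linalg.svd(V, full_matrices=False)
W = U[:, :r] * s[:r]           # component (basis) matrix
H = Vt[:r, :]                  # coefficient matrix

print("approximation error:", np.linalg.norm(V - W @ H))
print("contains negative entries:", bool((W < 0).any() or (H < 0).any()))
```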

In contrast, Non-negative Matrix Factorization (NMF), a newer factorization paradigm, takes the opposite stance on this point. NMF is a recently developed, biologically inspired method for finding purely additive, sparse, linear, and low-dimensional representations of non-negative multivariate data, thereby making latent structures, features, or patterns in the data explicit (Lee, 1999).

NMF constrains all representation components to be non-negative (only purely additive combinations are allowed) and implements dimension reduction nonlinearly: although the combination model itself is linear, the non-negativity constraint makes the encoding a nonlinear function of the data. The psychological and physiological evidence for NMF is that perception of the whole is based on perception of its parts, which is compatible with the intuitive notion of combining parts to form a whole (Lee, 1999); NMF is therefore considered to capture, to some degree, the essence of intelligent or biological data representation.
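
The chapter does not present an algorithm at this point, but the classic multiplicative update rules of Lee and Seung are a standard way to compute such a factorization; the sketch below is a minimal illustrative implementation (matrix sizes, random initialization, and iteration count are assumptions).

```python
# A minimal sketch of NMF with the classic Lee-Seung multiplicative
# updates for the Frobenius-norm objective; sizes, initialization, and
# iteration count are illustrative assumptions.
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-10):
    """Factor non-negative V (m x n) as V ~ W @ H with W, H >= 0."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity, so the
        # learned representation stays purely additive.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(64, 30)))  # non-negative data
W, H = nmf(V, r=5)
print("reconstruction error:", np.linalg.norm(V - W @ H))
print("all factors non-negative:", bool((W >= 0).all() and (H >= 0).all()))
```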

Far beyond a mathematical exercise, the philosophy underlying NMF, which attempts to formulate a feasible model for learning object parts, is closely related to perception mechanisms. While the parts-based representation seems intuitive, it is indeed backed by physiological and psychological evidence: perception of the whole is based on perception of its parts (Paatero, 1997). In fact, non-negativity carries two complementary connotations: non-negative components and purely additive combination. On the one hand, negative values of both observations and latent components are physically meaningless in many kinds of real-world data, such as those arising in image analysis tasks. On the other hand, the discovered prototypes commonly correspond to meaningful semantic interpretations.

Besides, NMF usually produces a sparse representation of the data, which has been shown to be a useful middle ground between a completely distributed representation and a unary representation (Field, 1994). The non-negativity constraint naturally leads to a degree of sparseness (Lee, 1999), which has been shown to be a highly effective form of representation, distinct from both the completely distributed description and the single-active-component description (Field, 1994).
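
As a rough illustration of this tendency, one can simply count near-zero coefficients in the encoding produced by the nmf sketch above (the 1e-3 tolerance is an arbitrary illustrative choice, not a standard threshold):

```python
# Continuing from the nmf sketch above: count near-zero coefficients in
# the encoding H as a crude sparseness indicator (the 1e-3 tolerance is
# an arbitrary illustrative choice).
import numpy as np

def near_zero_fraction(M, tol=1e-3):
    """Fraction of entries of M that are numerically (near) zero."""
    return float(np.mean(np.abs(M) < tol))

W, H = nmf(V, r=5)
print("near-zero fraction of H:", near_zero_fraction(H))
```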

When NMF is interpreted as a neural-network learning algorithm describing how the visible variables are generated from the hidden ones, the parts-based representation follows from the additive model: a positive value indicates the presence, and a zero value the absence, of some event or component. This conforms nicely to the dualistic properties of neural activity and synaptic strengths in neurophysiology: neurons are either excitatory or inhibitory, and do not change sign (Lee, 1999).

Key Terms in this Chapter

Independent Component Analysis (ICA): A mathematical procedure that finds independent components by maximizing the statistical independence of the estimated components. It can be used to separate a multivariate signal into additive subcomponents, under the assumptions that the subcomponents are non-Gaussian and statistically independent of each other.
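
As a minimal illustration (using scikit-learn's FastICA; the two-source mixing setup is an assumption for demonstration, not from the chapter):

```python
# A minimal ICA illustration with scikit-learn's FastICA: two assumed
# non-Gaussian sources are mixed linearly, then recovered.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # two non-Gaussian sources
A = np.array([[1.0, 0.5], [0.5, 2.0]])            # assumed mixing matrix
X = S @ A.T                                       # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                      # estimated independent components
print("recovered sources shape:", S_est.shape)
```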

Pattern Recognition (PR): A broad discipline concerned with assigning a label to a given input pattern (such as an image). Pattern recognition can be performed in a supervised or unsupervised manner, and more recently in a semi-supervised manner. Typical approaches can be classified as statistical, syntactic (structural), and fuzzy approaches.

Face Recognition (FR): The task of recognizing faces from images or video. It consists of several stages: face detection (finding a face in an image), feature extraction (obtaining representative features of the face), feature comparison (matching the detected face against faces in a database), and verification (confirming that the detected face belongs to a pre-determined set) or identification (determining the identity of the detected face).

Principal Component Analysis (PCA): A mathematical procedure that uses an orthogonal transformation (by computing the eigenvectors and eigenvalues of the variance/covariance matrix) to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.
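
A minimal sketch of this procedure, assuming a small random data matrix for illustration:

```python
# A minimal PCA sketch: eigendecomposition of the covariance matrix
# yields an orthogonal transform to uncorrelated principal components
# (the random data shape is an illustrative assumption).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # 200 observations x 10 variables
Xc = X - X.mean(axis=0)               # center each variable

C = np.cov(Xc, rowvar=False)          # variance/covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # symmetric eigendecomposition
order = np.argsort(eigvals)[::-1]     # sort components by explained variance
components = eigvecs[:, order]

Z = Xc @ components                   # scores: linearly uncorrelated variables
print("components uncorrelated:",
      np.allclose(np.cov(Z, rowvar=False), np.diag(eigvals[order]), atol=1e-8))
```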

Vector Quantization (VQ): A classical quantization technique that models the probability density function by the distribution of prototype vectors. It divides a large set of points (vectors) into clusters having approximately the same number of points closest to them, each cluster being represented by its centroid point. In this way, it can capture the density of large, high-dimensional data sets such as images.
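
A common way to build such a codebook of prototype vectors is Lloyd's k-means iteration; the sketch below is a minimal illustrative version (data, codebook size, and iteration count are assumptions):

```python
# A minimal VQ sketch using Lloyd's k-means iterations to build a
# codebook of centroid prototypes (data, codebook size, and iteration
# count are illustrative assumptions).
import numpy as np

def vq(X, k, n_iter=50):
    """Quantize the rows of X onto k centroid prototype vectors."""
    rng = np.random.default_rng(0)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each vector to its nearest prototype ...
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # ... then move each prototype to the centroid of its cluster.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

X = np.random.default_rng(1).normal(size=(300, 2))
codebook, labels = vq(X, k=4)
print("codebook:\n", codebook)
```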

Linear Discriminant Analysis (LDA): A mathematical procedure that models the differences between classes of data in order to characterize or separate two or more classes of objects or events. It finds the linear combination of features that best separates the classes.
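
A minimal illustration using scikit-learn's LinearDiscriminantAnalysis on assumed toy two-class data:

```python
# A minimal LDA illustration with scikit-learn on assumed toy data:
# two Gaussian classes are projected onto the discriminant direction.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(50, 3)),
               rng.normal(2, 1, size=(50, 3))])  # two Gaussian classes
y = np.array([0] * 50 + [1] * 50)

lda = LinearDiscriminantAnalysis()
Z = lda.fit_transform(X, y)  # 1-D projection separating the two classes
print("class means in projected space:", Z[y == 0].mean(), Z[y == 1].mean())
```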
