Guide Manifold Alignment by Relative Comparisons

Liang Xiong
Copyright © 2009 | Pages: 7
DOI: 10.4018/978-1-60566-010-3.ch148


Introduction

When faced with data, one common task is to learn the correspondence between different data sets. More concretely, by learning data correspondences, we can discover samples that share similar intrinsic parameters, which are often hard to estimate directly. For example, given face image data, an alignment algorithm can find images of two different people with similar poses or expressions. We call this technique data alignment. Besides its use in data analysis and visualization, the problem has wide potential applications in various fields. For instance, in facial expression recognition, one may have a set of standard labeled images of a particular person with known expressions, such as happiness, sadness, surprise, anger, and fear. We can then recognize the expressions of another person simply by aligning his or her facial images to the standard image set. The technique also applies directly to pose estimation; see (Ham, Lee & Saul, 2005) for more details.

Although intuitive, this alignment problem can be very difficult without any premise. The samples are usually distributed in high-dimensional observation spaces, and the relation between the features and the samples' intrinsic parameters can be too complex to model explicitly. Therefore, some hypotheses about the data distribution are needed. In recent years, the manifold assumption about data distribution has become very popular in data mining and machine learning. Researchers have realized that in many applications the samples of interest are actually confined to particular subspaces embedded in the high-dimensional feature space (Seung & Lee, 2000; Roweis & Saul, 2000). Intuitively, the manifold assumption means that certain groups of samples lie in a non-linear, low-dimensional subspace embedded in the observation space. This assumption has been shown to play an important role in human perception (Seung & Lee, 2000), and many effective algorithms have been developed under it in recent years. Under the manifold assumption, the structural information of the data can be used to facilitate alignment. Figure 1 illustrates two 1-D manifolds embedded in a 2-D plane.

Figure 1. Two 1-D manifolds embedded in a 2-D plane
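Data of the kind sketched in Figure 1 can be simulated with a toy example (hypothetical, not the chapter's data): two curves that share the same intrinsic parameter but occupy different regions of the observation space.

```python
import numpy as np

# Two 1-D manifolds (curves) embedded in a 2-D plane. Both are
# driven by the same intrinsic parameter t, so points with the
# same index correspond, even though their observed coordinates
# differ substantially.
t = np.linspace(0.0, 1.0, 50)                        # shared intrinsic parameter
curve_a = np.stack([t, np.sin(2 * np.pi * t)], axis=1)
curve_b = np.stack([t + 2.0, 0.5 * np.cos(2 * np.pi * t)], axis=1)

print(curve_a.shape, curve_b.shape)                  # (50, 2) (50, 2)
```

Each row of `curve_a` and `curve_b` is one observed 2-D sample; the alignment task is to recover the index-wise correspondence from the observations alone.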

Moreover, since we do not know the relationship between the observations and the underlying parameters, additional supervision is needed to guide the alignment. Typically, we require information about a subset of samples, together with the structure of the whole data set, in order to make inferences about the remaining samples; the alignment is therefore often performed in a semi-supervised way. The most common form of supervision is pairwise correspondence, which specifies two samples that share the same parameters, as shown in Figure 2.

Figure 2. Guiding the alignment of two manifolds by pairwise correspondence. Each black line represents a supervision constraint indicating two samples with the same underlying parameters.
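A minimal sketch of how such pairwise correspondences can be exploited (a least-squares affine fit on toy data — deliberately simpler than the semi-supervised manifold methods the chapter discusses): a handful of supervised pairs determine a map that aligns one data set onto the other.

```python
import numpy as np

# Toy illustration (not the chapter's algorithm): manifold B is an
# unknown rotated/translated copy of manifold A. A few pairwise
# correspondences suffice to fit an affine map B -> A by least squares.
t = np.linspace(0.0, 1.0, 50)                        # shared intrinsic parameter
A = np.stack([t, np.sin(2 * np.pi * t)], axis=1)
B = A @ np.array([[0.0, -1.0], [1.0, 0.0]]) + np.array([3.0, 1.0])

pairs = [0, 10, 25, 40, 49]                          # supervised correspondences
X = np.hstack([B[pairs], np.ones((len(pairs), 1))])  # homogeneous coordinates
W, *_ = np.linalg.lstsq(X, A[pairs], rcond=None)     # affine map B -> A

B_aligned = np.hstack([B, np.ones((len(B), 1))]) @ W
err = np.abs(B_aligned - A).max()
print(f"max alignment error: {err:.2e}")
```

Because B is an exact affine image of A here, five correspondences recover the map essentially exactly; on real manifold data the map is non-linear and the graph-based semi-supervised methods referenced in the text are needed instead.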

Take facial expression alignment as an example. In this task we are given face images of two different people, A and B, and the problem is to find two images, one from A and one from B, with similar facial expressions. Template matching in the feature space is not feasible here, since different faces may have very different appearances. Directly estimating the expression parameters is also difficult, given the limitations of our knowledge and the variability of the data. So we assume that the images from each person lie on a low-dimensional manifold. The problem now seems easier, because we are dealing with two structures, such as two curves, instead of discrete points. However, we still do not know how these two structures should be aligned, so supervision is needed to tell us how they correspond.
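The failure of template matching can be made concrete with the same kind of toy data (a hypothetical stand-in for face images): the two "persons" trace the same intrinsic parameter, but a raw nearest-neighbour search in feature space pairs up the wrong samples.

```python
import numpy as np

# Toy stand-in for two persons' face images: both curves are driven
# by the same intrinsic (expression) parameter t, but they live in
# very different regions of the 2-D feature space.
t = np.linspace(0.0, 1.0, 50)
A = np.stack([t, np.sin(2 * np.pi * t)], axis=1)
B = np.stack([t + 3.0, 0.5 * np.cos(2 * np.pi * t)], axis=1)

query = 20                         # B-image whose true counterpart is A[20]
raw_match = np.argmin(np.linalg.norm(A - B[query], axis=1))

# Raw feature-space matching just picks whichever A-point happens to
# be geometrically closest, not the one with the same parameter.
print(raw_match)
```

Here the nearest neighbour in feature space is not index 20, which is exactly why the structure of each manifold plus correspondence supervision, rather than direct template matching, is needed to align the two image sets.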
