Engineering school and research centre in digital sciences

Multi-view dimensionality reduction for multi-modal biometrics

Zhao, Xuran


Biometric data are often represented by high-dimensional feature vectors that contain significant inter-session variation. Efficient dimensionality reduction techniques are thus needed to extract class-discriminative, low-dimensional features and to attenuate unwanted variation that is irrelevant to recognition. Such discriminative dimensionality reduction techniques generally follow a supervised learning scheme, in which a subspace projection is learned from feature-label pairs. However, labelled training data is generally limited in quantity and often does not reliably represent the inter-session variation encountered in test data. The limited size of labelled training sets often leads to biased projection matrices and degraded recognition performance.

This thesis proposes novel multi-view dimensionality reduction (MVDR) approaches which aim to extract discriminative features in multi-modal biometric systems, where different modalities are regarded as different views of the same data. Instead of training on feature-label pairs, MVDR projections are trained on feature-feature pairs, for which label information is not required. Since unlabelled data is easier to acquire in large quantities, and because multiple views naturally co-exist in multi-modal biometric problems, discriminative, low-dimensional subspaces can be learnt with the proposed MVDR approaches in a largely unsupervised manner. For three biometric system applications, namely recognition (including identification and verification), clustering, and retrieval, we propose three MVDR frameworks which meet the requirements of each functionality. The proposed approaches nonetheless share the same spirit: each method learns a projection for each view such that a certain form of agreement is attained in the subspaces estimated for the different views.
The proposed MVDR frameworks can thus be unified into one general framework for multi-view dimensionality reduction through subspace agreement. We regard this novel concept of subspace agreement as the primary contribution of this thesis.
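The thesis's specific frameworks are not reproduced here, but the core idea of learning one projection per view from unlabelled feature-feature pairs so that the projected views agree is exemplified by canonical correlation analysis (CCA), a classical two-view dimensionality reduction method. The following is a minimal NumPy sketch under that assumption; the function name and regularisation constant are illustrative, not from the thesis.

```python
import numpy as np

def cca_projections(X, Y, dim, reg=1e-6):
    """Learn one projection per view (e.g. two biometric modalities) so that
    the projected views maximally correlate -- a simple form of subspace
    agreement learned from paired features only, with no class labels.

    X: (n, dx) samples from view 1; Y: (n, dy) paired samples from view 2.
    Returns projection matrices A (dx, dim), B (dy, dim) and the canonical
    correlations of the retained dimensions.
    """
    # Centre each view.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]

    # Within-view covariances (ridge-regularised) and cross-view covariance.
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    # Whitening transforms: Wx.T @ Cxx @ Wx = I (via Cholesky factors).
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T

    # SVD of the whitened cross-covariance gives the canonical directions;
    # the singular values are the canonical correlations.
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    A = Wx @ U[:, :dim]
    B = Wy @ Vt.T[:, :dim]
    return A, B, s[:dim]
```

Applied to, say, face and speech features of the same subjects, `X @ A` and `Y @ B` land in a shared low-dimensional subspace where the two modalities agree, without any identity labels having been used.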


Title: Multi-view dimensionality reduction for multi-modal biometrics
Department: Digital Security
Eurecom ref: 4149
Copyright: © TELECOM ParisTech. Personal use of this material is permitted. The definitive version of this paper was published as a thesis and is available at:
Bibtex: @phdthesis{EURECOM+4149, year = {2013}, title = {{M}ulti-view dimensionality reduction for multi-modal biometrics}, author = {{Z}hao, {X}uran}, school = {{T}hesis}, month = {10}, url = {} }