Learning Deep Features for Hierarchical Classification of Mobile Phone Face Datasets in Heterogeneous Environments

Thirimachos Bourlai
West Virginia University (MILab), West Virginia, United States

Keywords: Biometrics, Face Recognition, Mobile Application

The poster presents a convolutional neural network (CNN) based, scenario-dependent and sensor (mobile device) adaptable hierarchical classification framework [1]. Our proposed framework automatically categorizes face data captured under various challenging conditions before the face recognition (FR) algorithms (pre-processing, feature extraction, and matching) are applied. First, a unique multi-phone database is collected, containing face images captured indoors and outdoors, at multiple poses, and at distances of up to 10 m. To cope with pose variations, face detection and pose estimation algorithms are used to classify the facial images into a frontal or a non-frontal class. Next, our proposed framework performs tri-level hierarchical classification as follows: at Level 1, face images are classified by phone type; at Level 2, they are further classified into indoor and outdoor images; and at Level 3, they are classified into a close-distance (1 m) or a far, low-quality (10 m) category. Experimental results show that classification accuracy can reach 99%.

[1] N. Narang, M. Martin, D. Metaxas, T. Bourlai, "Learning Deep Features for Hierarchical Classification of Mobile Phone Face Datasets in Heterogeneous Environments", 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), June 2017.
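
Below is a minimal, hedged sketch (in PyTorch) of how the tri-level routing described in the abstract could be organized, assuming one small CNN per node of the hierarchy; the phone labels, class sets, and network sizes are illustrative placeholders, not the architecture used in [1].

# Minimal sketch of the tri-level hierarchical routing (illustrative only).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Tiny CNN classifier standing in for the deep-feature model at each node."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

PHONES = ["phone_A", "phone_B", "phone_C"]   # assumed device set
ENVS   = ["indoor", "outdoor"]
DISTS  = ["close_1m", "far_10m"]

# Level 1: one classifier over phone types.
level1 = SmallCNN(len(PHONES))
# Level 2: one indoor/outdoor classifier per phone type.
level2 = {p: SmallCNN(len(ENVS)) for p in PHONES}
# Level 3: one close/far classifier per (phone, environment) branch.
level3 = {(p, e): SmallCNN(len(DISTS)) for p in PHONES for e in ENVS}

def classify_hierarchically(face_crop):
    """Route a frontal face crop down the tree; return the leaf category."""
    phone = PHONES[level1(face_crop).argmax(1).item()]
    env   = ENVS[level2[phone](face_crop).argmax(1).item()]
    dist  = DISTS[(level3[(phone, env)](face_crop)).argmax(1).item()]
    return phone, env, dist

# Usage with a dummy 128x128 RGB face crop:
print(classify_hierarchically(torch.randn(1, 3, 128, 128)))

In such a cascade, each node's classifier would be trained only on the face crops that reach it, so that, for example, the indoor/outdoor model for a given phone type is tuned to that sensor's characteristics.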