Harry Z. Hui

I have moved to a new place and will no longer update this page.

Master of Science

Department of Electrical and Computer Engineering, Carnegie Mellon University
B200 Wing, Hamerschlag Hall, CMU, 5000 Forbes Avenue, Pittsburgh, PA 15213
 

Bachelor of Engineering


Department of Electronic and Information Engineering, The Hong Kong Polytechnic University

 

Email: zhui AT andrew.cmu.edu

My Curriculum Vitae is available here: [PDF]

Biography

I received the B.Eng. degree (First Class Honours) from the Department of Electronic and Information Engineering, The Hong Kong Polytechnic University (HKPolyU), in 2011, and then spent half a year as a Research Assistant with Prof. Kenneth K.M. Lam.

I then pursued a two-year M.Sc. at the Department of Electrical and Computer Engineering, Carnegie Mellon University (CMU). After being admitted to CMU, I served as a student assistant under Prof. Fernando De la Torre at the Human Sensing Lab, Robotics Institute, from Jan. 2012 to Oct. 2012. Since Sep. 2012, I have been conducting research with Dr. Joy Zhang on my master's project, co-supervised by Dr. Byron Yu.

My research interests mainly focus on graph/manifold-embedded models for common pattern discovery, face recognition, and image restoration.

I received the Best GPA Award for 2009-2010 and 2011-2012, and the Technical Excellence Award for my Honours Project.

Educational Background

Selected Publications

Journal Papers

Conference Papers

Selected Projects

An Empirical Study of Dimensional Reduction Techniques for Facial Action Units Detection

Biologically inspired features, such as Gabor filters, result in very high dimensional measurement. Does reducing the dimensionality of the feature space afford advantages beyond computational efficiency? Do some approaches to dimensionality reduction (DR) yield improved action unit detection? To answer these questions, we compared DR approaches in two relatively large databases of spontaneous facial behavior (45 participants in total with over 2 minutes of FACS-coded video per participant). Facial features were tracked and aligned using active appearance models (AAM). SIFT and Gabor features were extracted from local facial regions. We compared linear (PCA and KPCA), manifold (LPP and LLE), supervised (LDA and KDA) and hybrid approaches (LSDA) to DR with respect to AU detection. For further comparison, a no-DR control condition was included as well. Linear support vector machine classifiers with independent train and test sets were used for AU detection. AU detection was quantified using area under the ROC curve and F1. Baseline results for PCA with Gabor features were comparable with previous research. With some notable exceptions, DR improved AU detection relative to no-DR.
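
To make the comparison concrete, below is a minimal sketch of such an evaluation pipeline in scikit-learn. The feature matrix X and AU labels y are random stand-ins for the actual Gabor/SIFT features and FACS codes, and only DR methods with stock scikit-learn implementations are included (LPP and LSDA are not among them), so this illustrates the protocol rather than the study's actual code.

    import numpy as np
    from sklearn.decomposition import PCA, KernelPCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.svm import LinearSVC
    from sklearn.metrics import roc_auc_score, f1_score
    from sklearn.model_selection import train_test_split

    # Random stand-ins for high-dimensional Gabor/SIFT features and binary AU labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 1024))
    y = rng.integers(0, 2, size=500)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    reducers = {
        "no-DR": None,  # control condition: classify in the original feature space
        "PCA": PCA(n_components=50),
        "KPCA": KernelPCA(n_components=50, kernel="rbf"),
        "LLE": LocallyLinearEmbedding(n_components=50, n_neighbors=60),
        "LDA": LinearDiscriminantAnalysis(),  # supervised; yields 1 component for 2 classes
    }

    for name, dr in reducers.items():
        if dr is None:
            Z_tr, Z_te = X_tr, X_te
        elif name == "LDA":  # supervised DR is fitted with the labels
            Z_tr, Z_te = dr.fit_transform(X_tr, y_tr), dr.transform(X_te)
        else:
            Z_tr, Z_te = dr.fit_transform(X_tr), dr.transform(X_te)
        clf = LinearSVC().fit(Z_tr, y_tr)
        scores = clf.decision_function(Z_te)
        preds = (scores > 0).astype(int)
        print(f"{name}: AUC={roc_auc_score(y_te, scores):.3f}, F1={f1_score(y_te, preds):.3f}")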

Multi-view super-resolution

In this project, we explore various methods for restoring a particular view of a face, in order to aid real-time person-identification systems. We characterize the differences between two distinct views as the result of intensity changes and spatial displacements. Thus, a facial image can be considered a combination of a texture vector and a shape vector, which encode the differences in intensities and the spatial-displacement information, respectively. In the first stage of our algorithm, we focus on estimating the texture vectors, under the assumption that a test input and the corresponding face to be reconstructed share similar shape vectors. The texture vectors are initially estimated using a patch-based eigentransformation, through which a linear mapping relationship is established. In the second stage, we employ an optical-flow-based method to estimate the difference in shape vectors between the initially estimated result and the target high-resolution (HR) image. Having selected the training samples most similar to the input, we use optical flow to compute the pixel displacements, which are used to warp the corresponding reference samples that share the same view as the target image. The shape vectors of the initially estimated result are then refined based on the generated warped images and the derived local kernels.
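
As a concrete illustration of the first stage, below is a minimal sketch of a patch-based eigentransformation: an input patch is written as a linear combination of the training patches in its own view, and the same per-sample weights are applied to the paired target-view patches. The function name, patch dimensions, and random stand-in data are purely illustrative, not the project's actual code.

    import numpy as np

    def eigentransform_patch(x, L, H, n_components=20):
        # x: (d,) input patch; L: (d, m) training patches in the input view,
        # one per column; H: (D, m) the paired patches in the target view.
        mu_L, mu_H = L.mean(axis=1), H.mean(axis=1)
        Lc = L - mu_L[:, None]
        # PCA via the small m-by-m Gram matrix (m training samples).
        vals, V = np.linalg.eigh(Lc.T @ Lc)
        order = np.argsort(vals)[::-1][:n_components]
        vals, V = np.maximum(vals[order], 1e-10), V[:, order]
        E = Lc @ V / np.sqrt(vals)        # orthonormal eigenpatches (columns)
        c = E.T @ (x - mu_L)              # PCA coefficients of the input
        alpha = V @ (c / np.sqrt(vals))   # combination weights over the samples
        # Apply the same per-sample weights to the target-view patches.
        return (H - mu_H[:, None]) @ alpha + mu_H

    # Illustrative usage with random stand-ins for 8x8 patches:
    rng = np.random.default_rng(1)
    L = rng.normal(size=(64, 30))
    H = rng.normal(size=(64, 30))
    x_hat = eigentransform_patch(rng.normal(size=64), L, H)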

Face hallucination based on correspondence relationship

Face-hallucination techniques refer to methods that reconstruct high-resolution (HR) facial images based on prior information learned from a single face training sample or a set of such samples. They are widely used in practical face recognition and detection, as facial images captured by video cameras are often blurred and of low resolution. In our framework, we consider each image a combination of a texture vector and a shape vector, which encode the differences in pixel intensities and the displacement of each pixel with respect to a set of selected reference images, respectively. Our proposed method is therefore composed of two reconstruction stages, one for each vector. In the first stage, based on the correspondence derived between an interpolated low-resolution (LR) face and its corresponding HR face, we employ a sub-space analysis method to derive the weight of each LR training sample's contribution to an input LR face, under the assumptions that the pixels inside patches at the same position refer to the same type of facial feature, and that sub-pixel spatial displacement can be neglected if the patch size is small. The same weights are then applied to the corresponding HR patches at the same position to generate the initially estimated result. In the second stage, we use optical flow to compute the sub-pixel movement of each pixel so as to compensate for the spatial distortions in the initially estimated result. This compensation is based on a derived local kernel, which describes the spatial arrangement of pixels in a local neighborhood. Based on the retrieved similar HR faces and the assumption that an estimated local kernel should depend more on samples with a similar local structure, we characterize the expected local kernels by maximizing the likelihood of a multivariate Gaussian distribution.
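
As an illustration of the optical-flow step in the second stage, below is a minimal sketch that uses OpenCV's Farneback dense flow (a stand-in for whichever flow method the project actually employs) to warp a similar reference face onto the geometry of the initial estimate; the helper function and its inputs are hypothetical.

    import cv2
    import numpy as np

    def warp_reference(estimate, reference):
        # estimate, reference: uint8 grayscale face images of the same size.
        # Dense per-pixel flow from the initial estimate to the reference face.
        flow = cv2.calcOpticalFlowFarneback(
            estimate, reference, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        h, w = estimate.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Sample the reference at the flow-displaced coordinates, i.e. warp it
        # onto the geometry of the initial estimate.
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        return cv2.remap(reference, map_x, map_y, interpolation=cv2.INTER_LINEAR)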

Honors and Awards

Professional Services

Courses Taken

Friends & Collaborators