Helen dataset

1.    Description

(excerpt from the paper)


In our effort to build a facial feature localization algorithm that can operate reliably and accurately under a broad range of appearance variation, including pose, lighting, expression, occlusion, and individual differences, we realized that it is necessary for the training set to include high-resolution examples so that, at test time, a high-resolution test image can be fit accurately.  Although a number of face databases exist, we found none that meet our requirements, particularly the resolution requirement.  Consequently, we constructed a new dataset using annotated Flickr images.


Specifically, the dataset was constructed as follows.  First, a large set of candidate photos was gathered using a variety of keyword searches on Flickr.  In all cases the query included the keyword "portrait" and was augmented with different terms such as "family", "outdoor", "studio", "boy", "wedding", etc.  (An attempt was made to avoid cultural bias by repeating the queries in several different languages.)  A face detector was then run on the candidate set to identify a subset of images containing sufficiently large faces (greater than 500 pixels in width).  The subset was further filtered by hand to remove false positives, profile views, and low-quality images.  For each accepted face, we generated a cropped version of the original image that includes the face and a proportional amount of background.  In some cases, the face is very close to, or in contact with, the edge of the original image and is consequently not centered in the cropped image.  Also, a cropped image can contain other face instances, since many photos contain more than one person in close proximity.
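The filtering-and-cropping step lends itself to a short code illustration.  The sketch below is not the authors' implementation: the paper does not name the face detector or the exact crop margin, so the OpenCV Haar cascade and the 50% margin here are assumptions made purely for illustration.

import cv2

MIN_FACE_WIDTH = 500   # filtering threshold from the description above
MARGIN_RATIO = 0.5     # hypothetical "proportional amount of background"

# The Haar cascade is an illustrative stand-in; the paper's detector is unspecified.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_large_faces(image_path):
    """Return a crop for every detected face wider than MIN_FACE_WIDTH."""
    img = cv2.imread(image_path)
    if img is None:
        return []
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    crops = []
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        if w < MIN_FACE_WIDTH:
            continue   # candidate face is too small; discard
        m = int(w * MARGIN_RATIO)
        # Clamp the margin to the image border; faces near the edge
        # therefore end up off-center in the crop, as noted above.
        x0, y0 = max(0, x - m), max(0, y - m)
        x1, y1 = min(img.shape[1], x + w + m), min(img.shape[0], y + h + m)
        crops.append(img[y0:y1, x0:x1])
    return crops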


Finally, the images were hand-annotated using Amazon Mechanical Turk to precisely locate the eyes, nose, mouth, eyebrows, and jawline.  (We adopted the same annotation convention as the PUT Face Database.)  To assist the Turk workers in this task, we initialized the point locations to the result of the STASM algorithm trained on the PUT database.  However, since the Helen dataset is much more diverse than PUT, the automatically initialized points were often far from the correct locations.


In any case, we found that this particular annotation task required an unusual amount of review and post-processing to ensure high-quality results, which is ultimately attributable to the high number of degrees of freedom involved.  For example, a Turk worker would frequently permute components (swap eyes and brows, or inner lip for outer lip), or shift the positions of the points enough that their roles changed (such as selecting a different vertex to serve as an eye or mouth corner).  Graphical cues in the interface, as well as a training video and a qualifying test, were employed to assist with the process, and automated processes were developed to enforce consistency and uniformity in the dataset.  In addition, the authors manually reviewed the faces at the component level to identify annotation errors; components with unacceptable error were resubmitted to the Turk for correction.
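The paper does not publish the automated consistency checks themselves, but the kind of rule involved is easy to picture.  The sketch below is a hypothetical example of such a check: it assumes each annotated face is given as a dict of named components, and flags swapped eyes/brows and permuted lip contours.

import numpy as np

def check_face(parts):
    """parts: dict mapping component name -> (N, 2) array of (x, y) points.
    Returns a list of human-readable problems (empty if none found)."""
    problems = []
    c = {k: v.mean(axis=0) for k, v in parts.items()}  # component centroids

    # Eyes must not be swapped: in image coordinates of a non-mirrored photo,
    # the subject's right eye appears to the left of the left eye.
    if c["right_eye"][0] >= c["left_eye"][0]:
        problems.append("eyes appear swapped")
    # Brows sit above (smaller y than) their eyes.
    for side in ("left", "right"):
        if c[f"{side}_brow"][1] >= c[f"{side}_eye"][1]:
            problems.append(f"{side} brow below {side} eye")
    # The inner-lip contour must lie inside the outer-lip bounding box.
    lo, hi = parts["outer_lip"].min(axis=0), parts["outer_lip"].max(axis=0)
    if (parts["inner_lip"] < lo).any() or (parts["inner_lip"] > hi).any():
        problems.append("inner/outer lip possibly permuted")
    return problems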


The resulting dataset consists of 2000 training and 330 test images with highly accurate, detailed, and consistent annotations of the primary facial components.  A sampling of the dataset is depicted in the next section.

2.    Sample

[Sample images from the dataset: samples/3.jpg, 4.jpg, 5.jpg, 6.jpg, 7.jpg, 8.jpg, 9.jpg, 12.jpg, 13.jpg, 14.jpg]

3.    Download

a.     All Images

Part 1 - Part 2 - Part 3 - Part 4 -  Part 5


b.    Training and testing selection used in our experiments

Training names - Testing names

Test images - Train images part 1 - Train images part 2 - Train images part 3 - Train images part 4


c.    Annotation

All faces
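For readers writing a loader, a minimal parsing sketch follows.  It assumes the distributed annotation format of one text file per face, with the image name on the first line and one "x , y" coordinate pair per subsequent line (194 points in all); verify against the actual files before relying on it.

import numpy as np

def load_annotation(path):
    """Read one annotation file: image name, then a (194, 2) point array."""
    with open(path) as f:
        name = f.readline().strip()
        points = np.array([[float(v) for v in line.split(",")]
                           for line in f if line.strip()])
    return name, points

# Illustrative usage; the file name here is hypothetical.
name, pts = load_annotation("annotation/1.txt")
print(name, pts.shape)   # expected: (194, 2)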


4.    Reference

Interactive Facial Feature Localization

Vuong Le, Jonathan Brandt, Zhe Lin, Lubomir Bourdev, Thomas S. Huang

ECCV 2012


5.    Additional information about the project

a.     Interactive localization animation

b.    Spotlight video


6.    Contact

Vuong Le

vuongle2@gmail.com

vuongle2@illinois.edu