Title
Learning attributes from human gaze
Date Issued
11 May 2017
Access level
Metadata-only access
Resource Type
conference paper
Author(s)
Publisher(s)
Institute of Electrical and Electronics Engineers Inc.
Abstract
While semantic visual attributes have been shown to be useful for a variety of tasks, many attributes are difficult to model computationally. One reason for this difficulty is that it is not clear where in an image the attribute lives. We propose to tackle this problem by involving humans more directly in the process of learning an attribute model. We ask humans to examine a set of images to determine whether a given attribute is present, and we record where they looked. We create gaze maps for each attribute and use these gaze maps to improve attribute prediction models. Gaze maps are not available for test images, so we predict them using models learned from the collected gaze maps for each attribute of interest. Compared to six baselines, we improve prediction accuracies on attributes of faces and shoes, and we show how our method might be adapted for scene images. We demonstrate additional uses of our gaze maps for visualizing attribute models and discovering 'schools of thought' among users in terms of their understanding of the attribute.
Start page
510
End page
519
Language
English
OCDE Knowledge area
Computer science
Scopus EID
2-s2.0-85020229200
ISBN
9781509048229
Source
Proceedings - 2017 IEEE Winter Conference on Applications of Computer Vision, WACV 2017
Resource of which it is part
Proceedings - 2017 IEEE Winter Conference on Applications of Computer Vision, WACV 2017
Conference
17th IEEE Winter Conference on Applications of Computer Vision, WACV 2017
Source of information: Directorio de Producción Científica Scopus