Title
Multimodal Human Action Recognition Based on a Fusion of Dynamic Images Using CNN Descriptors
Date Issued
15 January 2019
Access level
metadata only access
Resource Type
conference paper
Author(s)
Universidade Federal de Ouro Preto
Universidade Federal de Ouro Preto
Publisher(s)
Institute of Electrical and Electronics Engineers Inc.
Abstract
In this paper, we propose a dynamic-image-based approach for action recognition. Specifically, we exploit the multimodal information recorded by a Kinect sensor (RGB-D and skeleton joint data). We combine ideas from rank pooling and skeleton optical spectra to generate dynamic images that summarize an action sequence into single flow images. We group our dynamic images into five groups: a dynamic color group (DC); a dynamic depth group (DD); and three dynamic skeleton groups (DXY, DYZ, DXZ). Since an action is composed of different postures over time, we generate N different dynamic images capturing the main postures for each dynamic group. Next, we apply a pre-trained flow-CNN to extract spatiotemporal features with max-mean aggregation. The proposed method was evaluated on a public benchmark dataset, the UTD-MHAD, and achieved state-of-the-art results.
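The pipeline the abstract describes — collapsing a frame sequence into a dynamic image, then aggregating per-image CNN descriptors with max-mean pooling — might be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame weighting uses the closed-form approximate rank pooling of Bilen et al., and both function names and the aggregation details are assumptions.

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a frame sequence into one dynamic image via approximate
    rank pooling: frame t of T gets the closed-form weight 2t - T - 1,
    so later frames contribute positively and earlier ones negatively."""
    T = len(frames)
    alphas = np.array([2 * (t + 1) - T - 1 for t in range(T)], dtype=np.float32)
    di = np.tensordot(alphas, np.asarray(frames, dtype=np.float32), axes=1)
    # Rescale to [0, 255] so the result can feed a pre-trained CNN.
    di -= di.min()
    if di.max() > 0:
        di /= di.max()
    return (di * 255).astype(np.uint8)

def max_mean_aggregate(descriptors):
    """Max-mean aggregation (as assumed here): concatenate the
    element-wise max and mean of the N per-image CNN descriptors."""
    d = np.asarray(descriptors, dtype=np.float32)
    return np.concatenate([d.max(axis=0), d.mean(axis=0)])
```

In use, an action sequence would be split into N sub-sequences, `dynamic_image` applied per sub-sequence within each modality group (DC, DD, DXY, DYZ, DXZ), and the resulting CNN descriptors fused with `max_mean_aggregate` before classification.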
Start page
95
End page
102
Language
English
OCDE Knowledge area
Radiology, Nuclear Medicine, Medical Imaging
Scopus EID
2-s2.0-85062229311
Resource of which it is part
Proceedings - 31st Conference on Graphics, Patterns and Images, SIBGRAPI 2018
ISBN of the container
978-1-5386-9264-6
Conference
31st Conference on Graphics, Patterns and Images, SIBGRAPI 2018
Sponsor(s)
The authors thank the Graduate Program in Computer Science (PPGCC) at the Federal University of Ouro Preto (UFOP), the Coordination for the Improvement of Higher Education Personnel (CAPES), and the Brazilian funding agency CNPq.
Sources of information: Directorio de Producción Científica Scopus