Title
Semantic 3D Mapping from Deep Image Segmentation
Date Issued
02 February 2021
Access level
open access
Resource Type
research article
Author(s)
Martín F.
González F.
Guerrero J.M.
Ginés J.
Rey Juan Carlos University
Abstract
The perception and identification of visual stimuli from the environment is a fundamental capacity of autonomous mobile robots. Current deep learning techniques make it possible to identify and segment objects of interest in an image. This paper presents a novel algorithm to segment the object’s space from a deep segmentation of an image taken by a 3D camera. The proposed approach solves the boundary pixel problem that appears when a direct mapping from segmented pixels to their correspondences in the point cloud is used. We validate our approach by comparing it against baseline approaches on real images taken by a 3D camera, showing that our method outperforms them in terms of accuracy and reliability. As an application of the proposed algorithm, we present a semantic mapping approach for a mobile robot in indoor environments.
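For context, the "direct mapping" the abstract refers to, in which each segmented pixel is back-projected through the camera intrinsics into a 3D point of the cloud, can be sketched as below. This is a minimal illustration of that baseline idea, not the paper's algorithm; the function name, intrinsics, and synthetic data are assumptions. The boundary pixel problem arises because pixels on a segment's edge may carry depth readings from the background, so this naive mapping mislabels those points.

```python
import numpy as np

def segmented_points(depth, mask, fx, fy, cx, cy):
    """Back-project the masked pixels of a depth image into 3D points
    in the camera frame, using the standard pinhole model."""
    v, u = np.nonzero(mask)           # pixel rows/cols inside the segment
    z = depth[v, u]
    valid = z > 0                     # drop pixels with no depth reading
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx             # pinhole back-projection
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))

# Tiny synthetic example: a 4x4 depth image at 2 m, with a 2x2 segment mask.
depth = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
pts = segmented_points(depth, mask, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(pts.shape)  # one 3D point per masked pixel with valid depth
```

A boundary-aware method such as the one the paper proposes would additionally reject or reassign edge pixels whose depth disagrees with the segment's interior, rather than trusting every masked pixel as above.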
Start page
1
End page
15
Volume
11
Issue
4
Language
English
OECD Knowledge area
Robotics, Automatic control
Scopus EID
2-s2.0-85102110873
Source
Applied Sciences (Switzerland)
Sponsor(s)
Funding: This work was supported by the EU-funded projects RobMoSys ITP MROS under Grant Agreement No. 732410.
Sources of information: Directorio de Producción Científica Scopus