Resource Paper: A Computational Model for the Alignment of Hierarchical Scene Representations in Human-Robot Interaction


Abstract
The ultimate goal of human-robot interaction is to enable the robot to communicate seamlessly with a human in a natural, human-like fashion. Most work in this field concentrates on the speech interpretation and gesture recognition side, assuming that a propositional scene representation is available. Less work has been dedicated to extracting the relevant scene structures that underlie these propositions. As a consequence, most approaches are restricted to place recognition or simple tabletop settings and do not generalize to more complex room setups. In this paper, we propose a hierarchical spatial model that is empirically motivated by psycholinguistic studies. Using this model, the robot is able to extract scene structures from a time-of-flight depth sensor and adjust its spatial scene representation by taking verbal statements about partial scene aspects into account. Without assuming any prior model of the specific room, we show that the system aligns its sensor-based room representation with a semantically meaningful representation typically used by the human describing the scene.
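The abstract only outlines the idea at a high level. As a purely illustrative aid, the minimal Python sketch below shows one hypothetical way such an alignment step could look: the depth data is assumed to be already segmented into indexed patches organized in a hierarchy, and the verbal statement is assumed to be already parsed and grounded into patch-to-label pairs. The SceneNode class, the align_labels function, and the id-to-label grounding are assumptions for illustration only, not the model proposed in the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SceneNode:
    """Hypothetical node of a hierarchical scene representation
    (e.g. room -> surface patches extracted from the depth sensor)."""
    node_id: int                        # index of a segmented sensor patch
    label: Optional[str] = None         # semantic label, unknown until aligned
    children: List["SceneNode"] = field(default_factory=list)

def align_labels(root: SceneNode, statement: Dict[int, str]) -> None:
    """Propagate labels from a parsed, grounded verbal statement
    (patch id -> human term) onto the matching nodes of the hierarchy."""
    stack = [root]
    while stack:
        node = stack.pop()
        if node.node_id in statement:
            node.label = statement[node.node_id]   # adopt the human's term
        stack.extend(node.children)

# Usage: two surface patches grouped under a room node, then aligned
# with a statement that names what the patches are.
room = SceneNode(0, "room", [SceneNode(3), SceneNode(7)])
align_labels(room, {3: "table", 7: "shelf"})
print([(c.node_id, c.label) for c in room.children])  # [(3, 'table'), (7, 'shelf')]
```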
