Semantic Question-Answering with Video and Eye-Tracking Data: AI Foundations for Human Visual Perception Driven Cognitive Film Studies

2019-11-25
Abstract: We present a computational framework for the grounding and semantic interpretation of dynamic visuo-spatial imagery consisting of video and eye-tracking data. Driven by cognitive film studies and visual perception research, we demonstrate key technological capabilities aimed at investigating attention and recipient effects vis-à-vis the motion picture; this encompasses high-level analysis of subjects' visual fixation patterns and their correlation with (deep) semantic analysis of the dynamic visual data (e.g., fixation on movie characters, influence of cinematographic devices such as cuts). The framework and its application as a general AI-based assistive technology platform (integrating vision and KR) for cognitive film studies are highlighted.
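
As a purely illustrative sketch of the kind of correlation the abstract describes, the Python snippet below aligns eye-tracking fixations with per-frame character detections and accumulates gaze dwell time per movie character. The record types, field names, and the attention_per_character function are assumptions made for this example only; they are not the paper's actual data model or API.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple
from collections import Counter

# Hypothetical record types; the paper's real representation is not specified here.

@dataclass
class Fixation:
    """A single gaze fixation from the eye tracker, aligned to a video frame."""
    frame: int          # video frame index the fixation falls on
    x: float            # gaze x-coordinate in pixels
    y: float            # gaze y-coordinate in pixels
    duration_ms: float  # fixation duration in milliseconds

@dataclass
class CharacterBox:
    """A detected movie character's bounding box in one frame."""
    frame: int
    name: str
    box: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def attention_per_character(fixations: List[Fixation],
                            detections: List[CharacterBox]) -> Dict[str, float]:
    """Sum the durations of fixations that land inside each character's box,
    giving a crude 'who was looked at, and for how long' summary."""
    boxes_by_frame: Dict[int, List[CharacterBox]] = {}
    for det in detections:
        boxes_by_frame.setdefault(det.frame, []).append(det)

    dwell: Counter = Counter()
    for fix in fixations:
        for det in boxes_by_frame.get(fix.frame, []):
            x_min, y_min, x_max, y_max = det.box
            if x_min <= fix.x <= x_max and y_min <= fix.y <= y_max:
                dwell[det.name] += fix.duration_ms
    return dict(dwell)

if __name__ == "__main__":
    fixations = [Fixation(frame=10, x=320, y=200, duration_ms=180),
                 Fixation(frame=10, x=600, y=220, duration_ms=240)]
    detections = [CharacterBox(frame=10, name="character_A", box=(280, 150, 380, 300)),
                  CharacterBox(frame=10, name="character_B", box=(550, 160, 660, 310))]
    print(attention_per_character(fixations, detections))
    # -> {'character_A': 180, 'character_B': 240}
```

The per-character dwell-time dictionary produced here is only a simple stand-in for the higher-level attention and recipient-effect analyses the paper targets; the intent is to show how fixation data and semantic annotations of the visual stream can be brought into a common frame-indexed structure.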

