
Jointly Learning Energy Expenditures and Activities using Egocentric Multimodal Signals

Abstract: Physiological signals such as heart rate can provide valuable information about an individual's state and activity. However, existing work in computer vision has not yet explored leveraging these signals to enhance egocentric video understanding. In this work, we propose a model for reasoning on multimodal data to jointly predict activities and energy expenditures. We use heart rate signals as privileged self-supervision to derive energy expenditure in a training stage. A multitask objective is used to jointly optimize the two tasks. Additionally, we introduce a dataset that contains 31 hours of egocentric video augmented with heart rate and acceleration signals. This study can lead to new applications such as a visual calorie counter.
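The abstract describes a multitask objective that jointly optimizes activity recognition and energy-expenditure estimation, with heart rate providing privileged supervision for the energy target during training. Below is a minimal PyTorch sketch of such a joint objective; the shared encoder, layer sizes, head names, number of activity classes, and the loss weight `alpha` are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Illustrative multitask network: a shared encoder feeds an activity
    classifier and an energy-expenditure regressor (dimensions are assumed)."""
    def __init__(self, feat_dim=512, num_activities=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.activity_head = nn.Linear(256, num_activities)  # activity logits
        self.energy_head = nn.Linear(256, 1)                  # scalar energy expenditure

    def forward(self, x):
        h = self.encoder(x)
        return self.activity_head(h), self.energy_head(h).squeeze(-1)

def multitask_loss(act_logits, energy_pred, act_labels, energy_target, alpha=1.0):
    """Joint objective: cross-entropy on activities plus a regression loss on
    energy-expenditure targets derived from heart rate during training."""
    ce = nn.functional.cross_entropy(act_logits, act_labels)
    mse = nn.functional.mse_loss(energy_pred, energy_target)
    return ce + alpha * mse

# Toy usage with random features standing in for multimodal embeddings.
model = MultiTaskModel()
x = torch.randn(8, 512)
act_labels = torch.randint(0, 20, (8,))
energy_target = torch.rand(8) * 10  # pseudo-labels derived from heart rate
logits, energy = model(x)
loss = multitask_loss(logits, energy, act_labels, energy_target)
loss.backward()
```

At inference time the heart-rate-derived target is no longer needed; the energy head predicts expenditure directly from the visual and acceleration inputs, which is what enables the "visual calorie counter" application mentioned above.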

