Read My Lips: Continuous Signer Independent Weakly Supervised Viseme Recognition

2020-04-06

Abstract

This work presents a framework to recognise signer independent mouthings in continuous sign language, with no manual annotations needed. Mouthings represent lip-movements that correspond to pronunciations of words or parts of them during signing. Research on sign language recognition has focused extensively on the hands as features. But sign language is multi-modal and a full understanding particularly with respect to its lexical variety, language idioms and grammatical structures is not possible without further exploring the remaining information channels. To our knowledge no previous work has explored dedicated viseme recognition in the context of sign language recognition. The approach is trained on over 180,000 unlabelled frames and reaches 47.1% precision on the frame level. Generalisation across individuals and the influence of context-dependent visemes are analysed.

