
Modeling and Synthesis of Facial Motion Driven by Speech


Abstract

We introduce a novel approach to modeling the dynamics of human facial motion induced by the action of speech for the purpose of synthesis. We represent the trajectories of a number of salient features on the human face as the output of a dynamical system made up of two subsystems, one driven by the deterministic speech input, and a second driven by an unknown stochastic input. Inference of the model (learning) is performed automatically and involves an extension of independent component analysis to time-dependent data. Using a shape-texture decompositional representation for the face, we generate facial image sequences reconstructed from synthesized feature point positions.
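The two-subsystem model described above lends itself to a compact state-space form. Below is a minimal sketch, assuming a linear-Gaussian instantiation: all dimensions and system matrices are illustrative stand-ins for learned quantities, and the paper's actual learning procedure (the extension of independent component analysis to time-dependent data) is not reproduced here.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): facial feature
# trajectories y_t modeled as the superposition of two linear
# subsystems -- one driven by the deterministic speech input u_t,
# the other by an unknown stochastic (white-noise) input.
# All dimensions and matrices below are illustrative assumptions.

rng = np.random.default_rng(0)

n_x, n_z = 8, 8      # hidden-state dimensions of the two subsystems
n_u, n_y = 12, 20    # speech-feature and facial-feature dimensions
T = 200              # sequence length

# Stable, randomly chosen system matrices stand in for learned ones
# (orthogonal matrices scaled by 0.9 guarantee spectral radius < 1).
A = 0.9 * np.linalg.qr(rng.standard_normal((n_x, n_x)))[0]  # speech-driven dynamics
B = rng.standard_normal((n_x, n_u))                         # speech input map
F = 0.9 * np.linalg.qr(rng.standard_normal((n_z, n_z)))[0]  # stochastic dynamics
C = rng.standard_normal((n_y, n_x))                         # output map, subsystem 1
D = rng.standard_normal((n_y, n_z))                         # output map, subsystem 2

u = rng.standard_normal((T, n_u))   # placeholder speech features (e.g. cepstra)

x = np.zeros(n_x)
z = np.zeros(n_z)
Y = np.zeros((T, n_y))
for t in range(T):
    x = A @ x + B @ u[t]                  # deterministic, speech-driven part
    z = F @ z + rng.standard_normal(n_z)  # stochastic part
    Y[t] = C @ x + D @ z                  # synthesized feature-point positions

print(Y.shape)  # (200, 20): one trajectory per facial feature coordinate
```

In this reading, the synthesized trajectories `Y` would then drive the shape-texture face representation to render image sequences.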
