Symbolic inductive bias for visually grounded learning of spoken language

2019-09-18
Abstract

A widespread approach to processing spoken language is to first automatically transcribe it into text. An alternative is an end-to-end approach: recent work has proposed learning semantic embeddings of spoken language from images with spoken captions, without an intermediate transcription step. We propose to use multitask learning to exploit existing transcribed speech within the end-to-end setting. We describe a three-task architecture which combines the objectives of matching spoken captions with corresponding images, speech with text, and text with images. We show that adding the SPEECH/TEXT task leads to substantial performance improvements on image retrieval compared to training the SPEECH/IMAGE task in isolation. We conjecture that this is due to the strong inductive bias that transcribed speech provides to the model, and offer supporting evidence for this.
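The three-task objective described above can be sketched as a weighted sum of pairwise matching losses. This is a minimal illustration, not the paper's implementation: it assumes a triplet-style ranking loss over cosine similarities (a common choice for caption–image matching), and all function names, the margin value, and the task weights are hypothetical.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Normalize each row to unit length so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def matching_loss(a, b, margin=0.2):
    """Triplet-style ranking loss over a batch of paired embeddings.

    Matched pairs sit on the diagonal of the batch similarity matrix;
    every off-diagonal entry is treated as a negative (hypothetical
    choice; the paper does not fix these details in the abstract).
    """
    a, b = l2_normalize(a), l2_normalize(b)
    sim = a @ b.T                      # cosine similarity matrix
    pos = np.diag(sim)                 # matched-pair scores
    # Hinge on both retrieval directions (a -> b and b -> a).
    cost_ab = np.maximum(0.0, margin + sim - pos[:, None])
    cost_ba = np.maximum(0.0, margin + sim - pos[None, :])
    n = a.shape[0]
    mask = 1.0 - np.eye(n)             # exclude the positive diagonal
    return float(((cost_ab + cost_ba) * mask).sum() / n)

def three_task_loss(speech_emb, image_emb, text_emb, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three pairwise objectives:
    SPEECH/IMAGE, SPEECH/TEXT, and TEXT/IMAGE."""
    w_si, w_st, w_ti = weights
    return (w_si * matching_loss(speech_emb, image_emb)
            + w_st * matching_loss(speech_emb, text_emb)
            + w_ti * matching_loss(text_emb, image_emb))
```

Dropping the SPEECH/TEXT term (setting its weight to zero) recovers the SPEECH/IMAGE-plus-TEXT/IMAGE setting, which is how the ablation contrast in the abstract could be expressed in this sketch.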
