
Learning Bilingual Lexicons Using the Visual Similarity of Labeled Web Images

Abstract

Speakers of many different languages use the Internet. A common activity among these users is uploading images and associating these images with words (in their own language) as captions, filenames, or surrounding text. We use these explicit, monolingual, image-to-word connections to successfully learn implicit, bilingual, word-to-word translations. Bilingual pairs of words are proposed as translations if their corresponding images have similar visual features. We generate bilingual lexicons in 15 language pairs, focusing on words that have been automatically identified as physical objects. The use of visual similarity substantially improves performance over standard approaches based on string similarity: for generated lexicons with 1000 translations, including visual information leads to an absolute improvement in accuracy of 8-12% over string edit distance alone.
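The idea in the abstract can be illustrated with a minimal sketch: represent each word by visual features aggregated from its labeled web images, score candidate bilingual pairs by visual similarity, and combine that score with string similarity. The feature representation (toy vectors), the cosine/SequenceMatcher similarity measures, the weighting, and all function names below are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch only: propose translation pairs whose labeled web
# images have similar visual features, optionally combined with a simple
# string-similarity signal. Features and weights here are placeholders.
import numpy as np
from difflib import SequenceMatcher


def aggregate_visual_features(image_features):
    """Average per-image feature vectors into one vector per word."""
    return np.mean(np.stack(image_features), axis=0)


def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


def string_similarity(w1, w2):
    """Cheap stand-in for a normalized string edit-distance score."""
    return SequenceMatcher(None, w1, w2).ratio()


def propose_translations(lexicon_a, lexicon_b, visual_weight=0.7, top_k=1):
    """lexicon_a / lexicon_b map word -> list of image feature vectors.
    For each source word, return the top_k candidate translations ranked
    by a weighted mix of visual and string similarity."""
    agg_b = {w: aggregate_visual_features(f) for w, f in lexicon_b.items()}
    proposals = {}
    for word_a, feats_a in lexicon_a.items():
        vec_a = aggregate_visual_features(feats_a)
        scored = []
        for word_b, vec_b in agg_b.items():
            score = (visual_weight * cosine_similarity(vec_a, vec_b)
                     + (1 - visual_weight) * string_similarity(word_a, word_b))
            scored.append((score, word_b))
        scored.sort(reverse=True)
        proposals[word_a] = [w for _, w in scored[:top_k]]
    return proposals


# Toy example with 3-dimensional "visual features" per image:
english = {"dog": [np.array([0.9, 0.1, 0.0])], "car": [np.array([0.1, 0.8, 0.1])]}
spanish = {"perro": [np.array([0.85, 0.15, 0.0])], "coche": [np.array([0.1, 0.75, 0.15])]}
print(propose_translations(english, spanish))
```

The weighted combination mirrors the abstract's finding that visual similarity complements string edit distance; the actual features, scoring, and combination used in the paper differ from this toy version.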

