Learning to Read Chest X-Rays: Recurrent Neural Cascade Model for Automated Image Annotation

2019-12-23

Abstract

Despite the recent advances in automatically describing image contents, their applications have been mostly limited to image caption datasets containing natural images (e.g., Flickr 30k, MSCOCO). In this paper, we present a deep learning model to efficiently detect a disease from an image and annotate its contexts (e.g., location, severity, and the affected organs). We employ a publicly available radiology dataset of chest x-rays and their reports, and use its image annotations to mine disease names to train convolutional neural networks (CNNs). In doing so, we adopt various regularization techniques to circumvent the large normal-vs-diseased case bias. Recurrent neural networks (RNNs) are then trained to describe the contexts of a detected disease, based on the deep CNN features. Moreover, we introduce a novel approach that uses the weights of the already trained pair of CNN/RNN on the domain-specific image/text dataset to infer the joint image/text contexts for composite image labeling. Significantly improved image annotation results are demonstrated using the recurrent neural cascade model by taking the joint image/text contexts into account.
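The cascade the abstract describes, a CNN image embedding seeding an RNN that emits annotation tokens, can be illustrated with a minimal greedy-decoding sketch. This is not the paper's implementation: the toy vocabulary, the single tanh RNN cell, and all weights (random here, standing in for trained parameters) are hypothetical.

```python
import numpy as np

# Illustrative sketch only: a "CNN feature" vector initializes the RNN
# hidden state, and the RNN greedily emits disease-context tokens until
# it predicts <end>. Vocabulary, sizes, and weights are hypothetical.

rng = np.random.default_rng(0)

VOCAB = ["<start>", "<end>", "opacity", "right", "lung", "mild"]
H, E = 8, 8  # hidden size, token-embedding size (toy values)

# Stand-ins for trained parameters (random in this sketch).
W_xh = rng.standard_normal((H, E)) * 0.1   # input-to-hidden
W_hh = rng.standard_normal((H, H)) * 0.1   # hidden-to-hidden
W_hy = rng.standard_normal((len(VOCAB), H)) * 0.1  # hidden-to-vocab
embed = rng.standard_normal((len(VOCAB), E)) * 0.1  # token embeddings

def decode(cnn_feature, max_len=5):
    """Greedy RNN decoding seeded by a CNN image feature."""
    h = np.tanh(cnn_feature)           # init hidden state from the image
    token = VOCAB.index("<start>")
    out = []
    for _ in range(max_len):
        h = np.tanh(W_xh @ embed[token] + W_hh @ h)
        token = int(np.argmax(W_hy @ h))  # greedy: most likely next token
        if VOCAB[token] == "<end>":
            break
        out.append(VOCAB[token])
    return out

# Stand-in for a deep CNN feature of a chest x-ray.
feature = rng.standard_normal(H)
print(decode(feature))
```

In the paper's cascade the hidden states of such a trained RNN, together with the CNN features, would additionally be reused to infer joint image/text contexts for composite labeling; the sketch shows only the basic CNN-to-RNN annotation step.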
