Self-Attentive, Multi-Context One-Class Classification for Unsupervised Anomaly Detection on Text


2019-09-25

Abstract

There exist few text-specific methods for unsupervised anomaly detection, and of those that do exist, none utilize pre-trained models for distributed vector representations of words. In this paper we introduce a new anomaly detection method—Context Vector Data Description (CVDD)—which builds upon word embedding models to learn multiple sentence representations that capture multiple semantic contexts via the self-attention mechanism. Modeling multiple contexts enables us to perform contextual anomaly detection of sentences and phrases with respect to the multiple themes and concepts present in an unlabeled text corpus. These contexts, in combination with the self-attention weights, make our method highly interpretable. We demonstrate the effectiveness of CVDD quantitatively as well as qualitatively on the well-known Reuters, 20 Newsgroups, and IMDB Movie Reviews datasets.
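The abstract's core idea—multi-head self-attention over pre-trained word embeddings, with anomaly scores measured against learned context vectors—can be illustrated with a minimal scoring sketch. This is not the paper's implementation; all shapes and names (`W1`, `w2`, `C`, the `1e-9` stabilizer) are illustrative assumptions, and only the inference-time scoring step is shown:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cvdd_anomaly_score(H, W1, w2, C):
    """Score one sentence by its mean cosine distance to context vectors.

    H  : (n, d) pre-trained word embeddings for the n words of a sentence
    W1 : (d, a) first self-attention projection (hypothetical parameters)
    w2 : (a, r) second projection, one column per attention head
    C  : (r, d) learned context vectors, one per head
    """
    # Multi-head self-attention: one weight distribution over words per head
    A = softmax(np.tanh(H @ W1) @ w2, axis=0)   # (n, r)
    # One sentence representation per head, as an attention-weighted average
    M = A.T @ H                                  # (r, d)
    # Cosine similarity between each head's representation and its context
    cos = np.sum(M * C, axis=1) / (
        np.linalg.norm(M, axis=1) * np.linalg.norm(C, axis=1) + 1e-9
    )
    # Higher mean cosine distance = more anomalous w.r.t. the learned contexts
    return float(np.mean(1.0 - cos))
```

In the method described above, the attention parameters and context vectors are learned jointly on the unlabeled corpus by minimizing this distance (with regularization encouraging the heads to capture diverse contexts); the per-head attention weights are what make the resulting scores interpretable.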

