Deriving Neural Architectures from Sequence and Graph Kernels

2020-03-09

Abstract

The design of neural architectures for structured objects is typically guided by experimental insights rather than a formal process. In this work, we appeal to kernels over combinatorial structures, such as sequences and graphs, to derive appropriate neural operations. We introduce a class of deep recurrent neural operations and formally characterize their associated kernel spaces. Our recurrent modules compare the input to virtual reference objects (cf. filters in CNNs) via the kernels. Similar to traditional neural operations, these reference objects are parameterized and directly optimized in end-to-end training. We empirically evaluate the proposed class of neural architectures on standard applications such as language modeling and molecular graph regression, achieving state-of-the-art results across these applications.
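To make the idea of "comparing the input to virtual reference objects via kernels" concrete, the following is a minimal sketch in NumPy of a string-kernel-inspired recurrent operation. The function name, the two-level (unigram/bigram) state, and the decay factor `lam` are illustrative assumptions, not the paper's exact formulation: rows of the weight matrices play the role of learned reference directions (the analogue of CNN filters), and the decay down-weights matches separated by longer gaps, mimicking a gap-penalized substring kernel.

```python
import numpy as np

def kernel_recurrent_states(X, W1, W2, lam=0.9):
    """Hypothetical sketch of a kernel-derived recurrent module.

    X  : (T, e) sequence of input vectors
    W1 : (d, e) reference directions for 1-gram matches
    W2 : (d, e) reference directions extending matches to 2-grams
    lam: decay penalizing gaps between matched positions
    """
    d = W1.shape[0]
    c1 = np.zeros(d)  # accumulated 1-gram match scores
    c2 = np.zeros(d)  # accumulated 2-gram match scores
    outputs = []
    for x in X:
        prev_c1 = c1
        c1 = lam * c1 + W1 @ x            # start/extend 1-gram matches
        c2 = lam * c2 + prev_c1 * (W2 @ x)  # extend prior matches to 2-grams
        outputs.append(np.tanh(c2))       # nonlinearity on the state
    return np.stack(outputs)              # (T, d) hidden states
```

The decayed sums play the role of the kernel's dynamic-programming recursion, so the hidden state at step t aggregates soft matches against the reference sequences over all (gappy) subsequences ending at t.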
