WHAT GRAPH NEURAL NETWORKS CANNOT LEARN: DEPTH VS WIDTH

Abstract

This paper studies theoretically the capacity limits of graph neural networks falling within the message-passing framework (GNN_mp). Two main results are presented. First, GNN_mp are shown to be Turing universal under sufficient conditions on their depth, width, node identification, and layer expressiveness. Second, it is discovered that GNN_mp can lose a significant portion of their power when their depth and width are restricted. The proposed impossibility statements stem from a new technique that enables the repurposing of seminal results from theoretical computer science, and it leads to lower bounds for an array of decision, optimization, and estimation problems involving graphs. Strikingly, several of these problems are deemed impossible unless the product of a GNN_mp's depth and width exceeds (a function of) the graph size; this dependence remains significant even for tasks that appear simple or when considering approximation.
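To make the two resources concrete, below is a minimal sketch of a message-passing GNN in plain NumPy, under assumptions not taken from the paper: the function and variable names (gnn_mp_forward, adj, weights) and the specific aggregation/update rule are illustrative only. Depth corresponds to the number of message-passing rounds and width to the dimension of each node's state; the paper's lower bounds assert that several graph problems cannot be solved by any such network unless depth times width grows with the graph size.

    import numpy as np

    def gnn_mp_forward(adj, x, weights):
        # One forward pass of a message-passing GNN (illustrative sketch).
        # adj     : (n, n) adjacency matrix of the input graph
        # x       : (n, w) initial node states; w is the network's width
        # weights : list of d weight matrices, one per round; d is the depth
        h = x
        for W in weights:                    # depth = number of message-passing rounds
            messages = adj @ h               # each node aggregates its neighbors' states
            h = np.tanh((h + messages) @ W)  # update node states; width is preserved
        return h

    # Toy usage: a 4-node cycle graph, width w = 8, depth d = 3.
    rng = np.random.default_rng(0)
    n, w, d = 4, 8, 3
    adj = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
    x = rng.normal(size=(n, w))
    weights = [rng.normal(size=(w, w)) / np.sqrt(w) for _ in range(d)]
    out = gnn_mp_forward(adj, x, weights)    # shape (4, 8)

In this notation, the paper's impossibility results say that for several decision, optimization, and estimation problems on an n-node graph, no choice of weights succeeds unless d * w exceeds a function of n.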
