PAIRNORM: TACKLING OVERSMOOTHING IN GNNS

2020-01-02

Abstract

The performance of graph neural networks (GNNs) is known to gradually decrease with an increasing number of layers. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing. Our main contribution is PairNorm, a novel normalization layer based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. Moreover, PairNorm is fast, easy to implement without any change to the network architecture or any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PairNorm makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance in a new problem setting that benefits from deeper GNNs.
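The core idea of such a normalization layer is to re-center the node embeddings and rescale them so their overall spread stays constant across layers, keeping pairwise distances from collapsing. Below is a minimal NumPy sketch of this center-and-rescale scheme; the function name, the `scale` hyperparameter, and the `eps` constant are illustrative choices here, not definitions taken from the paper.

```python
import numpy as np

def pairnorm(x, scale=1.0, eps=1e-6):
    """Center node embeddings and rescale them to a fixed total norm.

    x: (n, d) array of node embeddings after a graph convolution.
    scale: target root-mean-square row norm after normalization.
    eps: small constant to avoid division by zero (illustrative).
    """
    # Step 1: subtract the mean embedding so rows are centered at the origin.
    x_centered = x - x.mean(axis=0, keepdims=True)
    # Step 2: compute the root of the mean squared row norm.
    rms_norm = np.sqrt((x_centered ** 2).sum(axis=1).mean() + eps)
    # Step 3: rescale so the mean squared row norm is ~scale**2,
    # preventing all embeddings from shrinking toward a single point.
    return scale * x_centered / rms_norm
```

Because the layer only centers and rescales, it adds no trainable parameters and can be dropped between the layers of any GNN (GCN, GAT, SGC) without architectural changes.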
