
Stochastic Training of Graph Convolutional Networks with Variance Reduction


Abstract

Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, a GCN computes the representation of a node recursively from its neighbors, so the receptive field size grows exponentially with the number of layers. Previous attempts to reduce the receptive field size by subsampling neighbors have no convergence guarantee, and their receptive field size per node is still on the order of hundreds. In this paper, we develop control variate based algorithms with a new theoretical guarantee: they converge to a local optimum of GCN regardless of the neighbor sampling size. Empirical results show that our algorithms achieve a convergence rate and model quality similar to the exact algorithm while using only two neighbors per node. The running time of our algorithms on the large Reddit dataset is only one seventh that of previous neighbor-sampling algorithms.
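To make the control-variate idea concrete, here is a minimal NumPy sketch of the estimator the abstract describes, for a single node's aggregation step at one layer: only a couple of neighbors are sampled for the expensive fresh-activation term, while cheap historical activations cover the full neighborhood. This is an illustrative sketch, not the authors' implementation; the function name `cv_aggregate` and the variable names (`p_row`, `h`, `h_bar`) are assumptions for exposition.

```python
import numpy as np

def cv_aggregate(p_row, h, h_bar, num_samples=2, rng=None):
    """Control-variate (CV) estimate of one node's aggregation p_row @ h.

    p_row: one row of the normalized propagation matrix P, shape (n_nodes,)
    h:     current-layer activations for all nodes, shape (n_nodes, d)
    h_bar: historical (stale) activations kept from earlier iterations,
           shape (n_nodes, d)

    Only `num_samples` fresh activations from `h` are touched; the term
    over the full neighborhood uses the precomputed history `h_bar`.
    Assumes the node has at least one neighbor (e.g. via a self-loop).
    """
    rng = rng or np.random.default_rng()
    neighbors = np.flatnonzero(p_row)
    k = min(num_samples, len(neighbors))
    sampled = rng.choice(neighbors, size=k, replace=False)

    # Monte Carlo estimate of p_row @ (h - h_bar), rescaled so its
    # expectation over the sampled neighbors equals the full sum.
    scale = len(neighbors) / k
    delta = scale * (p_row[sampled] @ (h[sampled] - h_bar[sampled]))

    # Exact, cheap term over all neighbors using historical activations.
    return delta + p_row[neighbors] @ h_bar[neighbors]
```

The randomness only enters through the difference `h - h_bar`, so as training converges and the current activations approach the stored history, the variance of the estimate shrinks toward zero; this is the variance-reduction effect that lets the sampling size be as small as two.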

