A TensorFlow Implementation of Gated Graph Neural Networks (GGNN) for Graph Classification
This is a TensorFlow implementation of Gated Graph Neural Networks (GGNN), as described in the paper Gated Graph Sequence Neural Networks (ICLR 2016) by Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel.
Tricks to reduce training time and speed up convergence:
Batch graphs of similar size together instead of shuffling randomly and batching.
Use a dense graph representation for small graphs and a sparse graph representation for large graphs.
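The size-bucketed batching trick above can be sketched as follows. This is an illustrative sketch, not the repository's actual API: the `num_nodes` key and the `bucket_width` parameter are assumptions.

```python
import random
from collections import defaultdict

def bucket_by_size(graphs, batch_size, bucket_width=10):
    """Group graphs of similar node count into the same batch.

    `graphs` is assumed to be a list of dicts with a 'num_nodes' key
    (a hypothetical format chosen for this sketch). Graphs whose node
    counts fall in the same `bucket_width`-sized range share a bucket,
    so each batch can be padded to a near-uniform size cheaply.
    """
    buckets = defaultdict(list)
    for g in graphs:
        buckets[g['num_nodes'] // bucket_width].append(g)

    batches = []
    for bucket in buckets.values():
        random.shuffle(bucket)  # shuffle within a bucket, not globally
        for i in range(0, len(bucket), batch_size):
            batches.append(bucket[i:i + batch_size])

    random.shuffle(batches)  # randomize batch order each epoch
    return batches
```

Shuffling within buckets and over batch order keeps training stochastic while avoiding batches that mix tiny and huge graphs, which would otherwise waste compute on padding.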
Datasets
Java small: Based on the dataset of Allamanis et al. (ICML 2016), with the difference that the training/validation/test split is by project rather than by file. It contains 9 Java projects for training, 1 for validation, and 1 for testing; about 700K examples overall.
Java medium: The 1000 top-starred Java projects from GitHub: 800 projects for training, 100 for validation, and 100 for testing; about 4M examples overall.
Java large: The 9500 top-starred Java projects from GitHub created since January 2007: 9000 projects for training, 200 for validation, and 300 for testing; about 16M examples overall.
What is GGNN?
A neural network architecture for learning on graph-structured data and graph problems
A gated propagation model (the same gating idea as a GRU) computes node representations
The recurrence is unrolled for a fixed number of steps and trained with backpropagation through time
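The propagation model above can be sketched as a minimal NumPy implementation. The weight names (`Wz`, `Uz`, etc.) and the omission of edge-type-specific weights and biases are simplifications for illustration; the paper uses per-edge-type transformations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_propagate(h, A, Wz, Uz, Wr, Ur, Wh, Uh, steps=4):
    """Unrolled GGNN propagation.

    Each step aggregates neighbor messages through the adjacency
    matrix A, then applies a GRU-style gated update to the node
    states. Shapes: h is (n, d), A is (n, n), all W*/U* are (d, d).
    This is a sketch of the idea, not the repository's exact model.
    """
    for _ in range(steps):
        a = A @ h                                  # message passing: sum over neighbors
        z = sigmoid(a @ Wz + h @ Uz)               # update gate
        r = sigmoid(a @ Wr + h @ Ur)               # reset gate
        h_tilde = np.tanh(a @ Wh + (r * h) @ Uh)   # candidate state
        h = (1 - z) * h + z * h_tilde              # gated state update
    return h
```

Because the number of propagation steps is fixed, the loop can be unrolled in the computation graph and trained end to end with backpropagation through time, exactly as with an RNN of fixed length.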