neural_graph_collaborative_filtering
This is our TensorFlow implementation for the paper:
Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua (2019). Neural Graph Collaborative Filtering (paper available in the ACM DL and on arXiv). In SIGIR'19, Paris, France, July 21-25, 2019.
Author: Dr. Xiang Wang (xiangwang at u.nus.edu)
Neural Graph Collaborative Filtering (NGCF) is a new recommendation framework based on graph neural networks, which explicitly encodes the collaborative signal in the form of high-order connectivities in the user-item bipartite graph by performing embedding propagation.
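At each layer, NGCF refines the embeddings by aggregating messages from graph neighbors. Below is a minimal NumPy sketch of one propagation layer in matrix form, following the paper's formulation; the function and variable names are ours, not the repo's.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def ngcf_layer(E, L, W1, W2):
    """One embedding-propagation layer in matrix form (illustrative).
    E:  (n, d)  stacked user+item embeddings at the current layer
    L:  (n, n)  normalized Laplacian of the user-item bipartite graph
    W1, W2: (d, d') transformation weights"""
    side = (L + np.eye(L.shape[0])) @ E @ W1   # graph convolution with self-connections
    bi = ((L @ E) * E) @ W2                    # bi-interaction (element-wise) term
    return leaky_relu(side + bi)
```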
If you want to use our codes and datasets in your research, please cite:
@inproceedings{NGCF19,
author = {Xiang Wang and
Xiangnan He and
Meng Wang and
Fuli Feng and
Tat{-}Seng Chua},
title = {Neural Graph Collaborative Filtering},
booktitle = {Proceedings of the 42nd International {ACM} {SIGIR} Conference on
Research and Development in Information Retrieval, {SIGIR} 2019, Paris,
France, July 21-25, 2019.},
pages = {165--174},
year = {2019},
}
The code has been tested running under Python 3.6.5. The required packages are as follows:
tensorflow == 1.8.0
numpy == 1.14.3
scipy == 1.1.0
sklearn == 0.19.1
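For example, they can be installed with pip (note that sklearn corresponds to the scikit-learn package):
pip install tensorflow==1.8.0 numpy==1.14.3 scipy==1.1.0 scikit-learn==0.19.1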
Descriptions of all command-line arguments can be found in the code (see the parser function in NGCF/utility/parser.py).
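Note that the list-valued flags (e.g. --regs [1e-5], --layer_size [64,64,64]) are passed as Python-style list literals in a single string. The following is an illustrative sketch of how such flags can be parsed; it is not the repo's actual parser:

```python
import argparse
import ast

parser = argparse.ArgumentParser()
parser.add_argument('--regs', type=str, default='[1e-5]')
parser.add_argument('--layer_size', type=str, default='[64,64,64]')
args = parser.parse_args(['--layer_size', '[64,64,64]'])

# ast.literal_eval safely turns the string literal into a Python list
layer_size = ast.literal_eval(args.layer_size)   # -> [64, 64, 64]
regs = ast.literal_eval(args.regs)               # -> [1e-05]
```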
Gowalla dataset
python NGCF.py --dataset gowalla --regs [1e-5] --embed_size 64 --layer_size [64,64,64] --lr 0.0001 --save_flag 1 --pretrain 0 --batch_size 1024 --epoch 400 --verbose 1 --node_dropout [0.1] --mess_dropout [0.1,0.1,0.1]
Amazon-book dataset
python NGCF.py --dataset amazon-book --regs [1e-5] --embed_size 64 --layer_size [64,64,64] --lr 0.0005 --save_flag 1 --pretrain 0 --batch_size 1024 --epoch 200 --verbose 50 --node_dropout [0.1] --mess_dropout [0.1,0.1,0.1]
Some important arguments:
alg_type
It specifies the type of graph convolutional layer. Here we provide three options:
ngcf (by default), proposed in Neural Graph Collaborative Filtering, SIGIR2019. Usage: --alg_type ngcf.
gcn, proposed in Semi-Supervised Classification with Graph Convolutional Networks, ICLR2017. Usage: --alg_type gcn.
gcmc, proposed in Graph Convolutional Matrix Completion, KDD2018. Usage: --alg_type gcmc.
adj_type
It specifies the type of Laplacian matrix, where each entry defines the decay factor between two connected nodes. Here we provide four options (a construction sketch follows the list):
ngcf (by default), where each decay factor between two connected nodes is set as 1/(out degree of the node), while each node is also assigned 1 for its self-connection. Usage: --adj_type ngcf.
plain, where each decay factor between two connected nodes is set as 1. No self-connections are considered. Usage: --adj_type plain.
norm, where each decay factor between two connected nodes is set as 1/(out degree of the node + self-connection). Usage: --adj_type norm.
gcmc, where each decay factor between two connected nodes is set as 1/(out degree of the node). No self-connections are considered. Usage: --adj_type gcmc.
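To make the four options concrete, here is a scipy.sparse sketch of how each normalization can be built from a binary adjacency matrix A of the user-item bipartite graph. This is our reading of the descriptions above, not the repo's implementation, and the names are illustrative:

```python
import numpy as np
import scipy.sparse as sp

def build_adj(A, adj_type='ngcf'):
    """A: (n, n) binary sparse adjacency of the user-item bipartite graph."""
    n = A.shape[0]
    deg = np.asarray(A.sum(axis=1)).flatten().astype(float)
    d_inv = np.divide(1.0, deg, out=np.zeros_like(deg), where=deg > 0)
    if adj_type == 'plain':
        return A                                     # decay factor 1, no self-connections
    if adj_type == 'ngcf':
        return sp.diags(d_inv) @ A + sp.eye(n)       # D^-1 A, plus 1 on the diagonal
    if adj_type == 'norm':
        return sp.diags(1.0 / (deg + 1.0)) @ (A + sp.eye(n))  # D^-1 (A + I), self-loop counted in the degree
    if adj_type == 'gcmc':
        return sp.diags(d_inv) @ A                   # D^-1 A, no self-connections
    raise ValueError(adj_type)
```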
node_dropout
It indicates the node dropout ratio, which randomly blocks a particular node and discards all its outgoing messages. Usage: --node_dropout [0.1] --node_dropout_flag 1.
Note that the argument node_dropout_flag also needs to be set to 1, since node dropout can lead to higher computational cost than message dropout and is therefore gated behind this flag. A sketch of both dropout schemes follows the mess_dropout description below.
mess_dropout
It indicates the message dropout ratio, which randomly drops the outgoing messages. Usage: --mess_dropout [0.1,0.1,0.1].
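For illustration, here is a dense-NumPy sketch of the two dropout schemes as described above; the actual implementation applies them to sparse tensors in TensorFlow, and the function names are ours:

```python
import numpy as np

def node_dropout(A, rate):
    """Block whole nodes: zero every outgoing message of a dropped node,
    then rescale survivors (standard inverted-dropout convention)."""
    keep = (np.random.rand(A.shape[0]) >= rate).astype(A.dtype)
    return A * keep[:, None] / (1.0 - rate)

def message_dropout(M, rate):
    """Drop individual outgoing messages independently."""
    keep = (np.random.rand(*M.shape) >= rate).astype(M.dtype)
    return M * keep / (1.0 - rate)
```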
We provide two processed datasets: Gowalla and Amazon-book.
train.txt
Train file.
Each line is a user with her/his positive interactions with items: userID\t a list of itemID\n.
test.txt
Test file (positive instances).
Each line is a user with her/his positive interactions with items: userID\t a list of itemID\n.
Note that we treat all unobserved interactions as negative instances when reporting performance.
user_list.txt
User file.
Each line is a pair (org_id, remap_id) for one user, where org_id and remap_id represent the ID of the user in the original and our datasets, respectively.
item_list.txt
Item file.
Each line is a pair (org_id, remap_id) for one item, where org_id and remap_id represent the ID of the item in the original and our datasets, respectively.
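Assuming the formats above, a minimal loader sketch for these files might look as follows (illustrative only; the repo ships its own data loader):

```python
def load_interactions(path):
    """train.txt / test.txt: each line is a userID followed by its itemIDs,
    whitespace-separated; lines with no items are skipped."""
    user_items = {}
    with open(path) as f:
        for line in f:
            ids = line.split()
            if len(ids) >= 2:
                user_items[int(ids[0])] = [int(i) for i in ids[1:]]
    return user_items

def load_id_map(path):
    """user_list.txt / item_list.txt: each line is an org_id and a remap_id."""
    mapping = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) == 2 and fields[1].isdigit():  # skip a header line, if present
                mapping[fields[0]] = int(fields[1])
    return mapping
```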