VAEGAN: A Collaborative Filtering Framework based on
Adversarial Variational Autoencoders
Abstract
Recently, Variational Autoencoders (VAEs) have been successfully applied to collaborative filtering for implicit feedback. However, the performance of the resulting model depends heavily on the expressiveness of the inference model, and the latent representation is often too constrained to capture the true posterior distribution. In this paper, we propose a novel framework named VAEGAN to address this issue. In VAEGAN, we first introduce Adversarial Variational Bayes (AVB) to train Variational Autoencoders with an arbitrarily expressive inference model. By utilizing Generative Adversarial Networks (GANs) for implicit variational inference, the inference model provides a better approximation to the true posterior and the maximum-likelihood assignment.
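As a sketch of the underlying idea (the notation here follows the standard AVB formulation of Mescheder et al. rather than details specific to this paper), the intractable KL term of the ELBO is replaced by a learned discriminator $T_\psi(x,z)$ that distinguishes samples of the inference model $q_\phi(z\mid x)$ from samples of the prior $p(z)$:

```latex
\max_{\theta,\phi}\;
\mathbb{E}_{x\sim p_{\mathcal{D}}(x)}\,
\mathbb{E}_{z\sim q_\phi(z\mid x)}
\bigl[\log p_\theta(x\mid z) - T_\psi(x,z)\bigr],
\qquad
\max_{\psi}\;
\mathbb{E}_{x\sim p_{\mathcal{D}}(x)}
\Bigl[
\mathbb{E}_{z\sim q_\phi(z\mid x)}\log\sigma\bigl(T_\psi(x,z)\bigr)
+\mathbb{E}_{z\sim p(z)}\log\bigl(1-\sigma\bigl(T_\psi(x,z)\bigr)\bigr)
\Bigr]
```

At the optimum, $T^\ast_\psi(x,z)=\log q_\phi(z\mid x)-\log p(z)$, so the first objective recovers the ELBO without requiring a closed-form density for $q_\phi$, which is what allows the inference model to be arbitrarily expressive.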
Then the performance of our model is further improved by introducing an auxiliary discriminative
network using adversarial training to achieve high
accuracy in recommendation. Furthermore, contractive loss is added to the classical reconstruction
cost function as a penalty term to yield robust features and improve the generalization performance.
Finally, we show that our proposed VAEGAN significantly outperforms state-of-the-art baselines on several real-world datasets.