Representation Learning with Weighted Inner Product
for Universal Approximation of General Similarities
Abstract
We propose weighted inner product similarity (WIPS) for neural network-based graph embedding. In addition to the parameters of the neural networks, we optimize the weights of the inner product, allowing both positive and negative values. Despite its simplicity, WIPS can approximate arbitrary general similarities, including positive definite, conditionally positive definite, and indefinite kernels. WIPS is free from similarity model selection, since it can learn any of the existing similarity models, such as cosine similarity, negative Poincaré distance, and negative Wasserstein distance. Our experiments show that the proposed method learns high-quality distributed representations of nodes from real datasets, leading to accurate approximation of similarities as well as high performance in inductive tasks.
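
To make the idea concrete, the following is a minimal sketch of a WIPS-style similarity in PyTorch, written from the abstract's description alone: embeddings come from a shared encoder, and the per-dimension weights of the inner product are learned jointly with the network, with no sign constraint. The class and parameter names (WIPS, encoder, dim) are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

class WIPS(nn.Module):
    """Weighted inner product similarity:
    sim(x, y) = sum_k w_k * f_k(x) * f_k(y),
    where f is a shared embedding network and the weights w_k
    are free to take positive or negative values during training."""

    def __init__(self, encoder: nn.Module, dim: int):
        super().__init__()
        self.encoder = encoder                        # shared embedding network f
        self.weights = nn.Parameter(torch.ones(dim))  # learnable, sign-unconstrained

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        u, v = self.encoder(x), self.encoder(y)
        return (self.weights * u * v).sum(dim=-1)     # weighted inner product

# Illustrative usage: the optimizer updates the encoder parameters and the
# inner-product weights together.
model = WIPS(encoder=nn.Linear(128, 32), dim=32)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
sim = model(torch.randn(8, 128), torch.randn(8, 128))  # shape: (8,)
```

Because the weights may become negative, the learned bilinear form need not be positive definite, which is what lets this one parameterization cover positive definite, conditionally positive definite, and indefinite similarities.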