Abstract
Attributed network embedding plays an important role in transforming network data into compact vectors for effective network analysis. Existing attributed network embedding models are designed either in continuous Euclidean spaces, which introduce data redundancy, or in binary coding spaces, which incur a significant loss of representation accuracy. To address these limitations, we present a new Low-Bit Quantization for Attributed Network Representation Learning model (LQANR for short) that can learn compact node representations with low-bitwidth values while preserving high representation accuracy.
Specifically, we formulate a new representation learning function based on matrix factorization that jointly learns the low-bit node representations and the layer aggregation weights under the low-bit quantization constraint (a sketch of such a formulation is given below). Because the resulting learning function falls into the category of mixed-integer optimization, we propose an efficient mixed-integer alternating direction method of multipliers (ADMM) algorithm as the solution.
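To make the formulation concrete, the following is a minimal sketch, not the paper's exact objective; the symbols $\mathbf{M}^{(k)}$ (the $k$-th layer proximity matrix built from structure and attributes), $\alpha_k$ (its aggregation weight), $\mathbf{W}$ (the node representation matrix), and the low-bit value set $\Delta_b$ are illustrative notation introduced here:
\[
\min_{\mathbf{W},\,\boldsymbol{\alpha}} \Big\| \sum_{k=1}^{K} \alpha_k \mathbf{M}^{(k)} - \mathbf{W}\mathbf{W}^{\top} \Big\|_F^2 \quad \text{s.t.} \quad \mathbf{W} \in \Delta_b^{\,n \times d}, \;\; \boldsymbol{\alpha} \ge 0, \;\; \mathbf{1}^{\top}\boldsymbol{\alpha} = 1,
\]
where $\Delta_b$ could be, for instance, zero and signed powers of two, $\{0, \pm 2^0, \pm 2^1, \dots, \pm 2^{b}\}$; restricting $\mathbf{W}$ to such a discrete set is what makes the problem mixed-integer.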
Experiments on real-world node classification and link prediction tasks validate the effectiveness of the proposed LQANR model.
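To illustrate how a mixed-integer ADMM can handle the quantization constraint, the following is a generic variable-splitting sketch under the illustrative notation above, not the paper's exact derivation. Introducing an auxiliary copy $\mathbf{Z}$ of $\mathbf{W}$ with scaled dual variable $\mathbf{U}$ and penalty $\rho > 0$, the updates alternate
\[
\begin{aligned}
\mathbf{W}^{t+1} &= \arg\min_{\mathbf{W}} \; f(\mathbf{W}, \boldsymbol{\alpha}^{t}) + \tfrac{\rho}{2} \big\| \mathbf{W} - \mathbf{Z}^{t} + \mathbf{U}^{t} \big\|_F^2, \\
\mathbf{Z}^{t+1} &= \Pi_{\Delta_b^{\,n \times d}}\!\big( \mathbf{W}^{t+1} + \mathbf{U}^{t} \big), \\
\mathbf{U}^{t+1} &= \mathbf{U}^{t} + \mathbf{W}^{t+1} - \mathbf{Z}^{t+1},
\end{aligned}
\]
where $f$ denotes the unconstrained factorization loss, $\Pi_{\Delta_b^{\,n \times d}}$ rounds each entry to the nearest value in $\Delta_b$, and $\boldsymbol{\alpha}$ is updated in a separate alternating step. The continuous $\mathbf{W}$-update is a standard smooth subproblem, while the projection step confines the iterates to the low-bit set.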