Learning Cross-domain Information Transfer for Location Recognition and
Clustering
Abstract
Estimating geographic location from images is a challenging problem that has received recent attention. In contrast to many existing methods that primarily model discriminative information corresponding to different locations, we propose jointly learning the information that images across locations share and the information on which they vary. Starting with generative and discriminative subspaces pertaining to domains, obtained by a hierarchical grouping of images from adjacent locations, we present a top-down approach that first models cross-domain information transfer by utilizing the geometry of these subspaces, and then encodes the model results onto individual images to infer their locations. We report competitive results for location recognition and clustering on two public datasets, im2GPS and San Francisco, and empirically validate the utility of various design choices involved in the approach.