Abstract
Learning the representation of shape cues in 2D & 3D objects for recognition is a fundamental task in computer vision. Deep neural networks (DNNs) have shown promising performance on this task. Due to the large variability of shapes, accurate recognition relies on good estimates of model uncertainty, which are ignored in the traditional training of DNNs, typically learned via stochastic optimization. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (SG-MCMC) to learn weight uncertainty in DNNs. It yields principled Bayesian interpretations for the commonly used Dropout/DropConnect techniques and incorporates them into the SG-MCMC framework. Extensive experiments on 2D & 3D shape datasets and various DNN models demonstrate the superiority of the proposed approach over stochastic optimization. Our approach yields higher recognition accuracy when used in conjunction with Dropout and Batch-Normalization.