
BACKPACK: PACKING MORE INTO BACKPROP


Abstract

Automatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient. Yet, other quantities such as the variance of the mini-batch gradients or many approximations to the Hessian can, in theory, be computed efficiently, and at the same time as the gradient. While these quantities are of great interest to researchers and practitioners, current deep-learning software does not support their automatic calculation. Manually implementing them is burdensome, inefficient if done naïvely, and the resulting code is rarely shared. This hampers progress in deep learning, and unnecessarily narrows research to focus on gradient descent and its variants; it also complicates replication studies and comparisons between newly developed methods that require those quantities, to the point of impossibility. To address this problem, we introduce BackPACK, an efficient framework built on top of PyTorch, that extends the backpropagation algorithm to extract additional information from first- and second-order derivatives. Its capabilities are illustrated by benchmark reports for computing additional quantities on deep neural networks, and an example application by testing several recent curvature approximations for optimization.
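The abstract describes BackPACK as an extension of PyTorch's backward pass that delivers extra per-parameter quantities alongside the usual gradient. The sketch below illustrates that idea for two such quantities, per-sample gradients and gradient variances. It is a minimal example assuming BackPACK's documented interface (extend, the backpack context manager, and the BatchGrad and Variance extensions); these names are not reproduced on this page, so treat them as assumptions rather than a verbatim excerpt.

    import torch
    # Assumed BackPACK imports, following the library's documented interface.
    from backpack import backpack, extend
    from backpack.extensions import BatchGrad, Variance

    # Toy classification batch and a single BackPACK-aware ("extended") layer.
    X = torch.randn(32, 10)
    y = torch.randint(0, 3, (32,))
    model = extend(torch.nn.Linear(10, 3))
    lossfunc = extend(torch.nn.CrossEntropyLoss())

    loss = lossfunc(model(X), y)

    # One backward pass computes the standard gradient plus the requested extras.
    with backpack(BatchGrad(), Variance()):
        loss.backward()

    for param in model.parameters():
        print(param.grad.shape)        # mini-batch gradient (standard PyTorch)
        print(param.grad_batch.shape)  # individual gradients, one per sample
        print(param.variance.shape)    # variance of the gradients in the batch

Second-order quantities mentioned in the abstract, such as diagonal Gauss-Newton or Kronecker-factored curvature approximations, follow the same pattern in the library's documentation: the corresponding extension is passed to the backpack context, and the result is attached to each parameter after the backward pass.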
