
Characterizing Implicit Bias in Terms of Optimization Geometry

2020-03-20

Abstract

We study the implicit bias of generic optimization methods—mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms—when optimizing underdetermined linear regression or separable linear classification problems. We explore the question of whether the specific global minimum (among the many possible global minima) reached by an algorithm can be characterized in terms of the potential or norm of the optimization geometry, and independently of hyperparameter choices such as step size and momentum.
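A minimal sketch (not from the paper itself) of the phenomenon the abstract describes: on an underdetermined least-squares problem, gradient descent—that is, steepest descent with respect to the ℓ2 norm—initialized at zero converges to the minimum-ℓ2-norm global minimum, so the geometry of the method, not the step size, determines which solution is reached. All variable names below are illustrative.

```python
import numpy as np

# Underdetermined linear regression: 3 equations, 10 unknowns,
# so the loss 0.5 * ||A w - y||^2 has infinitely many global minima.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 10))
y = rng.standard_normal(3)

w = np.zeros(10)                    # zero initialization matters for the bias
step = 0.01                         # any sufficiently small step gives the same limit
for _ in range(50_000):
    w -= step * A.T @ (A @ w - y)   # gradient of 0.5 * ||A w - y||^2

# The minimum-l2-norm interpolating solution, via the pseudoinverse.
w_min_norm = np.linalg.pinv(A) @ y
print(np.allclose(w, w_min_norm, atol=1e-6))
```

The iterates stay in the row space of `A` (the gradient is always a combination of the rows), and the unique interpolating solution in that row space is exactly the minimum-norm one; the paper characterizes the analogous limit points for other potentials and norms.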

