DEEP DOUBLE DESCENT: WHERE BIGGER MODELS AND MORE DATA HURT

2020-01-02

Abstract

We show that a variety of modern deep learning tasks exhibit a “double-descent” phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity, and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.

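The model-size effect described in the abstract can be illustrated even in very simple settings. Below is a minimal, hypothetical sketch (not this paper's experimental setup, which involves modern deep networks): it sweeps the width of a random-feature regression model fit by minimum-norm least squares on synthetic data, where test error typically rises as the width approaches the number of training samples (the interpolation threshold) and falls again as the model grows larger. All names and parameter values here are illustrative assumptions.

# Hypothetical sketch of model-wise double descent with random ReLU features.
# Only the linear readout is trained; the first layer is fixed and random.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task: y = <w*, x> + noise
d, n_train, n_test = 20, 100, 1000
w_star = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_star + 0.5 * rng.normal(size=n_train)
y_test = X_test @ w_star + 0.5 * rng.normal(size=n_test)

def random_relu_features(X, W):
    # Fixed random first layer followed by a ReLU nonlinearity.
    return np.maximum(X @ W, 0.0)

for n_features in [10, 50, 90, 100, 110, 200, 500, 2000]:
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)
    Phi_train = random_relu_features(X_train, W)
    Phi_test = random_relu_features(X_test, W)
    # Minimum-norm least-squares fit of the readout weights.
    beta = np.linalg.pinv(Phi_train) @ y_train
    test_mse = np.mean((Phi_test @ beta - y_test) ** 2)
    print(f"width={n_features:5d}  test MSE={test_mse:.3f}")

Printing the test MSE across widths typically shows error worsening near width ≈ n_train and improving again for much wider models, mirroring the model-size double descent the paper reports for deep networks.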