Experienced Optimization with Reusable Directional Model for Hyper-Parameter Search

2019-11-05
Abstract: Hyper-parameter selection is a crucial yet difficult issue in machine learning. Derivative-free optimization has been playing an irreplaceable role in this problem, but it commonly requires many hyper-parameter samples, and each sample can be expensive to obtain because evaluating a learning model is costly. To tackle this issue, this paper proposes an experienced optimization approach, i.e., learning how to optimize better from a set of historical optimization processes. From the historical optimization processes on previous datasets, a directional model is trained to predict the direction of the next good hyper-parameter. The directional model is then reused to guide the optimization when learning on new datasets. We implement this mechanism within a state-of-the-art derivative-free optimization method, SRACOS, and conduct experiments on learning the hyper-parameters of heterogeneous ensembles and neural network architectures. Experimental results verify that the proposed approach can significantly improve learning accuracy within a limited hyper-parameter sample budget.
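The abstract only outlines the mechanism. Below is a minimal illustrative sketch of the idea, not the paper's actual method: it assumes a scikit-learn classifier as the directional model, labels historical search steps by whether they improved the objective, and plugs the model into a plain random local search instead of SRACOS. All names here (`build_direction_training_set`, `experienced_search`, the toy objectives) are hypothetical.

```python
# Illustrative sketch only: learn a "directional model" from historical
# hyper-parameter search traces, then reuse it to screen candidate steps
# on a new task before spending an expensive evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def build_direction_training_set(histories):
    """histories: list of traces, each a list of (x, f(x)) pairs.
    Label each consecutive step (x_next - x_cur) as 1 if it improved
    the (minimized) objective, else 0."""
    X, y = [], []
    for trace in histories:
        for (x_cur, f_cur), (x_next, f_next) in zip(trace, trace[1:]):
            X.append(np.concatenate([x_cur, x_next - x_cur]))
            y.append(1 if f_next < f_cur else 0)
    return np.array(X), np.array(y)


def experienced_search(objective, x0, directional_model, budget=50, step=0.1, rng=None):
    """Simple random local search that asks the directional model which
    candidate step looks most promising (stand-in for SRACOS)."""
    rng = rng or np.random.default_rng(0)
    x_best, f_best = x0, objective(x0)
    for _ in range(budget - 1):
        candidates = [x_best + step * rng.standard_normal(x_best.shape) for _ in range(10)]
        feats = np.array([np.concatenate([x_best, c - x_best]) for c in candidates])
        scores = directional_model.predict_proba(feats)[:, 1]  # P(step improves)
        x_new = candidates[int(np.argmax(scores))]
        f_new = objective(x_new)  # the only expensive evaluation per iteration
        if f_new < f_best:
            x_best, f_best = x_new, f_new
    return x_best, f_best


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Fake "historical" searches on a toy quadratic objective.
    def make_trace(n=30):
        xs = [rng.uniform(-1, 1, size=2)]
        for _ in range(n - 1):
            xs.append(xs[-1] + 0.1 * rng.standard_normal(2))
        return [(x, float(np.sum(x ** 2))) for x in xs]

    X, y = build_direction_training_set([make_trace() for _ in range(20)])
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # Reuse the directional model on a "new" objective with a shifted optimum.
    new_objective = lambda x: float(np.sum((x - 0.3) ** 2))
    x_star, f_star = experienced_search(new_objective, rng.uniform(-1, 1, 2), model)
    print("best x:", x_star, "best f:", f_star)
```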
