
Algorithms and hardness results for parallel large margin learning

2020-01-08

Abstract

We study the fundamental problem of learning an unknown large-margin halfspace in the context of parallel computation. Our main positive result is a parallel algorithm for learning a large-margin halfspace that is based on interior point methods from convex optimization and fast parallel algorithms for matrix computations. We show that this algorithm learns an unknown γ-margin halfspace over n dimensions using poly(n, 1/γ) processors and runs in time Õ(1/γ) + O(log n). In contrast, naive parallel algorithms that learn a γ-margin halfspace in time that depends polylogarithmically on n have Ω(1/γ²) runtime dependence on γ. Our main negative result deals with boosting, which is a standard approach to learning large-margin halfspaces. We give an information-theoretic proof that in the original PAC framework, in which a weak learning algorithm is provided as an oracle that is called by the booster, boosting cannot be parallelized: the ability to call the weak learner multiple times in parallel within a single boosting stage does not reduce the overall number of successive stages of boosting that are required.
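As a concrete illustration of the Ω(1/γ²) dependence the abstract attributes to naive approaches: the classic perceptron makes at most 1/γ² mistakes on unit-norm examples separable with margin γ (Novikoff's bound), so even a parallelized variant inherits that many sequential update rounds. The sketch below is not the paper's interior-point algorithm; it is a minimal perceptron run on synthetic γ-margin data, with all names and parameters chosen for illustration.

```python
import numpy as np

def perceptron(X, y, max_epochs=10_000):
    """Classic perceptron. For unit-norm examples separable with margin
    gamma, the total number of updates is at most 1/gamma^2 (Novikoff),
    which is the 1/gamma^2 runtime dependence of naive approaches."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for x, label in zip(X, y):
            if label * np.dot(w, x) <= 0:  # margin violation
                w += label * x             # perceptron update
                mistakes += 1
        if mistakes == 0:                  # converged: all points correct
            break
    return w

# Toy gamma-margin data: unit-norm points whose margin w.r.t. a target
# halfspace normal u is at least gamma in absolute value.
rng = np.random.default_rng(0)
u = np.array([1.0, 1.0]) / np.sqrt(2)      # target halfspace normal
gamma = 0.1
pts = rng.normal(size=(500, 2))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # unit-norm examples
margins = pts @ u
keep = np.abs(margins) >= gamma            # enforce the gamma-margin condition
X, y = pts[keep], np.sign(margins[keep])

w = perceptron(X, y)
print(np.all(np.sign(X @ w) == y))         # prints: True
```

With γ = 0.1 the bound guarantees at most 100 updates before convergence; halving γ quadruples that bound, which is exactly the quadratic dependence the paper's Õ(1/γ) + O(log n) algorithm improves on.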
