An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization

Abstract

We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(1/ε)) DFAL iterations, which require O(ψ_max^1.5 / (d_min ε)) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian, and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods; and demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.
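The per-node work counted in the bound above consists of standard proximal gradient steps on a composite objective. As a rough illustration of what one such step looks like, here is a minimal single-node sketch of a proximal gradient loop on a plain LASSO objective; this is not the paper's DFAL algorithm or its sparse-group LASSO setup, and all function and variable names are illustrative assumptions.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1, i.e. soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, n_iters=500):
    # Minimize the composite objective 0.5*||Ax - b||^2 + lam*||x||_1
    # by alternating a gradient step on the smooth part with the prox
    # of the nonsmooth part. Step size 1/L, where L = ||A||_2^2 is the
    # Lipschitz constant of the smooth gradient.
    L = np.linalg.norm(A, 2) ** 2
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                  # gradient of the smooth term
        x = prox_l1(x - step * grad, step * lam)  # prox step on the l1 term
    return x

# Usage on synthetic data (hypothetical sizes, for illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20) * (rng.random(20) < 0.2)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = proximal_gradient(A, b, lam=0.1)
```

In DFAL, steps of this kind would be applied to each node's local subproblem within the augmented Lagrangian framework, with communication restricted to graph neighbors; the sketch above only shows the centralized building block.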
