Abstract
Online optimization has been a successful framework for solving large-scale problems under computational constraints and partial information. Current methods for online convex optimization require either a projection or an exact gradient computation at each step, both of which can be prohibitively expensive for large-scale applications. At the same time, there is a growing trend toward non-convex optimization in the machine learning community, and a corresponding need for online methods. Continuous DR-submodular functions, which exhibit a natural diminishing-returns condition, have recently been proposed as a broad class of non-convex functions that can be efficiently optimized. Although online methods for maximizing such functions have been introduced, they suffer from similar computational drawbacks. In this work, we propose Meta-Frank-Wolfe, the first online projection-free algorithm that uses stochastic gradient estimates. The algorithm relies on a careful sampling of gradients in each round and achieves the optimal $O(\sqrt{T})$ adversarial regret bounds for convex and continuous submodular optimization. We also propose One-Shot Frank-Wolfe, a simpler algorithm which requires only a single stochastic gradient estimate in each round and achieves an $O(T^{2/3})$ stochastic regret bound for convex and continuous submodular optimization. We apply our methods to develop a novel "lifting" framework for online discrete submodular maximization, and show that they outperform current state-of-the-art techniques on various experiments.
1 Yale Institute for Network Science, Yale University, New Haven, CT, USA 2 Department of Electrical Engineering, Yale University 3 Department of Computer Science, Yale University 4 Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA. Correspondence to: Lin Chen.
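To make the projection-free idea concrete, below is a minimal illustrative sketch of a single online round in the spirit of One-Shot Frank-Wolfe: one stochastic gradient estimate per round and a linear optimization step in place of a projection. This is not the paper's exact algorithm; the simplex feasible set, the step-size schedule, and the `stochastic_grad` oracle are assumptions made here for illustration only.

```python
# Illustrative sketch (not the paper's exact method): one projection-free
# online round using a single stochastic gradient estimate.
import numpy as np

def linear_oracle_simplex(g):
    """argmax over the probability simplex of <g, v>: mass on one coordinate."""
    v = np.zeros_like(g)
    v[np.argmax(g)] = 1.0
    return v

def one_shot_fw_round(x, t, stochastic_grad):
    """One online round: a single noisy gradient, no projection step."""
    g = stochastic_grad(x)            # one stochastic gradient estimate (assumed oracle)
    v = linear_oracle_simplex(g)      # linear optimization replaces projection
    step = 1.0 / np.sqrt(t + 1)       # diminishing step size (illustrative choice)
    return (1 - step) * x + step * v  # convex combination keeps x feasible
```

Because the update is a convex combination of feasible points, the iterate never leaves the feasible set, which is what lets the method avoid the (often expensive) projection required by gradient-based online algorithms.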