Accelerated Bregman Proximal Gradient Method for Relatively Smooth Convex Optimization
Mathematics of Data & Decisions

Speaker: Lin Xiao, Microsoft Research
Location: 1147 MSB
Start time: Tue, May 28 2019, 4:10PM
We consider the problem of minimizing the sum of two convex functions: one is differentiable and relatively smooth with respect to a reference convex function, and the other can be nondifferentiable but simple to optimize. The relative smoothness condition is much weaker than the standard assumption of uniform Lipschitz continuity of the gradients, and thus significantly broadens the scope of potential applications. We present accelerated Bregman proximal gradient (ABPG) methods that employ the Bregman distance of the reference function as the proximity measure. The convergence rate of these methods is determined by a triangle scaling property of the Bregman distance. While we cannot yet prove an a priori guarantee, the adaptive variants of ABPG generate simple a posteriori numerical certificates for the actual attained convergence rates. We present numerical experiments with three applications: D-optimal experiment design, Poisson linear inverse problems, and relative-entropy nonnegative regression. In all experiments, we obtain numerical certificates for the O(1/k^2) rate, matching the optimal rate for minimizing uniformly smooth functions. This is joint work with Filip Hanzely and Peter Richtarik.
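To make the setup concrete, here is a minimal sketch of the basic (unaccelerated) Bregman proximal gradient step when the reference function is the negative entropy on the probability simplex, so the Bregman distance is the KL divergence and the step has a closed multiplicative form. The function name, the quadratic test objective, and the choice of relative-smoothness constant `L` are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def bpg_simplex(grad_f, x0, L, iters):
    """Bregman proximal gradient on the simplex with entropy kernel.

    Each step solves  argmin_x <grad_f(x_k), x> + L * KL(x, x_k)
    over the simplex, which reduces to a multiplicative update
    followed by normalization (exponentiated gradient).
    L is an assumed relative-smoothness constant of f w.r.t. the
    negative-entropy reference function.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        w = x * np.exp(-g / L)   # closed-form Bregman step
        x = w / w.sum()          # renormalize onto the simplex
    return x

# Illustrative use: project onto c by minimizing f(x) = 0.5 * ||x - c||^2,
# whose gradient is x - c; L = 1 suffices on the simplex here.
c = np.array([0.5, 0.3, 0.2])
x = bpg_simplex(lambda x: x - c, np.ones(3) / 3, L=1.0, iters=2000)
```

The accelerated (ABPG) variants discussed in the talk replace this plain step with extrapolated iterates whose rate is governed by the triangle scaling property; the subproblem solved per iteration has the same Bregman-distance form.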