ICCOPT 2013 Talk, Room 2.2, Wednesday, July 31, 18:00-19:30

 Speaker: Dan Garber, Technion - Israel Institute of Technology, Israel
 Title: A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization
 Co-author: Elad Hazan

 Abstract:

The conditional gradient method is a long-studied first-order method for smooth convex optimization. Its main appeal is its low computational cost: each iteration requires only a single linear optimization over the domain, whereas other first-order methods require a projection at every iteration, which is the computational bottleneck in many large-scale optimization tasks. The drawback of the conditional gradient method is its relatively slow convergence rate. In this work we present a new conditional gradient algorithm for smooth and strongly convex optimization over polyhedral sets that attains a linear convergence rate, an exponential improvement over previous results. We extend the algorithm to the online and stochastic settings and give the first conditional gradient algorithm that attains optimal regret bounds for both arbitrary convex losses and strongly convex losses. Our online and stochastic algorithms also require only a single linear optimization step over the domain per iteration.
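To make the projection-free idea concrete, here is a minimal sketch of the classic conditional gradient (Frank-Wolfe) method, not the new algorithm from the talk. It minimizes a smooth convex function over the probability simplex, a simple polyhedral set on which the per-iteration linear problem has a closed-form solution (pick the vertex with the smallest gradient coordinate); the objective, step-size rule, and function names are illustrative choices, not taken from the talk.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=2000):
    """Classic conditional gradient (Frank-Wolfe) over the probability simplex.

    Each iteration solves only a *linear* problem, min over the simplex of
    <grad(x), s>, whose minimizer is the vertex e_i with i = argmin_i grad(x)_i.
    No projection is ever computed.
    """
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        i = np.argmin(g)               # linear optimization step over the simplex
        s = np.zeros_like(x)
        s[i] = 1.0                     # vertex e_i of the simplex
        gamma = 2.0 / (t + 2)          # standard diminishing step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Illustrative example: minimize f(x) = 0.5 * ||x - b||^2 over the simplex,
# where b itself lies in the simplex, so the optimum is x = b.
b = np.array([0.1, 0.7, 0.2])
x0 = np.ones(3) / 3
x = frank_wolfe_simplex(lambda x: x - b, x0)
```

The slow O(1/t) convergence of this classic scheme is exactly the drawback the abstract refers to; the talk's contribution is a variant that converges linearly over polyhedral sets.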


 Talk in: Organized Session Wed.D.22 Stochastic and randomized gradient methods for convex optimization
 Cluster: Convex and nonsmooth optimization

