We consider a generic class of iterative convex optimization algorithms that can only perform linear optimization over the feasible region in each iteration. We present lower complexity bounds for these algorithms in both the smooth and nonsmooth settings, and establish the optimality of the classic conditional gradient (CG) method. We also introduce new variants of the CG method and demonstrate their advantages for solving certain convex optimization problems, e.g., those with box-type constraints.
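
To make the setting concrete, here is a minimal sketch (not the paper's method, just the classic CG/Frank-Wolfe iteration it refers to) of an algorithm that touches the feasible region only through a linear optimization oracle. The objective, the box bounds, and the standard step size 2/(k+2) are illustrative assumptions; for a box constraint the linear oracle reduces to a coordinatewise sign check on the gradient.

```python
import numpy as np

def frank_wolfe_box(grad, lower, upper, x0, iters=200):
    """Classic conditional gradient over a box [lower, upper].

    Each iteration calls only a linear optimization oracle:
    s_k = argmin_{lower <= s <= upper} <grad(x_k), s>,
    which for a box is solved coordinatewise by the gradient's sign.
    """
    x = x0.astype(float).copy()
    for k in range(iters):
        g = grad(x)
        # Linear oracle: pick the lower bound where the gradient is
        # positive, the upper bound where it is negative.
        s = np.where(g > 0, lower, upper)
        # Standard open-loop step size for the smooth case (an
        # illustrative choice, not tuned).
        gamma = 2.0 / (k + 2)
        x = x + gamma * (s - x)
    return x

# Illustrative use: minimize 0.5 * ||x - c||^2 over the unit box
# with c = [2, -1] outside the box; the minimizer is the projection
# of c onto the box, i.e., [1, 0].
c = np.array([2.0, -1.0])
x_star = frank_wolfe_box(lambda x: x - c,
                         lower=np.zeros(2), upper=np.ones(2),
                         x0=np.zeros(2))
```

Note that the projection onto the box is never computed; only linear subproblems are solved, which is exactly the oracle model the abstract describes.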