Sparse optimization has found interesting applications in many areas such as machine learning, signal processing, compressive sensing, and medical imaging. This talk introduces a ``smoothing'' approach to sparse optimization that does not smooth the objective directly but instead produces an unconstrained dual problem whose objective is differentiable and enjoys a ``restricted'' strong convexity property. Not only can one apply a rich set of classic techniques such as gradient descent, line search, and quasi-Newton methods to this dual problem, but exact sparse solutions and global linear convergence are also guaranteed. In addition, the algorithm parallelizes easily. Numerical examples with tera-scale data are presented.
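
For illustration, a standard instance of such dual smoothing (not necessarily the exact model of the talk) is the augmented $\ell_1$ problem for recovering a sparse $x$ from linear measurements $Ax=b$:
\[
\min_{x}\ \|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2 \quad \text{subject to } Ax = b .
\]
Its Lagrange dual is unconstrained and differentiable,
\[
\max_{y}\ D(y) = b^\top y - \frac{\alpha}{2}\,\bigl\|\operatorname{shrink}(A^\top y,\,1)\bigr\|_2^2 ,
\qquad
\nabla D(y) = b - \alpha A\,\operatorname{shrink}(A^\top y,\,1),
\]
where $\operatorname{shrink}(z,1) = \operatorname{sign}(z)\max(|z|-1,\,0)$ acts componentwise. Gradient ascent on $D$ (or line-search and quasi-Newton variants) recovers the primal solution through $x = \alpha\,\operatorname{shrink}(A^\top y,\,1)$, and the gradient involves only matrix--vector products with $A$ and $A^\top$, which is what makes parallelization straightforward.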