In this talk we propose a new dual decomposition algorithm for solving large-scale separable convex optimization problems. The algorithm combines three techniques: dual Lagrangian decomposition, smoothing, and the excessive gap condition. It requires only one primal step and two dual steps at each iteration. The primal step can be carried out in parallel, while the dual steps require only matrix-vector multiplications. Moreover, the algorithmic parameters are updated automatically, without the tuning strategies needed in augmented Lagrangian approaches. We analyze the convergence of the algorithm and establish its $O(1/\epsilon)$ worst-case complexity. We then combine this algorithm with a two-primal-step scheme to obtain a primal-dual switching variant. We discuss implementation aspects of these algorithms, including distributed implementations and inexact variants. Numerical examples are presented to verify the theoretical development.
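To illustrate the flavor of dual decomposition on a separable problem, here is a minimal sketch on a hypothetical toy instance (not from the talk): minimize $(x_1-c_1)^2 + (x_2-c_2)^2$ subject to the coupling constraint $x_1 + x_2 = b$. The two primal subproblems are independent and could be solved in parallel, and the dual update reduces to evaluating the constraint residual. Note that this plain subgradient-style dual update uses a hand-picked step size, unlike the proposed algorithm, whose parameters update automatically.

```python
# Hypothetical toy instance: min (x1-c1)^2 + (x2-c2)^2  s.t.  x1 + x2 = b.
c1, c2, b = 1.0, 3.0, 2.0

lam = 0.0       # dual multiplier for the coupling constraint
alpha = 0.5     # hand-picked dual step size (the talk's scheme needs no such tuning)
for _ in range(100):
    # primal step: two independent subproblems, each with a closed-form
    # minimizer of its Lagrangian term; these could run in parallel
    x1 = c1 - lam / 2.0     # argmin_x (x - c1)^2 + lam * x
    x2 = c2 - lam / 2.0
    # dual step: only a constraint-residual evaluation
    lam += alpha * (x1 + x2 - b)

print(x1, x2, x1 + x2 - b)   # residual converges to zero
```

For this instance the dual iteration contracts geometrically to the multiplier $\lambda^* = c_1 + c_2 - b$, after which the coupling constraint is satisfied.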