We consider distributed optimization in which $N$ nodes in a network cooperatively minimize the sum of their individual convex costs over a common (global) optimization variable. Such problems arise in many relevant applications, including distributed inference, source localization in sensor networks, and distributed machine learning. We establish a global linear convergence rate for a class of distributed augmented Lagrangian algorithms when the nodes' local costs are twice continuously differentiable and have bounded Hessians. Further, unlike most existing work, we explicitly characterize the dependence of the convergence rate on the topology (algebraic connectivity) of the underlying network. Numerical simulations confirm the analytical results.
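For concreteness, the problem described above can be stated in the following standard form (the symbols $f_i$, $x$, and $d$ are our notation, not taken from the source): each node $i$ holds a private convex cost $f_i:\mathbb{R}^d \to \mathbb{R}$, and the network cooperatively solves
\begin{equation*}
\min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{N} f_i(x).
\end{equation*}
The algebraic connectivity referenced in the rate is, in the usual convention, the second-smallest eigenvalue of the graph Laplacian of the underlying communication network.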