We present a random block coordinate descent method suited for large convex optimization problems over networks, where the information cannot be gathered centrally but is instead distributed over the network. Moreover, we focus on optimization problems with linearly coupled constraints (i.e., the constraint set couples the variables). Due to this coupling in the constraints we introduce a 2-block variant of the random coordinate descent method that, at each iteration, solves in closed form an optimization problem with respect to two block variables while keeping all the others fixed. We prove that the new algorithm attains an expected convergence rate of order O(1/k) in the function values, where k is the iteration counter. We also study how to design the probabilities so that the algorithm converges as fast as possible, and we prove that this design problem can be recast as a sparse SDP. We further show that for functions with cheap coordinate derivatives the new method is much faster than schemes based on full gradient information. An analysis of the rate of convergence in probability is also provided. For strongly convex functions we prove that the new method converges linearly. While the most obvious benefit of randomization is that it can lead to faster algorithms, our algorithm has further advantages: randomization leads to a simpler algorithm, produces a more robust output, and can be organized to exploit modern computational architectures.
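
For concreteness, the following is a minimal sketch of the kind of 2-block update described above, under illustrative assumptions that are not taken from the paper: a separable quadratic objective f(x) = sum_i (1/2) q_i x_i^2 + c_i x_i, a single coupling constraint sum_i x_i = b, and uniform pair-selection probabilities (the paper instead designs these probabilities via a sparse SDP). Updating a pair (x_i, x_j) by a common shift delta keeps the coupled constraint satisfied exactly, and for quadratics the minimizing shift is available in closed form.

    import numpy as np

    # Illustrative problem data (assumed, not from the paper):
    #   min_x  sum_i 0.5*q[i]*x[i]**2 + c[i]*x[i]   s.t.  sum_i x[i] = b
    rng = np.random.default_rng(0)
    n = 10
    q = rng.uniform(1.0, 5.0, n)   # positive curvatures -> strongly convex
    c = rng.normal(size=n)
    b = 1.0

    x = np.full(n, b / n)          # feasible start: sum(x) == b

    for k in range(5000):
        # Sample a pair of distinct blocks (uniform probabilities here;
        # the paper optimizes this distribution via a sparse SDP).
        i, j = rng.choice(n, size=2, replace=False)
        # Closed-form minimizer over (x[i], x[j]) subject to x[i] + x[j]
        # staying fixed, which preserves the coupling constraint:
        # set d/d(delta) [f_i(x_i + delta) + f_j(x_j - delta)] = 0.
        delta = (q[j] * x[j] + c[j] - q[i] * x[i] - c[i]) / (q[i] + q[j])
        x[i] += delta
        x[j] -= delta

    print("constraint residual:", x.sum() - b)

Because each step touches only two blocks, the per-iteration cost is independent of n, which is the source of the speedup over full-gradient schemes when coordinate derivatives are cheap.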