Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the $\ell_1$ minimization method for identifying a sparse vector from random linear samples. We present a rigorous analysis that explains why phase transitions are ubiquitous in convex optimization, and we describe tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems, to demixing problems, and to cone programs with random affine constraints. These applications depend on a new summary parameter, the statistical dimension of a cone, which canonically extends the dimension of a linear subspace to the class of convex cones.
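For concreteness, one standard way to formulate this parameter (a sketch of the definition, which the abstract itself does not restate): for a closed convex cone $C \subseteq \mathbb{R}^d$, let $\Pi_C$ denote the Euclidean projection onto $C$ and let $g \sim \mathcal{N}(0, I_d)$ be a standard Gaussian vector; then
\[
\delta(C) := \mathbb{E}\big[ \| \Pi_C(g) \|^2 \big].
\]
In particular, when $C = L$ is a linear subspace, $\mathbb{E}\big[\|\Pi_L(g)\|^2\big] = \dim L$, which is the sense in which the statistical dimension canonically extends the dimension of a subspace to the class of convex cones.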