We consider the minimization of biconvex functions: functions whose domain is a Cartesian product of two components, and which are convex in either component when the other is held fixed. Such biconvex functions arise in many settings in contemporary machine learning, including sparse coding and conditional random fields, among others. For a large subclass of such biconvex functions, we provide an equivalent convex reparameterization, so that these problems can now be solved both efficiently and tractably. This is in contrast to existing methods, which obtain convex upper bounds for such biconvex functions. We also empirically demonstrate the utility of our approach in two applications involving sparse coding and Gaussian conditional random fields.
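For reference, a minimal LaTeX sketch of the standard definition of biconvexity; the symbols $f$, $\mathcal{X}$, and $\mathcal{Y}$ are illustrative notation, not taken from this paper:

% A function f : X x Y -> R is biconvex if it is convex in each block
% of variables whenever the other block is held fixed (illustrative notation).
\[
  f(\cdot, y) \ \text{is convex on}\ \mathcal{X}\ \ \forall\, y \in \mathcal{Y},
  \qquad
  f(x, \cdot) \ \text{is convex on}\ \mathcal{Y}\ \ \forall\, x \in \mathcal{X}.
\]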