In this talk we show how to modify a large class of evolution strategies (ES) to rigorously achieve a form of global convergence. The modifications consist essentially of reducing the step size whenever a sufficient decrease condition on the function values is not satisfied. When the condition holds, the step size can be reset to the one maintained by the ES themselves, provided the latter is sufficiently large. We have also investigated how to adapt the ES algorithms to handle linearly constrained problems. Our numerical experiments show that a modified version of CMA-ES (a relevant instance of the considered ES) is capable of further minimization progress within moderate budgets. Moreover, we observed that this gain in efficiency comes without significantly deteriorating the behavior of the underlying method in the presence of nonconvexity.
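The step-size safeguard described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the forcing function `rho`, the contraction factor `beta`, and the simple (1+1)-style Gaussian sampling are all illustrative choices, and a real instance would reset the step size from the one maintained by the underlying ES (e.g. CMA-ES) rather than keep it fixed on acceptance.

```python
import random

def rho(sigma, c=1e-4):
    # Illustrative forcing function: positive and o(sigma) as sigma -> 0.
    return c * sigma ** 2

def es_with_sufficient_decrease(f, x0, sigma0=1.0, beta=0.5,
                                max_iters=200, seed=0):
    """Sketch of an ES iteration with a sufficient decrease safeguard:
    a trial point is accepted only if
        f(trial) <= f(x) - rho(sigma),
    otherwise the step size is contracted by beta. On acceptance, the
    step size would be reset from the ES's own adaptation; here it is
    simply kept, as a stand-in for that reset."""
    rng = random.Random(seed)
    x, sigma = list(x0), sigma0
    fx = f(x)
    for _ in range(max_iters):
        trial = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        ft = f(trial)
        if ft <= fx - rho(sigma):
            x, fx = trial, ft   # sufficient decrease holds: accept
        else:
            sigma *= beta       # condition fails: reduce the step size
    return x, fx

# Usage: minimize the sphere function from a nonzero start.
sphere = lambda v: sum(vi * vi for vi in v)
x_best, f_best = es_with_sufficient_decrease(sphere, [2.0, -1.5])
```

The key design point is that rejection always contracts the step size, which is what drives the convergence analysis; the ES's own step-size adaptation is consulted only after a successful (sufficiently decreasing) iteration.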