Derivative-free optimization algorithms are often employed in settings where the evaluation of an objective function (or constraint functions) is a computational bottleneck. For such problems, heuristics are still widely used in practice, often because they admit natural parallelism that allows a user to perform many simultaneous evaluations. In this talk we present our experiences in developing zero-order, model-based methods that perform a user-specified number of simultaneous evaluations. We also discuss gaps between convergence guarantees and performance in practice.
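To make the batch-evaluation idea concrete, here is a minimal sketch (not the speakers' method) of a zeroth-order loop that evaluates a user-specified number of candidate points simultaneously. The function names (`objective`, `parallel_batch_search`) and the choice of plain random sampling are illustrative assumptions; a model-based method would instead fit a surrogate to past evaluations and propose the batch from that model.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def objective(x):
    # Stand-in for an expensive black-box evaluation (e.g. a
    # simulation); in the motivating setting this call is the
    # computational bottleneck.
    return sum(xi * xi for xi in x)

def parallel_batch_search(f, dim, batch_size, n_iters, seed=0):
    """Illustrative zeroth-order loop: each iteration proposes
    `batch_size` candidates and evaluates them concurrently,
    keeping the best point found so far."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    with ThreadPoolExecutor(max_workers=batch_size) as pool:
        for _ in range(n_iters):
            # Hypothetical sampling rule: uniform in [-1, 1]^dim.
            batch = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
                     for _ in range(batch_size)]
            # The user-specified batch size controls how many
            # evaluations run simultaneously.
            values = list(pool.map(f, batch))
            for x, v in zip(batch, values):
                if v < best_f:
                    best_x, best_f = x, v
    return best_x, best_f
```

The key design point this sketch illustrates is that the degree of parallelism is a user parameter: the optimizer must propose `batch_size` points per iteration whether or not it has new information, which is one source of the gap between serial convergence theory and parallel practice that the talk discusses.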