Traditional analysis of model-based derivative-free optimization methods relies on the worst-case behavior of the algorithmic steps and of the models involved: the models and the iterates must satisfy certain conditions at every iteration to guarantee convergence. Such requirements are difficult or costly to enforce in practice and are often ignored in practical implementations. We will present a probabilistic viewpoint for such algorithms, showing that convergence still holds even if the required properties fail with a sufficiently small probability. We will discuss several settings where this approach is useful, as well as the advantages of using regularized models in the derivative-free setting.
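For illustration, a minimal sketch of a condition of this probabilistic type (in the spirit of probabilistically fully linear models; the constants $\kappa_{ef}$, $\kappa_{eg}$ and the failure probability $\delta$ are illustrative assumptions, not quantities fixed in this abstract) asks that, conditioned on the history $\mathcal{F}_{k-1}$ of the algorithm, the model $m_k$ approximates $f$ well on the trust region $B(x_k,\Delta_k)$ with probability at least $1-\delta$:

\[
\mathbb{P}\Bigl(\, |f(y)-m_k(y)| \le \kappa_{ef}\,\Delta_k^2
\ \text{and}\ \|\nabla f(y)-\nabla m_k(y)\| \le \kappa_{eg}\,\Delta_k
\ \text{ for all } y \in B(x_k,\Delta_k) \ \Bigm|\ \mathcal{F}_{k-1} \Bigr) \ \ge\ 1-\delta.
\]

Deterministic analyses require this kind of accuracy at every iteration; the probabilistic viewpoint requires it only sufficiently often, which is what allows occasional model failures without destroying convergence.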