In this talk we present parallelization and extensions of model-building algorithms for derivative-free optimization. Such algorithms are inherently sequential, and this work represents the first steps towards a fully parallel algorithm. In each iteration, we run several instances of an optimization algorithm with different trust-region parameters, and each instance generates at least one point for evaluation. All points are kept in a priority queue, and the most promising points are evaluated in parallel as computing resources become available. We use models from several instances to prioritize the points, and we allow dynamic reprioritization so that computational resources are used efficiently when new information becomes available. A database is used to avoid reevaluating points. Together, these extensions make it easier to find several local optima and rank them against each other, which is very useful when performing robust optimization. The initial model has so far been built sequentially; here we present the first results of completely parallel model building. Empirical testing reveals considerable decreases both in the number of function evaluations and in the time required to solve problems. We show results from testing a variety of parameter settings, most notably different trust-region transformations, whose purpose is to make the trust-region subproblem easier to solve and/or to allow longer steps in certain directions.
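The evaluation pipeline described above can be sketched as follows. This is a minimal illustrative Python sketch, not the authors' implementation: the scoring scheme, function names, and the dictionary standing in for the database are all assumptions, and dynamic reprioritization of queued points is omitted for brevity.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def evaluate_batch(f, candidates, scores, cache, n_workers=4):
    """Evaluate the most promising candidate points in parallel.

    candidates: points (as tuples) proposed by the algorithm instances.
    scores: a model-based priority per point (lower = more promising),
        e.g. aggregated predictions from several trust-region instances.
    cache: dict mapping a point to its objective value, standing in for
        the database that avoids reevaluation of points.
    All names and the scoring scheme are illustrative assumptions.
    """
    # Priority queue ordered by the model-based score.
    queue = [(s, p) for s, p in zip(scores, candidates)]
    heapq.heapify(queue)

    to_run, seen = [], set()
    while queue:
        _, point = heapq.heappop(queue)
        # "Database" lookup: skip points already evaluated or scheduled.
        if point not in cache and point not in seen:
            seen.add(point)
            to_run.append(point)

    # Evaluate the remaining points in parallel on the available workers.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for point, value in zip(to_run, pool.map(f, to_run)):
            cache[point] = value
    return cache

# Usage: a toy objective and candidates from two hypothetical instances,
# including a duplicate proposal that the cache/dedup logic filters out.
f = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
cands = [(0.0, 0.0), (1.0, 0.0), (0.0, 0.0)]
cache = {}
evaluate_batch(f, cands, [2.0, 1.0, 2.0], cache)
```

In a full implementation, the queue would persist across iterations and point priorities would be updated as new model information arrives, rather than being consumed in a single batch as here.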