In this talk, we present a derivative-free optimization algorithm that exploits the smooth substructure of the finite minimax problem. Following the work of Burke, Lewis, and Overton (2005), we adopt the framework of their gradient sampling algorithm. The algorithm computes an approximate gradient for each of the active functions of the finite max function and uses these to generate an approximate subdifferential. The negative projection of $0$ onto this set is used as a descent direction in an Armijo-like line search. We also present a robust version of the algorithm, which includes the `almost active' functions, i.e., those active within a sampling ball of the current iterate, in the calculation of the approximate subdifferential. The steepest-descent-like framework enables us to analyze the convergence of both algorithms directly, showing that either $f(x^k) \to -\infty$ or every cluster point of $\{x^k\}$ is a Clarke stationary point. A stopping condition is presented for both the regular and robust versions of the algorithm, and, using the Goldstein approximate subdifferential, extended analysis is given for a robust stopping condition. Numerical results are presented, and the performance of the regular and robust versions of the algorithm is compared for three specific approximate gradients.
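The core step described above can be sketched in code. The following is a hypothetical illustration, not the authors' implementation: forward differences stand in for the approximate gradients, $\varepsilon$-activity (by function value) stands in for the activity test, and the projection of $0$ onto the convex hull of the active gradients is computed by a simple Frank--Wolfe iteration. All function names, tolerances, and the two-piece test problem are assumptions for the sake of the example.

```python
# Hypothetical sketch of the descent step: approximate gradients of the
# (nearly) active pieces, project 0 onto their convex hull, and use the
# negative projection as a descent direction in an Armijo-like line search.
import numpy as np

def fd_grad(fi, x, h=1e-6):
    """Forward-difference approximate gradient of one smooth piece f_i."""
    g = np.zeros_like(x)
    f0 = fi(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        g[j] = (fi(x + e) - f0) / h
    return g

def min_norm_element(G, iters=500):
    """Frank-Wolfe for min ||lam @ G||^2 over the simplex: the projection
    of 0 onto conv{g_1, ..., g_m}, where the rows of G are the gradients."""
    m = G.shape[0]
    lam = np.full(m, 1.0 / m)
    for k in range(iters):
        v = lam @ G                    # current point in the hull
        grad = 2.0 * G @ v             # gradient of ||lam @ G||^2 w.r.t. lam
        j = np.argmin(grad)            # best vertex (Frank-Wolfe step)
        step = 2.0 / (k + 2.0)
        lam = (1.0 - step) * lam + step * np.eye(m)[j]
    return lam @ G

def minimax_descent(fs, x0, eps=1e-3, tol=1e-8, maxit=200):
    """Minimize f(x) = max_i f_i(x) via the projected-subdifferential step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        vals = np.array([fi(x) for fi in fs])
        fmax = vals.max()
        # epsilon-active pieces (a value-based stand-in for the activity test)
        active = [i for i in range(len(fs)) if vals[i] >= fmax - eps]
        G = np.array([fd_grad(fs[i], x) for i in active])
        p = min_norm_element(G)        # projection of 0 onto the hull
        if p @ p < tol:                # approximate Clarke stationarity
            break
        d = -p
        t = 1.0
        # Armijo-like backtracking on the max function itself
        while max(fi(x + t * d) for fi in fs) > fmax - 1e-4 * t * (p @ p):
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * d
    return x

# Two-piece test problem: f(x) = max(x1^2 + x2^2, (x1 - 1)^2),
# whose minimax point is (0.5, 0) with value 0.25.
fs = [lambda x: x[0]**2 + x[1]**2, lambda x: (x[0] - 1.0)**2]
x = minimax_descent(fs, [2.0, 2.0])
```

The robust variant differs only in how `active` is built: it would include any piece that is active somewhere in a sampling ball around the current iterate, enlarging the approximate subdifferential and stabilizing the direction near kinks.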