We consider constrained optimization problems in which both the objective and the constraint functions may be nonsmooth and nonconvex and are not assumed to have any special structure. In 2012, Curtis and Overton presented a sequential quadratic programming (SQP) algorithm based on gradient sampling, with a steering strategy to control the exact penalty parameter, proving convergence results that generalize those of Burke, Lewis, and Overton and of Kiwiel for the unconstrained problem. Their algorithm uses a BFGS approximation to define the ``Hessian'' matrix $H$ appearing in the QP subproblems, but in order to obtain convergence results, upper and lower bounds on the eigenvalues of $H$ must be enforced. On the other hand, Lewis and Overton have argued that in the unconstrained case a simple BFGS method is much more efficient in practice than gradient sampling, even though the Hessian approximation $H$ typically becomes very ill-conditioned and no general convergence results are known. We consider an SQP method for the constrained problem based on BFGS approximation without gradient sampling, and ask: does allowing ill-conditioning in $H$ lead to the same desirable convergence behavior in practice as in the unconstrained case, or does the cost of solving ill-conditioned QPs outweigh any benefit that the ill-conditioning brings? We test the algorithm on some simple examples as well as on challenging applied problems from feedback control.
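The ill-conditioning phenomenon the abstract refers to is easy to reproduce numerically. The following sketch is not from the paper: it is a minimal illustration, under assumed details (the test function, the starting point, and a simple backtracking Armijo line search are all choices made here for demonstration). It runs a plain BFGS method on the nonsmooth convex function $f(x) = |x_1| + \tfrac{1}{2}x_2^2$ and tracks the condition number of the Hessian approximation $H$, which grows rapidly as the iterates approach the nonsmooth minimizer, in line with the behavior Lewis and Overton describe for the unconstrained case.

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of the Hessian approximation H.
    Skipped when the curvature condition s'y > 0 fails, so that
    H stays symmetric positive definite."""
    sy = s @ y
    if sy <= 1e-12:
        return H
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / sy

# Illustrative nonsmooth test function: f(x) = |x_1| + 0.5*x_2^2.
# Its gradient exists everywhere except on the line x_1 = 0.
def f(x):
    return abs(x[0]) + 0.5 * x[1] ** 2

def grad(x):
    return np.array([np.sign(x[0]), x[1]])

x = np.array([0.9, 1.0])   # assumed starting point
H = np.eye(2)
conds = []
for _ in range(30):
    gx = grad(x)
    d = -np.linalg.solve(H, gx)   # quasi-Newton direction
    # Backtracking Armijo line search (gradient exists a.e.).
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (gx @ d) and t > 1e-16:
        t *= 0.5
    s = t * d
    x = x + s
    y = grad(x) - gx
    H = bfgs_update(H, s, y)
    conds.append(np.linalg.cond(H))

print(f"f(x) = {f(x):.2e}, cond(H) = {conds[-1]:.2e}")
```

The condition number of $H$ climbs by orders of magnitude even on this two-variable problem, while $f$ is nevertheless driven toward its minimum; this is the tension the abstract's question is about, since in the constrained setting each iteration must solve a QP built from this increasingly ill-conditioned $H$.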