Reliable parameter estimation of nonlinear systems can aid scientific discovery, improve understanding of fundamental processes, and provide effective models for subsequent optimization. The success of nonlinear programming techniques for parameter estimation has led to continued growth in problem size as we improve model rigor and complexity. The growth of these optimization problems continues to outpace the capabilities of serial solution approaches on modern desktop computers, which drives the development of efficient parallel solution algorithms. Fortunately, while these problems are large-scale, they are inherently block structured. Common block structures arise in a number of problem types, including dynamic optimization, parameter estimation, and nonlinear stochastic programming. Many tools and algorithms have been developed to exploit this block structure and solve the nonlinear program (NLP) in parallel. In this presentation, we describe two approaches for the efficient parallel solution of very large-scale nonlinear parameter estimation problems: a progressive hedging approach, and an implicit Schur-complement decomposition based on parallel solution of the KKT system arising in interior-point methods. We will show timing and speedup results for several estimation problems on both shared and distributed computing architectures, and provide conclusions regarding the implementation and performance of these two methods.
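To illustrate the second approach, the sketch below shows how a block-bordered linear system, of the kind that arises as the KKT system when scenario blocks are coupled only through shared parameters, can be solved via a Schur complement on the coupling variables. This is a minimal dense-algebra toy (all matrices, sizes, and data are invented for illustration, and the blocks here are made symmetric positive definite for simplicity), not the implementation discussed in the presentation; the key point is that every per-block factorization and back-solve in the loops is independent and can therefore be performed in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy block-bordered system: S scenario blocks K_i, each coupled to shared
# variables x0 through a border block B_i (sizes are arbitrary toy values):
#   [K_1            B_1 ] [x_1]   [r_1]
#   [     ...       ... ] [...] = [...]
#   [          K_S  B_S ] [x_S]   [r_S]
#   [B_1^T ... B_S^T K_0] [x_0]   [r_0]
S, n, m = 3, 5, 2  # number of blocks, block size, number of coupling variables
K = [A @ A.T + n * np.eye(n)
     for A in (rng.standard_normal((n, n)) for _ in range(S))]
B = [rng.standard_normal((n, m)) for _ in range(S)]
K0 = 50.0 * np.eye(m)  # coupling block, scaled so the Schur complement is nonsingular
r = [rng.standard_normal(n) for _ in range(S)]
r0 = rng.standard_normal(m)

# Form the Schur complement on the coupling variables:
#   (K_0 - sum_i B_i^T K_i^{-1} B_i) x_0 = r_0 - sum_i B_i^T K_i^{-1} r_i.
# Each term touches only one block, so this loop is trivially parallelizable.
schur = K0.copy()
rhs = r0.copy()
for Ki, Bi, ri in zip(K, B, r):
    schur -= Bi.T @ np.linalg.solve(Ki, Bi)
    rhs -= Bi.T @ np.linalg.solve(Ki, ri)

# Solve the small coupled system, then recover each block in parallel.
x0 = np.linalg.solve(schur, rhs)
x = [np.linalg.solve(Ki, ri - Bi @ x0) for Ki, Bi, ri in zip(K, B, r)]
```

In a real interior-point implementation the block solves would use sparse symmetric-indefinite factorizations and the per-block work would be distributed across processes, but the decomposition structure is the same.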