 # Adaptive Computation Methods for Nonlinear Systems

Last Updated: 29 March 2019

Contact Information:
Leigh Tesfatsion
Research Professor, and Professor Emerita of Economics, Mathematics, and Electrical & Computer Engineering
Heady Hall 260, Iowa State University, Ames, Iowa 50011-1054
http://www2.econ.iastate.edu/tesfatsi/
tesfatsi AT iastate.edu

## Research Summary

Ideally, analysis tools should adapt to the problem at hand rather than requiring the problem to be adapted to the tools.

In a series of studies with Robert Kalaba and other collaborators, gathered together in the final section below, three adaptive computation techniques are developed to facilitate the direct analysis of real-world nonlinear systems in whatever form they actually appear:

• Nonlocal comparative static analysis for tracking solutions of parameterized nonlinear systems along parameter paths;

• Automatic higher-order partial derivative evaluation through forward-mode derivative arrays;

• Adaptive homotopy methods with the continuation parameter modeled as an intelligent agent moving from 0+0i to 1+0i through the complex plane along an adaptively chosen path in a spiderweb grid.

In economic theory we are often interested in understanding how the solution vector x(b) of a parameterized system of equations F(x,b)=0 varies as the parameter vector b ranges over some interval of interest. The well-known fundamental differential equation of comparative statics,
dx(b)/db = -Fx(x(b),b)^(-1) Fb(x(b),b),
can be used to determine the nature of the change in the solution vector x(b) at a particular value b0 for the parameter vector b. However, this fundamental differential equation is analytically incomplete.

More precisely, the integration of this fundamental differential equation from initial conditions over a parameter interval would typically require the additional algebraic determination of the Jacobian inverse at each step in the integration process. For this reason many economists interpret the fundamental differential equation as a purely algebraic relation that can be used to qualitatively sign the change in the solution at the particular parameter value. Numerical integration using specific functional forms is generally not attempted.

In Kalaba and Tesfatsion (1981) the fundamental differential equation is completed by incorporating ordinary differential equations for the Jacobian inverse. As illustrated for optimal growth and taxation problems in Kalaba, Tesfatsion, and Wang (1981), this analytically complete system of ordinary differential equations can be used to track solutions for systems of parameterized nonlinear equations over parameter intervals. Thus, this complete differential system facilitates the nonlocal comparative static analysis of economic systems.
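The completion idea can be sketched in a few lines for a scalar toy system. This is an illustrative Python sketch, not the Fortran NASA code; the function name and the example system F(x,b) = x^3 + x - b = 0 are invented for this illustration:

```python
# Sketch of the "complete" comparative-static differential system of
# Kalaba and Tesfatsion (1981), for the toy system F(x,b) = x^3 + x - b = 0.
# Illustrative only; all names here are hypothetical.

def track_solution(b0=0.0, b1=2.0, steps=2000):
    """Track x(b) from b0 to b1 by jointly integrating
         dx/db = -W * Fb        (fundamental comparative-static equation)
         dW/db = -W * dFx/db * W  (ODE for the Jacobian inverse W = Fx^(-1))
    so that no algebraic inversion is needed after the initial point."""
    h = (b1 - b0) / steps
    x = 0.0                       # solution at b0 = 0, since F(0, 0) = 0
    W = 1.0 / (3.0 * x**2 + 1.0)  # initial Jacobian inverse, Fx = 3x^2 + 1
    for _ in range(steps):
        dx = W * 1.0              # -W * Fb, with Fb = dF/db = -1 here
        dFx = 6.0 * x * dx        # change in Fx along the solution path
        dW = -W * dFx * W         # the inverse evolves by its own ODE
        x += h * dx               # simple Euler step (for clarity only)
        W += h * dW
    return x

x_final = track_solution()        # exact root of x^3 + x = 2 is x = 1
```

Because the Jacobian inverse W is carried along as an additional state variable, no algebraic inversion is performed after the initial point, which is what makes the differential system "complete" in the authors' sense.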

For equilibrium and optimal growth studies, it is often important to be able to determine the stability and definiteness properties of parameterized Jacobian matrices. These properties can be determined from a knowledge of the eigenvalues of the Jacobian matrix. The complete differential system developed in Kalaba and Tesfatsion (1981) for the nonlocal sensitivity analysis of general nonlinear systems turns out to be useful, as well, for the sensitivity study of parameterized matrices and their associated systems of eigenvalues and eigenvectors.

Specifically, in Kalaba, Spingarn, and Tesfatsion (1981a) it is shown how the eigenvalues and the right and left eigenvectors of a general parameterized matrix can be tracked over parameter intervals by integrating an analytically complete differential system from initial conditions. In Kalaba, Spingarn, and Tesfatsion (1981b) an analogous complete differential system is developed for tracking a single eigenvalue of a parameterized matrix together with one of its corresponding right or left eigenvectors. Finally, in Kalaba, Spingarn, and Tesfatsion (1980) it is shown how the latter system can be used, in particular, to obtain the largest eigenvalue (Frobenius-Perron root) for any positive square matrix, an important problem arising (e.g.) in the study of Leontief input-output systems.
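The flavor of these tracking systems can be conveyed by a small sketch. Differentiating A(b)x = lambda*x together with the normalization x'x = 1 with respect to b yields a bordered linear system for (dx/db, dlambda/db) that can be integrated from initial conditions. The following is an illustrative Python sketch with an invented 2x2 symmetric example, not the authors' code:

```python
import numpy as np

def track_perron(b1=1.0, steps=1000):
    """Track the largest eigenvalue and unit right eigenvector of the
    (hypothetical) symmetric parameterized matrix A(b) = [[2+b, 1], [1, 2]]
    from b = 0, by integrating the ODEs obtained by differentiating
    A(b) x = lam x and x'x = 1 with respect to b."""
    A = lambda b: np.array([[2.0 + b, 1.0], [1.0, 2.0]])
    dA = np.array([[1.0, 0.0], [0.0, 0.0]])   # dA/db is constant here
    h = b1 / steps
    b, lam = 0.0, 3.0
    x = np.array([1.0, 1.0]) / np.sqrt(2.0)   # eigenpair of A(0) at b = 0
    for _ in range(steps):
        # Bordered system: (A - lam I) dx - dlam x = -A' x,  x' dx = 0
        M = np.zeros((3, 3))
        M[:2, :2] = A(b) - lam * np.eye(2)
        M[:2, 2] = -x
        M[2, :2] = x
        rhs = np.concatenate([-(dA @ x), [0.0]])
        sol = np.linalg.solve(M, rhs)         # nonsingular for simple lam
        x = x + h * sol[:2]                   # Euler steps, for clarity
        lam += h * sol[2]
        b += h
    return lam, x

lam_end, x_end = track_perron()  # largest eigenvalue of [[3,1],[1,2]]
```

The bordered matrix is nonsingular whenever the tracked eigenvalue is simple, so the eigenpair can be continued without any algebraic eigen-decomposition after the initial point.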

In the course of implementing the complete differential system developed in Kalaba and Tesfatsion (1981) in various applications, it became clear that an automatic method was needed for evaluating higher-order partial derivatives in order to make the approach practical. In Kalaba, Tesfatsion, and Wang (1983) an algorithm FEED (Fast Efficient Evaluation of Derivatives) is developed for the automatic evaluation of higher-order partial derivatives exact up to round-off and truncation error. FEED overcomes several key difficulties with previously developed automatic differentiation techniques by using forward-mode evaluation based on derivative arrays. Further potentially useful extensions of FEED are developed in Kalaba and Tesfatsion (1986b) and in Kalaba, Plum, and Tesfatsion (1987).
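The derivative-array idea behind FEED can be illustrated with a minimal forward-mode sketch in one variable. The class and method names below are invented for illustration; the published FEED library is Fortran and covers many more special functions:

```python
import math

class DArray:
    """Minimal FEED-style derivative array: coefficients c[k] = f^(k)(t0)/k!
    propagated exactly (up to round-off) through arithmetic in forward mode.
    Illustrative sketch only."""
    def __init__(self, coeffs):
        self.c = list(coeffs)

    @classmethod
    def variable(cls, t0, order):
        return cls([t0, 1.0] + [0.0] * (order - 1))

    def __add__(self, other):
        return DArray(a + b for a, b in zip(self.c, other.c))

    def __mul__(self, other):
        n = len(self.c)
        out = [0.0] * n
        for i in range(n):
            for j in range(n - i):
                out[i + j] += self.c[i] * other.c[j]  # truncated Cauchy product
        return DArray(out)

    def exp(self):
        n = len(self.c)
        out = [math.exp(self.c[0])] + [0.0] * (n - 1)
        for k in range(1, n):  # recurrence from (e^f)' = f' e^f
            out[k] = sum(j * self.c[j] * out[k - j] for j in range(1, k + 1)) / k
        return DArray(out)

    def derivative(self, k):
        return self.c[k] * math.factorial(k)

t = DArray.variable(1.0, order=3)
f = t * t.exp()   # f(t) = t e^t, so f'(1) = 2e, f''(1) = 3e, f'''(1) = 4e
```

Each arithmetic operation updates the whole array of Taylor coefficients at once, so higher-order partials come out exact up to round-off, with no symbolic manipulation and no finite-difference truncation of the user's function.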

The initial conditions needed to integrate the complete differential system developed in Kalaba and Tesfatsion (1981) for a parameterized nonlinear system F(x,b)=0 from any desired starting point b0 for the parameter vector b are as follows: the solution vector x(b0) at b0, together with evaluations for the adjoint and determinant of the Jacobian matrix Fx(x(b),b) at (x(b0),b0). For many nonlinear problems, finding these needed initial values is a difficult matter in and of itself.

In Kalaba and Tesfatsion (1991) an adaptive homotopy is developed for solving nonlinear systems of equations F(x)=0 that adapts to whatever physical problem is at hand. The key new idea is the replacement of the standard homotopy continuation parameter moving from 0 to 1 along the real line with a "smart agent" able to adaptively determine a solution path from 0+0i to 1+0i in the complex plane using multi-criteria optimization. Specifically, the agent sequentially traces out a path from 0+0i to 1+0i in accordance with the following two criteria: (a) efficiency (take as few steps as possible); and (b) accuracy (avoid regions where calculations become ill-conditioned). It is therefore not necessary for the user of the homotopy to know in advance the location of folds and bifurcation points; the continuation agent applies local step-by-step testing to efficiently track around such regions in the complex plane.
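A minimal sketch of the adaptive idea follows, in illustrative Python with an invented candidate-ring search standing in for the paper's spider-web grid. For f(x) = x^2 + 1, the Newton homotopy H(x,lam) = f(x) - (1-lam)f(x0) has a singular point (H_x = 0) on the real lam-path from a real starting guess, but an agent testing complex candidate steps simply detours around it:

```python
import cmath

def solve_adaptive(x0=2.0, step=0.1, cond_floor=0.3):
    """Toy adaptive homotopy for f(x) = x^2 + 1 = 0 via the Newton homotopy
       H(x, lam) = f(x) - (1 - lam) f(x0).
    The continuation 'agent' tests a ring of candidate complex steps for lam
    and picks one that advances toward 1+0i while keeping |H_x| (a crude
    conditioning proxy) above cond_floor. Illustrative sketch only; the
    published algorithm uses a spider-web grid and richer criteria."""
    f = lambda x: x * x + 1.0
    fx0 = f(x0)
    lam, x = 0.0 + 0.0j, complex(x0)

    def correct(x, lam):
        for _ in range(20):  # Newton correction back onto H(x, lam) = 0
            x -= (f(x) - (1.0 - lam) * fx0) / (2.0 * x)
        return x

    for _ in range(500):
        if abs(lam - 1.0) < 1e-10:
            break
        h = min(step, abs(lam - 1.0))
        best = None
        for k in range(8):   # ring of candidate step directions
            cand = lam + h * cmath.exp(2j * cmath.pi * k / 8)
            xc = correct(x, cand)
            if abs(2.0 * xc) < cond_floor:   # H_x nearly singular: reject
                continue
            score = abs(1.0 - cand)          # criterion: progress toward 1+0i
            if best is None or score < best[0]:
                best = (score, cand, xc)
        if best is None:                     # no acceptable step (not hit here)
            break
        lam, x = best[1], best[2]
    return correct(x, 1.0 + 0.0j)            # final polish at lam = 1

root = solve_adaptive()                      # a root of x^2 + 1 = 0
```

Starting from the real guess x0 = 2, a purely real continuation stalls where the tracked x passes through zero; in the sketch the agent rejects that ill-conditioned candidate and steps diagonally into the complex plane before returning to 1+0i.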

Finally, in Kalaba and Tesfatsion (1990) a Fortran program NASA (Nonlocal Automated Sensitivity Analysis) is developed for solving the complete differential system in Kalaba and Tesfatsion (1981) that knits together all of these various parts. The NASA program incorporates the FEED algorithm developed in Kalaba, Tesfatsion, and Wang (1983) for the automatic evaluation of higher-order partial derivatives. The NASA program also incorporates the adaptive homotopy developed in Kalaba and Tesfatsion (1991) as the means for obtaining all required initial conditions.

The next section explains where the Fortran source code for the NASA program can be downloaded as free open-source software. All of the work summarized above is explained at some length in the articles listed in the final section.

NOTE: It is with immense sadness that I report that Robert E. Kalaba, a great scholar, mentor, colleague, and friend, died on September 29, 2004.

## Software and Manual Availability

• NASA Program:

The NASA (Nonlocal Automated Sensitivity Analysis) Program (html,99K) is a Fortran program that incorporates automated procedures for tracking the solution branches x(b) for nonlinear parameterized systems F(x,b)=0 along user-designated paths for the parameter vector b; see the articles in the previous section. The NASA program is released here as free open-source software by the developers (Robert Kalaba and Leigh Tesfatsion) under the terms of the Artistic License Agreement.

• NASA Program Manual:

Robert Kalaba and Leigh Tesfatsion (1990), "Nonlocal Automated Sensitivity Analysis" (pdf,1M), Computers and Mathematics with Applications, Vol. 20, No. 2, pp. 53-65.

## Research Publications

### Nonlocal Automated Sensitivity Analysis (NASA): Tracking of Solutions for General Parameterized Nonlinear Systems

• Leigh Tesfatsion (2001), "Nonlocal Sensitivity Analysis with Automatic Differentiation" (pdf,114K), C. A. Floudas and P. M. Pardalos (eds.), Encyclopedia of Optimization, Kluwer Academic Publishers, Volume 4, pp. 92-97.
Abstract: This encyclopedia entry is a summary report on the design and use of the NASA program for the Nonlocal Automated Sensitivity Analysis of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b0,b1].

• Leigh Tesfatsion (1992), "Nonlocal Automated Comparative Static Analysis", Computational Economics 5(4) (November), pp. 313-331. The published article is available from SpringerLink.
Abstract: This paper reviews work on the development of the NASA program for the automated comparative static analysis of parameterized nonlinear systems over parameter intervals. NASA incorporates a fast and efficient algorithm Feed for the automatic evaluation of higher-order partial derivatives, as well as an adaptive homotopy continuation algorithm for obtaining all required initial conditions. Applications are envisioned for fields such as economics where models tend to be complex and closed-form solutions are difficult to obtain.

• Robert E. Kalaba and Leigh Tesfatsion (1990), "Nonlocal Automated Sensitivity Analysis" (pdf,1M), Computers and Mathematics with Applications Volume 20, Issue 2, pp. 53-65. The published article is available from Science Direct.
Abstract: This article presents and illustrates the applicability of the NASA program for the Nonlocal Automated Sensitivity Analysis of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b0,b1]. The NASA program incorporates automated procedures for initialization, derivative evaluation, and the tracking of solution branches x(b) along user-designated paths for the parameter vector b.

• Robert E. Kalaba and Leigh Tesfatsion (1986a), "Nonlocal Sensitivity Analysis, Automatic Derivative Evaluation, and Sequential Nonlinear Estimation" (pdf,1.4M), Computational Statistics and Data Analysis 4(2) (July), pp. 79-81. The published article is available from Science Direct.
Abstract: This article summarizes work by the authors on computational methods for nonlinear systems. Section 2 develops a complete set of ordinary differential equations for generating solutions to parameterized systems of nonlinear equations over parameter intervals of interest. Section 3 presents a simple finite algorithm, FEED, for the exact forward-mode automatic evaluation of higher-order partial derivatives using derivative arrays. Section 4 obtains an exact sequential characterization of the solution to a general nonlinear least-squares estimation problem as the duration of the process increases and additional observations are made.

• Robert E. Kalaba, Leigh Tesfatsion, and Jone-Lin Wang (1981), "Local and Nonlocal Comparative Static Analysis of Economic Systems" (pdf,382K), Applied Mathematics and Computation Vol. 9, pp. 227-234. The published article is available from Science Direct.
Abstract: The complete system of ordinary differential equations developed by Kalaba and Tesfatsion (1981) for tracking solution branches of parameterized nonlinear systems (see the next article) is tested using several illustrative examples. One example is the standard Ramsey optimal growth model, for which analytical solutions can be obtained. For this example, the complete system is used to generate solutions c(rho) and k(rho) for the steady-state per-capita levels for consumption and capital as the time preference parameter rho varies from 0 to 0.50. Accuracy to four decimal places is obtained. This represents a stringent test of the method, since the derivative of k(rho) near rho=0 is on the order of -10^2 whereas the derivative of k(rho) near rho=0.50 is on the order of -10^0.

• Robert E. Kalaba and Leigh Tesfatsion (1981), "Complete Comparative Static Differential Equations" (pdf,753K), Nonlinear Analysis: Theory, Methods, and Applications 5(8) (August), pp. 821-833. The published article is available from Science Direct.
Abstract: This article develops a complete system of ordinary differential equations for tracking solution branches x(b) of a parameterized system of nonlinear equations F(x,b) = 0 along arbitrary user-designated paths for the parameter vector b. The complete system is obtained by augmenting the usual fundamental equation of comparative static analysis,
dx(b)/db = -Fx(x(b),b)^(-1) Fb(x(b),b),
with ordinary differential equations for the Jacobian inverse. The feasibility and accuracy of the method are illustrated by several numerical examples.

Note: This complete differential system constitutes the core of the NASA Program for the Nonlocal Automated Sensitivity Analysis of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b0,b1].

### Tracking of Eigenvalues and Eigenvectors for General Parameterized Matrices

NOTE: All of the eigenvalue/eigenvector tracking algorithms below can be implemented via the NASA Program for the Nonlocal Automated Sensitivity Analysis of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b0,b1].

• Robert E. Kalaba, Karl Spingarn, and Leigh Tesfatsion (1981a), "Variational Equations for the Eigenvalues and Eigenvectors of Nonsymmetric Matrices" (pdf,359K), Journal of Optimization Theory and Applications, Vol. 33, pp. 1-8. The published article is available from SpringerLink.
Abstract: This article develops a complete system of ordinary differential equations for tracking the eigenvalues and the right and left eigenvectors of nonsymmetric parameterized matrices over parameter intervals. A simpler reduced form of the ODE system is then derived for tracking the eigenvalues and eigenvectors of symmetric parameterized matrices. The feasibility and accuracy of the tracking method are illustrated by numerical examples.

• Robert E. Kalaba, Karl Spingarn, and Leigh Tesfatsion (1981b), "Individual Tracking of an Eigenvalue and Eigenvector of a Parameterized Matrix" (pdf,207K), Nonlinear Analysis: Theory, Methods, and Applications Vol. 5(4), pp. 337-340. The published article is available from Science Direct.
Abstract: This article develops a complete system of ordinary differential equations for tracking a single eigenvalue of a parameterized matrix, together with one of its corresponding right or left eigenvectors, over parameter intervals. The feasibility and accuracy of the method are illustrated by numerical examples.

• Robert E. Kalaba, Karl Spingarn, and Leigh Tesfatsion (1980), "A New Differential Equation Method for Finding the Perron Root of a Positive Matrix" (pdf,409K), Applied Mathematics and Computation Vol. 7(3), October, pp. 187-193. The published article is available from Science Direct.
Abstract: This article develops a complete system of ordinary differential equations for tracking the Frobenius-Perron root (largest eigenvalue) of a parameterized matrix, together with a unit-normalized right eigenvector, over parameter intervals. The feasibility and accuracy of the method are illustrated by numerical examples.

### Adaptive Homotopy Continuation

"Analysis does not owe its really significant increases of the last century to any mysterious use of the (imaginary number i) but to the quite natural circumstance that one has infinitely more freedom of mathematical movement if he lets quantities vary in a plane instead of only on a line." --- Leopold Kronecker.

• Robert E. Kalaba and Leigh Tesfatsion (1991), "Solving Nonlinear Equations By Adaptive Homotopy Continuation" (pdf,1MB), Applied Mathematics and Computation Vol. 41, No. 2:Part II (January), pp. 99-115. The published article is also available from Science Direct.
Abstract: This article introduces and constructively illustrates the concept of an adaptive homotopy for solving systems of nonlinear equations. Standard homotopy methods rely on a passive continuation parameter moving from 0 to 1 along the real line and are stymied if the homotopy Jacobian matrix becomes ill-conditioned along this path. In contrast, an adaptive homotopy replaces the passive continuation parameter by a "smart agent" that adaptively makes its way by trial and error from 0+0i to 1+0i in the complex plane C in accordance with certain specified objectives. The homotopy thus adapts to the physical problem at hand rather than requiring the user to reformulate his physical problem to conform to homotopy requirements. The adaptive homotopy algorithm designed and tested in the current study permits the continuation agent to adaptively traverse a "spider-web" grid in C centered about 1+0i in an attempt to achieve two objectives: (a) short continuation path from 0+0i to 1+0i; and (b) avoidance of regions where the homotopy Jacobian matrix becomes ill-conditioned.

Note: The specific adaptive homotopy algorithm developed in this study has been incorporated into the NASA Program for the Nonlocal Automated Sensitivity Analysis of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b0,b1]. Specifically, NASA uses this adaptive homotopy algorithm to generate automatically all needed initial conditions at F(x(b0),b0).

In a subsequent 1996 IEEE-TCS paper (pdf,1.5MB), using results from algebraic geometry, Denise M. Wolf and Seth R. Sanders explain with great clarity (using many concrete analytical and graphical illustrations) why homotopies H(x,lambda)=0 with complex-valued continuation parameters lambda are able to avoid folds and singular points while homotopies H(x,t)=0 with real-valued continuation parameters t cannot. Additionally, they explain the potential of a complex-parameter homotopy H(x,lambda)=0 for finding all solutions of an original system of interest F(x)=0 through a single continuation path for lambda through the complex plane (because real disjoint solution branches of irreducible analytic homotopies are connected surfaces in complex space). However, they do not address the substantial additional advantage of having lambda be an adaptive agent rather than a passive parameter moving along a pre-specified path, so that complex continuation paths avoiding folds and singular points can be adaptively determined in an efficient practical way.

One of the key ideas underlying adaptive homotopy is that the computational tool should adapt to the problem at hand rather than requiring the problem to be adapted to the tool. This same idea underlies the broader field of adaptive computation (or emergent computation) that grew out of work by Stephanie Forrest and other SFI/LANL researchers in the early 1990s.

### Automatic Evaluation of Higher-Order Partial Derivatives

• Leigh Tesfatsion (1991), "Automatic Evaluation of Higher-Order Partial Derivatives for Nonlocal Sensitivity Analysis", pp. 157-165 in A. Griewank and G. Corliss (eds.), Automatic Differentiation of Algorithms: Theory, Implementation, and Application, SIAM, Philadelphia.
Abstract: This proceedings paper surveys work on the FEED (Fast Efficient Evaluation of Derivatives) algorithm originally developed by Kalaba, Tesfatsion, and Wang (1983), with particular reference to the use of FEED for the implementation of nonlocal automated sensitivity techniques; see the articles below. The relationship of FEED to the automatic differentiation algorithms developed by other SIAM Workshop participants is clarified.

• Robert E. Kalaba, Thomas Plum, and Leigh Tesfatsion (1987), "Automation of Nested Matrix and Derivative Operations" (pdf,863K), Applied Mathematics and Computation Vol. 23(3), September, pp. 243-268. The published article is also available from Science Direct.
Abstract: In Kalaba, Tesfatsion, and Wang (1983) (see below), an algorithm was developed for the exact forward-mode automatic evaluation of higher-order partial derivatives of functions of many variables using derivative arrays. This algorithm was supported by a library of "calculus subroutines" for many standard one-variable and two-variable functions. This article extends the FEED library to permit the automatic differentiation of expressions involving nested matrix and derivative operations. The need to differentiate such expressions arose naturally in the course of designing sequential filters for a class of nonlinear tracking problems.

• Robert Kalaba and Leigh Tesfatsion (1986b), "Automatic Differentiation of Functions of Derivatives" (pdf,671), Computers and Mathematics with Applications, Volume 12, Issue 11, Part 1, November, pp. 1091-1103. The published article is available from Science Direct.
Abstract: In Kalaba, Tesfatsion, and Wang (1983) (see below), the FEED algorithm was introduced for the exact forward-mode automatic evaluation of higher-order partial derivatives of functions of many variables using derivative arrays. This study demonstrates that FEED can be applied to a much broader class of functions than originally envisioned: namely, functions defined in terms of the derivatives of other functions. Such functions arise in many applications.

• Robert E. Kalaba, Leigh Tesfatsion, and Jone-Lin Wang (1983), "A Finite Algorithm for the Exact Evaluation of Higher Order Partial Derivatives of Functions of Many Variables" (pdf,514K), Journal of Mathematical Analysis and Applications Vol. 92, pp. 552-563. The published article is available from Science Direct.
Abstract: This article develops an algorithm for the exact forward-mode automatic evaluation of higher order partial derivatives using derivative arrays, now referred to as the FEED (Fast Efficient Evaluation of Derivatives) algorithm. Building on previous work by R. Wengert, the FEED algorithm proceeds by decomposing the evaluation of complicated functions of many variables into a sequence of simpler evaluations of special functions of one or two variables.

Note: A library of FEED calculus subroutines has been incorporated into the NASA program for the Nonlocal Automated Sensitivity Analysis of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b0,b1]. Specifically, NASA uses FEED in order to generate all needed derivative evaluations automatically. 