Adaptive Computation Methods
for Nonlinear Systems
Last Updated: 6 January 2016
 Site Maintained By:

Leigh Tesfatsion
 Professor of Econ, Math, and ECpE
 Department of Economics
 Iowa State University
 Ames, IA 50011-1070
 FAX: (515) 294-0221
 http://www2.econ.iastate.edu/tesfatsi/

tesfatsi AT iastate.edu

Ideally, analysis tools should adapt to the problem at hand rather than requiring the problem to be adapted to the tools.
In a series of studies with Robert Kalaba and other collaborators, gathered together in the
final section,
below, three adaptive computation techniques are developed to facilitate the direct analysis of real-world nonlinear systems in whatever form they actually appear:
 Nonlocal comparative static analysis for tracking solutions of parameterized nonlinear systems along parameter paths;
 Automatic higher-order partial derivative evaluation through forward-mode derivative arrays;
 Adaptive homotopy continuation methods with the continuation parameter modeled as an intelligent agent moving from 0+0i to 1+0i through the complex plane along an adaptively chosen path.

In economic theory we are often interested in understanding how the solution
vector x(b) of a parameterized system of equations F(x,b)=0 varies as the
parameter vector b ranges over some interval of interest. The well-known
fundamental differential equation of comparative statics,
dx(b)/db = -F_{x}(x(b),b)^{-1}F_{b}(x(b),b),
can be used to determine the nature of the change in the solution vector x(b)
at a particular value b_{0} for the parameter vector b. However,
this fundamental differential equation is analytically incomplete.
 More precisely, the integration of this fundamental differential equation
from initial conditions over a parameter interval would typically
require the additional algebraic determination of the Jacobian inverse at
each step in the integration process. For this reason many economists
interpret the fundamental differential equation as a purely algebraic
relation that can be used to qualitatively sign the change in the solution at
the particular parameter value. Numerical integration using specific
functional forms is generally not attempted.
 In Kalaba and Tesfatsion (1981) the fundamental differential
equation is completed by incorporating ordinary differential equations for
the Jacobian inverse. As illustrated for optimal growth and taxation
problems in Kalaba, Tesfatsion, and Wang (1981), this analytically
complete system of ordinary differential equations can be used to track
solutions for systems of parameterized nonlinear equations over parameter
intervals. Thus, this complete differential system facilitates the
nonlocal comparative static analysis of economic systems.
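To fix ideas, here is a minimal scalar sketch of such a complete differential system, written in Python rather than Fortran and not drawn from the published code. For the hypothetical system F(x,b) = x^3 + x - b = 0, the reciprocal m = 1/F_x is integrated alongside x, so no algebraic inversion is needed inside the integration loop:

```python
# Scalar sketch of a "complete" comparative-static ODE system for the
# hypothetical system F(x, b) = x**3 + x - b = 0, tracked from b = 0 to b = 2.
# The solution x(b) and m(b) = 1/F_x are integrated together, so no
# algebraic inversion is performed at any step.

def rhs(x, m):
    # F_b = -1, F_x = 3x^2 + 1, F_xx = 6x, F_xb = 0
    dx = -m * (-1.0)               # dx/db = -F_x^{-1} F_b
    dm = -m * (6.0 * x * dx) * m   # dm/db = -m (dF_x/db) m
    return dx, dm

def track(b0=0.0, b1=2.0, steps=2000):
    x, m = 0.0, 1.0                # solution and 1/F_x at b = 0
    h = (b1 - b0) / steps
    for _ in range(steps):         # classical RK4 step for the pair (x, m)
        k1 = rhs(x, m)
        k2 = rhs(x + 0.5*h*k1[0], m + 0.5*h*k1[1])
        k3 = rhs(x + 0.5*h*k2[0], m + 0.5*h*k2[1])
        k4 = rhs(x + h*k3[0], m + h*k3[1])
        x += h * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6.0
        m += h * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6.0
    return x

print(track())  # ≈ 1.0, since x = 1 solves x^3 + x = 2
```

Here the relation dm/db = -m (dF_x/db) m is the scalar analogue of the inverse-Jacobian differential equations used to complete the system in the matrix case.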
 For equilibrium and optimal growth studies, it is often important to be
able to determine the stability and definiteness properties of parameterized
Jacobian matrices. These properties can be determined from a knowledge of
the eigenvalues of the Jacobian matrix. The complete differential system
developed in Kalaba and Tesfatsion (1981) for the nonlocal sensitivity
analysis of general nonlinear systems turns out to be useful, as well, for
the sensitivity study of parameterized matrices and their associated systems
of eigenvalues and eigenvectors.

Specifically, in Kalaba, Spingarn, and Tesfatsion (1981a) it is shown
how the eigenvalues and the right and left eigenvectors of a general
parameterized matrix can be tracked over parameter intervals by integrating
an analytically complete differential system from initial conditions. In
Kalaba, Spingarn, and Tesfatsion (1981b) an analogous complete
differential system is developed for tracking a single eigenvalue of a
parameterized matrix together with one of its corresponding right or left
eigenvectors. Finally, in Kalaba, Spingarn, and Tesfatsion (1980) it
is shown how the latter system can be used, in particular, to obtain the
largest eigenvalue (Frobenius-Perron root) for any positive square matrix, an
important problem arising (e.g.) in the study of Leontief input-output
systems.
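As a toy illustration of this kind of eigenvalue tracking (a Python sketch under simplifying assumptions, not the published algorithms), the code below follows the largest eigenvalue and a unit right eigenvector of the symmetric matrix A(b) = [[2+b, 1], [1, 2]] from b = 0 to b = 1 by treating the eigenpair as the solution u = (y1, y2, lambda) of a parameterized system G(u,b) = 0 and integrating du/db = -G_u^{-1} G_b. For brevity the small Jacobian system is solved algebraically at each step rather than carried along as an inverse ODE:

```python
def solve3(M, r):
    # Solve a 3x3 linear system by Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]; r = r[:]
    for c in range(3):
        p = max(range(c, 3), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]; r[c], r[p] = r[p], r[c]
        for i in range(c + 1, 3):
            f = M[i][c] / M[c][c]
            for j in range(c, 3):
                M[i][j] -= f * M[c][j]
            r[i] -= f * r[c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (r[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def rhs(u, b):
    # G = (A(b)y - lam*y, y^T y - 1) = 0 defines the eigenpair implicitly.
    y1, y2, lam = u
    J = [[2.0 + b - lam, 1.0, -y1],   # Jacobian G_u
         [1.0, 2.0 - lam, -y2],
         [2.0 * y1, 2.0 * y2, 0.0]]
    Gb = [y1, 0.0, 0.0]               # G_b: only dA/db = [[1,0],[0,0]] enters
    return solve3(J, [-g for g in Gb])  # du/db = -G_u^{-1} G_b

def track_eigen(steps=1000):
    u = [2**-0.5, 2**-0.5, 3.0]       # unit eigenvector and eigenvalue at b = 0
    h, b = 1.0 / steps, 0.0
    for _ in range(steps):            # RK4 in the parameter b
        k1 = rhs(u, b)
        k2 = rhs([u[i] + 0.5*h*k1[i] for i in range(3)], b + 0.5*h)
        k3 = rhs([u[i] + 0.5*h*k2[i] for i in range(3)], b + 0.5*h)
        k4 = rhs([u[i] + h*k3[i] for i in range(3)], b + h)
        u = [u[i] + h*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6.0 for i in range(3)]
        b += h
    return u[2]

print(track_eigen())  # largest eigenvalue of [[3,1],[1,2]]: (5+sqrt(5))/2 ≈ 3.618
```

At b = 1 the tracked value can be checked against the closed-form eigenvalue (5 + sqrt(5))/2 of [[3,1],[1,2]].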

In the course of implementing the complete differential system developed in
Kalaba and Tesfatsion (1981) in various applications, it became clear that an
automatic method was needed for evaluating higher-order partial derivatives in order to
make the approach practical. In Kalaba, Tesfatsion, and Wang (1983)
an algorithm FEED (Fast Efficient Evaluation of Derivatives) is
developed for the automatic evaluation of higher-order partial derivatives
exact up to roundoff and truncation error. FEED overcomes several key
difficulties with previously developed automatic differentiation techniques
by using forward-mode evaluation based on derivative arrays. Further
potentially useful extensions of FEED are developed in Kalaba and
Tesfatsion (1986b) and in Kalaba, Plum, and Tesfatsion (1987).
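The derivative-array idea behind FEED can be conveyed in a few lines (an illustrative Python reconstruction, not the published FEED library): each quantity carries an array [f, f', f''], and each elementary operation propagates all entries at once, so higher-order derivatives come out exact up to roundoff:

```python
import math

# FEED-style sketch: every quantity is a derivative array [f, f', f'']
# with respect to one variable, and elementary operations propagate the
# whole array, giving exact higher-order derivatives up to roundoff.

def var(x):   # independent variable: value, first, second derivative
    return [x, 1.0, 0.0]

def mul(f, g):  # Leibniz rule on derivative arrays
    return [f[0]*g[0],
            f[1]*g[0] + f[0]*g[1],
            f[2]*g[0] + 2.0*f[1]*g[1] + f[0]*g[2]]

def exp(f):     # chain rule for exp on derivative arrays
    e = math.exp(f[0])
    return [e, e*f[1], e*(f[1]*f[1] + f[2])]

x = var(1.0)
f = mul(mul(x, x), exp(x))  # f(x) = x^2 * e^x evaluated at x = 1
print(f)  # [e, 3e, 7e] ≈ [2.71828, 8.15485, 19.02797]
```

Each new elementary function requires only one such "calculus subroutine"; longer arrays extend the scheme to third and higher derivatives.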
 The initial conditions needed to integrate the complete differential
system developed in Kalaba and Tesfatsion (1981) for a parameterized nonlinear
system F(x,b)=0 from any desired starting point b_{0} for the parameter vector b are as follows: the solution vector
x(b_{0}) at b_{0}, together with evaluations for the adjoint and determinant of the Jacobian matrix F_{x}(x(b),b) at
(x(b_{0}),b_{0}). For many nonlinear problems, finding these needed initial values is a difficult matter in and of itself.
 In Kalaba and Tesfatsion (1991) an adaptive homotopy is
developed for solving nonlinear systems of equations F(x)=0 that adapts to
whatever physical problem is at hand. The key new idea is the replacement of
the standard homotopy continuation parameter moving from 0 to 1 along the
real line with a "smart agent" able to adaptively determine a solution path
from 0+0i to 1+0i in the complex plane using multicriteria optimization.
Specifically, the agent sequentially traces out a path from 0+0i to 1+0i in
accordance with the following two criteria: (a) efficiency (take as
few steps as possible); and (b) accuracy (avoid regions where
calculations become ill-conditioned). It is therefore not necessary for the
user of the homotopy to know in advance the location of folds and
bifurcation points; the continuation agent applies local step-by-step testing
to efficiently track around such regions in the complex plane.
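A toy sketch may help convey the flavor (hypothetical Python code; the published algorithm instead uses a spiderweb grid in C and multicriteria optimization). To solve x^2 + 1 = 0, which has no real roots, the Newton homotopy H(x,lambda) = f(x) - (1-lambda)f(x0) hits a singular point (H_x = 0) at lambda = 1/2 on the real line; a simple greedy agent sidesteps into the complex plane whenever the predicted step looks ill-conditioned:

```python
# Toy adaptive-homotopy sketch for f(x) = x^2 + 1 = 0 via the Newton
# homotopy H(x, lam) = f(x) - (1 - lam) * f(x0), continued from 0+0i to 1+0i.

def f(x):  return x * x + 1.0
def fp(x): return 2.0 * x

def newton_correct(x, lam, r0, iters=20):
    # Newton corrector in x on H(x, lam) = f(x) - (1 - lam) * r0 = 0
    for _ in range(iters):
        x = x - (f(x) - (1.0 - lam) * r0) / fp(x)
    return x

def adaptive_homotopy(x0=1.0 + 0j, step=0.05, tol=0.3):
    r0 = f(x0)
    lam, x = 0j, x0
    for _ in range(1000):
        if abs(lam - 1.0) < 1e-12:
            break
        d = 1.0 - lam                              # direction to target 1+0i
        move = step * d / abs(d) if abs(d) > step else d
        xc = newton_correct(x, lam + move, r0)
        if abs(fp(xc)) < tol and abs(d) > step:
            # H_x nearly singular ahead: sidestep into the complex plane
            move = step * 1j
            xc = newton_correct(x, lam + move, r0)
        lam, x = lam + move, xc
    return x

root = adaptive_homotopy()
print(root)  # a complex root of x^2 + 1, close to +1j or -1j
```

A passive real-line continuation stalls at lambda = 1/2, where the tracked solution collapses to x = 0 and H_x vanishes; the complex detour passes around that singularity.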

Finally, in Kalaba and Tesfatsion (1990) a Fortran program NASA
(Nonlocal Automated Sensitivity Analysis) is developed for solving the
complete differential system in Kalaba and Tesfatsion (1981) that knits
together all of these various parts. The NASA program incorporates the FEED
algorithm developed in Kalaba, Tesfatsion, and Wang (1983) for the automatic
evaluation of higherorder partial derivatives. The NASA program also
incorporates the adaptive homotopy developed in Kalaba and Tesfatsion (1991)
as the means for obtaining all required initial conditions.
 The
next section
explains where the Fortran source code for the NASA program can be downloaded as free open-source software. All of the work summarized above is explained at some length in the
articles listed in the
final section.
 NOTE: It is with immense sadness that I report that Robert E. Kalaba, a
great scholar, mentor, colleague, and friend, died on September 29, 2004.
Software and Manual Availability
 NASA Program:
 The NASA (Nonlocal Automated Sensitivity Analysis) Program
(html,99K)
is a Fortran program that incorporates automated procedures for tracking the
solution branches x(b) for nonlinear parameterized systems F(x,b)=0 along
user-designated paths for the parameter vector b; see the articles in the
previous section. The NASA program is released here as free open-source software by the
developers (Robert Kalaba and Leigh Tesfatsion) under the terms of the
Artistic License Agreement.
 NASA Program Manual:
 Robert Kalaba and Leigh Tesfatsion (1990),
"Nonlocal Automated Sensitivity Analysis"
(pdf,1M),
Computers and Mathematics with Applications, Vol. 20, No. 2, pp. 53-65.
Research Publications
Nonlocal Automated Sensitivity Analysis (NASA):
Tracking of Solutions for General Parameterized Nonlinear Systems
 Leigh Tesfatsion (2001), "Nonlocal Sensitivity Analysis with Automatic Differentiation"
[
(ps,249K),
(pdf,114K)],
C. A. Floudas and P. M. Pardalos (eds.), Encyclopedia of Optimization,
Kluwer Academic Publishers, Volume 4, pp. 92-97
 Abstract: This encyclopedia entry is a summary report
on the design and use of the
NASA program
for the Nonlocal Automated Sensitivity Analysis
of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b_{0},b_{1}].
 Leigh Tesfatsion (1992), "Nonlocal Automated Comparative Static
Analysis", Computational
Economics 5(4) (November), pp. 313-331. The published article is
available from
SpringerLink.
 Abstract: This paper reviews work on the development of the
NASA program
for the automated comparative static analysis of parameterized nonlinear systems over
parameter intervals. NASA
incorporates a fast and efficient algorithm FEED for the automatic
evaluation of higher-order partial derivatives, as well as an adaptive
homotopy continuation algorithm for obtaining all required initial
conditions. Applications are envisioned for fields such as economics where
models tend to be complex and closed-form solutions are difficult to obtain.
 Robert E. Kalaba and Leigh Tesfatsion (1990), "Nonlocal Automated Sensitivity Analysis"
(pdf,1M),
Computers and Mathematics with Applications Volume 20, Issue 2, pp. 53-65. The published article
is available from
Science Direct.
 Abstract: This article presents and illustrates the
applicability of the
NASA program
for the Nonlocal Automated Sensitivity Analysis
of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b_{0},b_{1}]. The NASA program incorporates automated procedures for initialization, derivative evaluation, and the
tracking of solution branches x(b) along userdesignated paths for the parameter vector b.
 Robert E. Kalaba and Leigh Tesfatsion (1986a), "Nonlocal Sensitivity Analysis,
Automatic Derivative Evaluation, and Sequential Nonlinear Estimation"
(pdf,1.4M),
Computational
Statistics and Data Analysis 4(2) (July), pp. 79-81. The published
article is available from
Science Direct.
 Abstract: This article summarizes work by the authors on
computational methods for nonlinear systems. Section 2 develops a complete
set of ordinary differential equations for generating solutions to
parameterized systems of nonlinear equations over parameter intervals of
interest. Section 3 presents a simple finite algorithm, FEED, for the
exact forward-mode automatic evaluation of higher-order partial derivatives
using derivative arrays. Section 4 obtains an exact sequential
characterization of the solution to a general nonlinear least-squares
estimation problem as the duration of the process increases and additional
observations are made.
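For the linear special case, the sequential idea of Section 4 can be sketched as a standard recursive least-squares update (an illustrative Python sketch, not the article's exact nonlinear characterization): the estimate and a covariance-like matrix are revised in closed form as each new observation arrives, with no re-solution of the full normal equations:

```python
# Recursive least-squares sketch: the estimate theta and "covariance" P are
# updated exactly per observation via the gain K = P a / (1 + a^T P a).

def rls(observations):
    theta = [0.0, 0.0]                    # parameter estimate
    P = [[1e6, 0.0], [0.0, 1e6]]          # large diffuse initial "covariance"
    for a, y in observations:             # a = regressor row, y = new datum
        Pa = [P[0][0]*a[0] + P[0][1]*a[1], P[1][0]*a[0] + P[1][1]*a[1]]
        denom = 1.0 + a[0]*Pa[0] + a[1]*Pa[1]
        K = [Pa[0] / denom, Pa[1] / denom]          # gain vector
        err = y - (a[0]*theta[0] + a[1]*theta[1])   # innovation
        theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
        aP = [a[0]*P[0][0] + a[1]*P[1][0], a[0]*P[0][1] + a[1]*P[1][1]]
        P = [[P[0][0] - K[0]*aP[0], P[0][1] - K[0]*aP[1]],   # P <- P - K a^T P
             [P[1][0] - K[1]*aP[0], P[1][1] - K[1]*aP[1]]]
    return theta

# exact data generated from the line y = 2 + 3t
data = [([1.0, t], 2.0 + 3.0*t) for t in (0.0, 1.0, 2.0, 3.0)]
print(rls(data))  # ≈ [2.0, 3.0]
```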
 Robert E. Kalaba, Leigh Tesfatsion, and Jone-Lin Wang (1981),
"Local and Nonlocal Comparative Static Analysis of Economic Systems"
(pdf,382K),
Applied Mathematics
and Computation Vol. 9, pp. 227-234. The published article is
available from
Science Direct.
 Abstract: The complete system of ordinary differential
equations developed by Kalaba and Tesfatsion (1981) for tracking
solution branches of parameterized nonlinear systems (see the next article)
is tested using several illustrative examples. One example is the standard
Ramsey optimal growth model, for which analytical solutions can be obtained.
For this example, the complete system is used to generate solutions c(rho)
and k(rho) for the steady-state per-capita levels for consumption and capital
as the time preference parameter rho varies from 0 to 0.50. Accuracy to four
decimal places is obtained. This represents a stringent test of the method,
since the derivative of k(rho) near rho=0 is on the order of 10^{2}
whereas the derivative of k(rho) near rho=0.50 is on the order of
10^{0}.
 Robert E. Kalaba and Leigh Tesfatsion (1981),
"Complete Comparative Static Differential Equations"
(pdf,753K),
Nonlinear Analysis: Theory, Methods, and Applications 5(8)
(August), pp. 821-833. The published article is available from
Science Direct.
 Abstract: This article develops a complete system of
ordinary differential equations for tracking solution branches x(b) of a
parameterized system of nonlinear equations F(x,b) = 0 along arbitrary
user-designated paths for the parameter vector b. The complete system is
obtained by augmenting the usual fundamental equation of comparative static
analysis,
dx(b)/db = -F_{x}(x(b),b)^{-1}F_{b}(x(b),b),
with ordinary differential equations for the Jacobian inverse. The
feasibility and accuracy of the method are illustrated by several
numerical examples.
 Note: This complete differential system constitutes the core
of the
NASA Program
for the Nonlocal Automated Sensitivity Analysis
of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b_{0},b_{1}].
Tracking of Eigenvalues and Eigenvectors for General Parameterized Matrices
NOTE: All of the eigenvalue/eigenvector tracking algorithms below can be implemented via the
NASA Program
for the Nonlocal Automated Sensitivity Analysis
of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b_{0},b_{1}].
 Robert E. Kalaba, Karl Spingarn, and Leigh Tesfatsion (1981a),
"Variational Equations for the Eigenvalues and Eigenvectors of
Nonsymmetric Matrices"
(pdf,359K),
Journal of
Optimization Theory and Applications, Vol. 33, pp. 1-8.
The published article is available from
SpringerLink.
 Abstract: This article develops a complete system of
ordinary differential equations for tracking the eigenvalues and the
right and left eigenvectors of nonsymmetric parameterized matrices over parameter
intervals. A simpler reduced form of the ODE system is then derived for tracking the eigenvalues
and eigenvectors of symmetric parameterized matrices. The feasibility and accuracy of the
tracking method are illustrated by numerical examples.
 Robert E. Kalaba, Karl Spingarn, and Leigh Tesfatsion (1981b), "Individual
Tracking of an Eigenvalue and Eigenvector of a Parameterized Matrix"
(pdf,207K),
Nonlinear
Analysis: Theory, Methods, and Applications Vol. 5(4), pp. 337-340.
The published article is available from
Science Direct.
 Abstract: This article develops a complete system of
ordinary differential equations for tracking a single eigenvalue of a
parameterized matrix, together with one of its corresponding right or left
eigenvectors, over parameter intervals. The feasibility and accuracy of the
method are illustrated by numerical examples.
 Robert E. Kalaba, Karl Spingarn, and Leigh Tesfatsion (1980),
"A New Differential Equation
Method for Finding the Perron Root of a Positive Matrix"
(pdf,409K),
Applied Mathematics
and Computation Vol. 7(3), October, pp. 187-193. The published article
is available from
Science Direct.
 Abstract: This article develops a complete system of
ordinary differential equations for tracking the Frobenius-Perron root
(largest eigenvalue) of a parameterized matrix, together with a
unit-normalized right eigenvector, over parameter intervals. The feasibility
and accuracy of the method are illustrated by numerical examples.
Adaptive Homotopy Continuation

"Analysis does not owe its really significant increases of the last century to any mysterious use of the (imaginary number i) but to the quite natural circumstance that one has infinitely more freedom of mathematical movement if he lets quantities vary in a plane instead of only on a line." -- Leopold Kronecker
 Robert E. Kalaba and Leigh Tesfatsion (1991), "Solving
Nonlinear Equations By Adaptive Homotopy
Continuation"
(pdf,1MB),
Applied Mathematics and
Computation Vol. 41, No. 2:Part II (January), pp. 99115.
The published article is also available from
Science Direct.
 Abstract: This article introduces and constructively
illustrates the concept of an adaptive homotopy for solving systems of
nonlinear equations. Standard homotopy methods rely on a passive
continuation parameter moving from 0 to 1 along the real line and are stymied
if the homotopy Jacobian matrix becomes ill-conditioned along this path. In
contrast, an adaptive homotopy replaces the passive continuation parameter by
a "smart agent" that adaptively makes its way by trial and error from 0+0i to
1+0i in the complex plane C in accordance with certain specified objectives.
The homotopy thus adapts to the physical problem at hand rather than
requiring the user to reformulate his physical problem to conform to homotopy
requirements. The adaptive homotopy algorithm designed and tested in the
current study permits the continuation agent to adaptively traverse a "spiderweb" grid in C
centered about 1+0i in an attempt to achieve two objectives:
(a) short continuation path from 0+0i to 1+0i; and
(b) avoidance of regions where the homotopy Jacobian matrix becomes
ill-conditioned.
 Note: The specific adaptive homotopy algorithm developed in
this study has been incorporated into the
NASA Program
for the Nonlocal Automated Sensitivity Analysis
of parameterized nonlinear systems F(x,b)=0 over parameter intervals [b_{0},b_{1}]. Specifically,
NASA uses this adaptive homotopy algorithm to generate automatically all needed initial conditions at F(x(b_{0}),b_{0}).
 Additional Remarks on Adaptive Homotopy:
 In a subsequent 1996 IEEE-TCS paper
(pdf,1.5MB),
using results from algebraic geometry, Denise M. Wolf and Seth R. Sanders
explain with great clarity (using many concrete analytical and graphical
illustrations) why homotopies H(x,lambda)=0 with complex-valued continuation parameters lambda
are able to avoid folds and singular points while homotopies H(x,t)=0 with real-valued
continuation parameters t cannot. Additionally,
they explain the potential of a complexparameter homotopy H(x,lambda)=0 for
finding all solutions of an original system of interest F(x)=0 through
a single continuation path for lambda through the complex plane (because real
disjoint solution branches of irreducible analytic homotopies are connected
surfaces in complex space). However, they do not address the substantial
additional advantage of having lambda be an adaptive agent rather than a
passive parameter moving along a prespecified path, so that complex
continuation paths avoiding folds and singular points can be adaptively
determined in an efficient practical way.
 One of the key ideas underlying adaptive homotopy is that the
computational tool should adapt to the problem at hand rather than requiring
the problem to be adapted to the tool. This same idea underlies the
broader field of adaptive computation (or emergent computation)
that grew out of work by
Stephanie Forrest
and other SFI/LANL researchers in the early 1990s.
Automatic Evaluation of Higher-Order Partial Derivatives
 Leigh Tesfatsion (1991), "Automatic Evaluation of
Higher-Order Partial Derivatives for Nonlocal
Sensitivity Analysis", pp. 157-165 in A. Griewank and G. Corliss (eds.),
Automatic Differentiation of Algorithms: Theory, Implementation, and
Application, SIAM, Philadelphia.
 Abstract: This proceedings paper surveys work
on the FEED (Fast Efficient Evaluation of Derivatives) algorithm originally
developed by Kalaba, Tesfatsion, and Wang (1983), with particular reference
to the use of FEED for the implementation of nonlocal automated sensitivity
techniques; see the articles below. The relationship of FEED to the automatic
differentiation algorithms developed by other SIAM Workshop participants is
clarified.
 Robert E. Kalaba, Thomas Plum, and Leigh Tesfatsion (1987),
"Automation of Nested Matrix and Derivative Operations"
(pdf,863K),
Applied Mathematics and Computation
Vol. 23(3), September, pp. 243-268.
The published article is also available from
Science Direct.
 Abstract: In Kalaba, Tesfatsion, and Wang (1983; see
below) an algorithm was developed for the exact forward-mode automatic
evaluation of higher-order partial derivatives of functions of many variables
using derivative arrays. This algorithm was supported by a library of
"calculus subroutines" for many standard one-variable and two-variable
functions. This article extends the FEED library to permit the automatic
differentiation of expressions involving nested matrix and derivative
operations. The need to differentiate such expressions arose naturally in
the course of designing sequential filters for a class of nonlinear tracking
problems.
 Robert Kalaba and Leigh Tesfatsion (1986b), "Automatic Differentiation of
Functions of Derivatives"
(pdf,671K),
Computers and Mathematics with Applications, Volume
12, Issue 11, Part 1, November, pp. 1091-1103. The published article is
available from
Science Direct.
 Abstract: In Kalaba, Tesfatsion, and Wang (1983; see below) the FEED algorithm was introduced for the exact forward-mode
automatic evaluation of higher-order partial derivatives of functions of many
variables using derivative arrays. This study demonstrates that FEED can be
applied to a much broader class of functions than originally envisioned:
namely, functions defined in terms of the derivatives of other functions.
Such functions arise in many applications.
 Robert E. Kalaba, Leigh Tesfatsion, and Jone-Lin Wang (1983),
"A Finite Algorithm for the Exact Evaluation of Higher Order Partial
Derivatives of Functions of Many Variables"
(pdf,514K),
Journal of Mathematical Analysis
and Applications 92 (1983), pp. 552-563. The published article is available from
Science Direct.
 Abstract: This article develops an algorithm for the
exact forwardmode automatic evaluation of higher order partial derivatives
using derivative arrays, now referred to as the FEED (Fast Efficient
Evaluation of Derivatives) algorithm. Building on previous work by R.
Wengert, the FEED algorithm proceeds by decomposing the evaluation of
complicated functions of many variables into a sequence of simpler
evaluations of special functions of one or two variables.
 Note: A library of FEED calculus subroutines has been
incorporated into the
NASA program
for the Nonlocal Automated Sensitivity Analysis
of parameterized nonlinear systems F(x,b)=0 over parameter
intervals [b_{0},b_{1}]. Specifically, NASA uses
FEED in order to generate all needed derivative evaluations automatically.
Copyright © Leigh Tesfatsion. All Rights Reserved.