Introductory Notes on
Complex Adaptive Systems and
Agent-Based Computational Economics

Last Updated: 16 January 2008

Site Maintained By:
Professor Leigh Tesfatsion
tesfatsi AT iastate.edu

Syllabus for Econ 308

Basic References:

* David F. Batten, Chapter 1: "Chance and Necessity" (pdf preprint, no figures, 247K), in Discovering Artificial Economics: How Agents Learn and Economies Evolve, Westview Press, Boulder, Colorado, 2000.
Note: Unfortunately this book is now out of print. However, if you are willing and able to handle a rather large download, the entire Batten book (figures included) can be accessed here (pdf, 17MB).

Thomas Schelling, Introduction, Chapter 1 (pages 9-43) from Micromotives and Macrobehavior, W. W. Norton and Company, New York, 1978.

Nathan Winslow, "Introduction to Self-Organized Criticality and Earthquakes" (html,12 pages), discussion paper, Department of Geological Sciences, University of Michigan, 1997. ON-LINE

Key Issues

**IMPORTANT POINT:** Agent-based Computational Economics (ACE) is not dependent on the existence of power laws, punctuated equilibrium, increasing returns, or any other particular feature discussed below that researchers have hypothesized might characterize complex adaptive systems in general or economic processes in particular. Rather, as will be seen in Part I.B of the course, ACE is a methodology that can be used to carry out controlled replicable experiments to test such hypotheses.

1. Is economics harder than physics? (Batten, pp. 2-9)

Physics is not as lawful as it appears -- physical reality is observer created. Doubts about the existence of a unique, observer-independent reality. (pp. 2-3)

LT NOTE: Andy Clark explicitly refuses to push his comprehensive conception of the mind to this limit, which some have argued is the logical limit of his theory.

Many physicists now agree that many fundamental processes shaping our natural world are stochastic and irreversible. Physics is becoming more historical and generative. (p. 4)

But unlike physics, economics has hardly changed at all...its central dogma still revolves around stable equilibrium principles. Students of economics are taught to believe that prices will converge to a level where supply equals demand much as water, flowing between two containers, finally comes to rest at a common level. More markets/agents = more tanks of water connected together. In physics, this kind of treatment is referred to as "mean field approximation." Mean field theories do not work well for systems that are subject to diversity and change. (pp. 4-5)

"The point of departure for this book, in fact, is that our economic world is heterogeneous and dynamic, not homogeneous and static. It is full of pattern and process. Development unfolds..." (p. 7)

"Our world is pluralistic because two "strange bedfellows" are at work together: chance and necessity." (p. 8)

LT NOTE: I would say that the world is pluralistic because three "strange bedfellows" have worked together in intricate combination over time: chance, necessity, and design. That is, human beings engage in deliberate attempts to design aspects of their external environments. They are not purely subject to the whims of chance and necessity.

"The interesting thing is that seemingly simple interactions between individual agents can accumulate to a critical level, precipitating unexpected change. What's even more surprising is that some of this change can produce patterns displaying impressive order." (p. 9)

2. Sand Piles and Self-Organized Criticality (Batten, pp. 10-12, 19-22)

Batten uses Per Bak's sand pile model to illustrate the difference between weakly and strongly interactive systems, and to motivate the idea of "self-organized criticality" (SOC).

Batten's discussion is too terse to provide a clear understanding of this interesting (and controversial) model. Nathan Winslow (Geological Sciences, University of Michigan) provides a clearer discussion of the SOC concept in general, and Per Bak's sand pile model in particular, in a relatively nontechnical 1997 paper titled Introduction to Self-Organized Criticality and Earthquakes. The Winslow paper is recommended for those wishing to pursue this topic further. The brief description, below, relies heavily on Winslow's paper.

Bak's conception of sand piles as systems exhibiting SOC can be intuitively explained as follows. When you first start building a sand pile on a tabletop of finite size, the system is weakly interactive. Sand grains drizzled from above onto the center of the sand pile have little effect on sand grains toward the edges. However, as you keep drizzling sand grains onto the center, a small number at a time, eventually the slope of the sand pile "self organizes" to a critical state where breakdowns of all different sizes are possible in response to further drizzlings of sand grains and the sand pile cannot grow any larger in a sustainable way. Bak refers to this critical state as a state of self-organized criticality (SOC), since the sand grains on the surface of the sand pile have self-organized to a point where they are just barely stable.

What does it mean to say that "breakdowns of all different sizes" can happen at the SOC state?

Starting in this SOC state, the addition of one more grain can result in an "avalanche" or "sand slide," i.e., a cascade of sand down the edges of the sand pile and (possibly) off the edges of the table. The size of this avalanche can range from one grain to catastrophic collapses involving large portions of the sand pile. The size distribution of these avalanches follows a power law over any specified period of time T. That is, the frequency of a given size of avalanche is inversely proportional to some power of its size, so that big avalanches are rare and small avalanches are frequent. For example, over 24 hours you might observe one avalanche involving 1000 sand grains, 10 avalanches involving 100 sand grains, and 100 avalanches involving 10 sand grains. This is consistent with a power law having form

(*) N = K C^(-s) = K/C^s

where N = number of avalanches, K = 1000, C = number of sand grains involved in the avalanche, and s = 1. (See the Glossary of Basic Concepts at the end of these notes for a more detailed discussion of power laws.)
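As a quick sanity check, the illustrative avalanche counts above do satisfy the power law (*) with K = 1000 and s = 1. A few lines of Python (using the hypothetical numbers from the example) confirm this:

```python
# Illustrative avalanche data from the example: size -> observed count.
K, s = 1000, 1
observations = {1000: 1, 100: 10, 10: 100}

for C, N in observations.items():
    # Power law (*): N = K * C^(-s) = K / C^s
    assert N == K / C ** s
```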

At the SOC state, then, the sand grains at the center must somehow be capable of transmitting disturbances to sand grains at the edges, implying that the system has become strongly interactive. The dynamics of the sand pile thus transit from being purely local to being global in nature as more and more grains of sand are added to the sand pile. Batten refers to this phenomenon as "emergent dynamics."

Winslow (1997) describes how Bak's conception of sand pile dynamics can be approximated on a computer. A sand pile on a tabletop can be modelled as a two-dimensional "cellular automaton" (checker-board grid) in which each cell (checker-board square) keeps numerical track of the "average slope (gradient)" G of the sand pile in that cell as successive sand grains are added to the sand pile. Starting from some distribution of G values across the entire automaton (e.g., all G values set to 0), a cell is initially chosen at random and its G value is increased by one. If the resulting G value exceeds some critical value (user-specified to be greater than or equal to 3), then this value of G is decreased by 4 and the values of G in the north, south, east, and west neighboring cells of this cell are each increased by 1. This process captures in stylized fashion the way in which the slope of a sand pile may become too steep at some point, causing sand grains to roll down to nearby points. If this redistribution of G values results in a G value in a neighboring cell that exceeds its critical value, then another redistribution occurs. Otherwise, another cell is chosen at random, its G value is increased by 1, and the process repeats. Winslow shows (Figures 2 and 3) that a log-log plot of the avalanche size C versus the frequency of occurrence N of avalanches of size C obeys a power law distribution having form (*) above, where C is the number of cells whose G values are changed as a result of the avalanche.

On the other hand, Winslow (1997) also discusses the difficulties that experimenters have had in trying to get actual sand piles to behave in the idealized way captured in Per Bak's theory and implemented through simple computer models. Winslow cites interesting attempts by Nagel (1992) and Bretz (1992) to conduct experiments with real sand piles. These researchers were unable to obtain SOC results with actual sand piles unless the experimental conditions were rather delicately tuned, leading Winslow to question whether actual sand piles can legitimately be said to have self-organizing critical states even when critical slope values are found. The micro-level interactions among sand grains apparently involve effects not considered in Bak's idealized macro sand pile model, effects that -- in the absence of suitable controls -- can prevent the appearance of avalanches obeying a power law distribution.

For the purposes of our course, the bottom-line question that Batten raises in Chapter 1 (and takes up again in Chapter 7) regarding SOC and power law distributions is whether these ideas are useful conceptions for economics. As noted above, the ACE methodology is entirely independent of any particular conception of agent interactions, and most certainly does not rely on the existence of SOC/power laws in economic systems. The important point is that a number of economists (e.g., Rob Axtell of the Brookings Institution and Paul Krugman of MIT) are currently exploring the power law characteristics of various economic processes (e.g., firm formation, urban growth), and ACE provides a way to test their power law claims through controlled experiments.

3. Thomas Schelling's City Segregation Model (Batten, pp. 12-19)

Schelling's famous city segregation model (also sometimes referred to as a "tipping model") illustrates how a highly integrated city can rapidly shift to being highly segregated in response to a local disturbance even when everyone is fairly tolerant regarding their type of neighbors.

In Schelling's city segregation model there are two classes of agents. The agents live in a two-dimensional square "chessboard" city consisting of sixty-four squares, to be interpreted as a symmetrical grid of house locations. Each agent cares about the class of his immediate neighbors, i.e., the occupants of the abutting squares of the chessboard. Each agent has a maximum of eight possible neighbors, the exact number depending on the agent's position on the chessboard (straight edge, corner, or interior). Each agent has a "happiness rule" determining whether he is happy or not at his current house location. If unhappy, he either seeks an open square where his happiness rule can be satisfied or he exits the city altogether.

Example of a Happiness Rule: An agent with only one neighbor will try to move if the neighbor is of a different class than his own; an agent with two neighbors will try to move unless at least one neighbor is of the same class as his own; an agent with three to five neighbors will try to move unless two neighbors are of the same class as his own; and an agent with six to eight neighbors will try to move unless at least three neighbors are of the same class as his own.
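This happiness rule translates directly into code. Here is a minimal Python sketch (the function name and class labels are illustrative, not Schelling's):

```python
def is_happy(own_class, neighbor_classes):
    """Happiness rule quoted above: an agent is happy if enough
    neighbors are of the agent's own class, where 'enough' depends
    on how many neighbors the agent currently has (0 to 8)."""
    n = len(neighbor_classes)
    same = sum(1 for c in neighbor_classes if c == own_class)
    if n == 0:
        return True          # no neighbors: nothing to be unhappy about
    if n <= 2:
        return same >= 1     # one or two neighbors: need at least one alike
    if n <= 5:
        return same >= 2     # three to five neighbors: need at least two alike
    return same >= 3         # six to eight neighbors: need at least three alike
```

Coupling this rule with a loop in which each unhappy agent moves to an open square (or exits) yields the tipping dynamics discussed in this section.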

The exact degree of segregation that emerges in the city depends strongly on the specification of the agents' happiness rules. Batten notes that, under some rule specifications, Schelling's city can transit from a highly integrated state to a highly segregated state in response to a small local disturbance. For example, if some agent decides to exit the city altogether, his neighbors might become discontented with their current locations and try to move. These efforts can lead to a chain reaction in which increasing numbers of agents become discontented and attempt to move. Thus, subject to some small initial disturbance (e.g., the exit of a few agents from the city), a city initially in a highly integrated state can "tip" to a highly segregated state.

TECHNICAL NOTE: Schelling's model, like Bak's sand pile model, can be implemented on the computer as a two-dimensional "cellular automaton." Cellular automata as a form of interaction model will be more carefully discussed in a later part of the course.

4. Do power laws have predictive power? (Batten, pp. 19-22)

Batten's discussion here is very informal. He suggests (without theoretical proof or constructive demonstration) that the size distribution of avalanches in Per Bak's sand pile model and the size distribution of chain reactions in Schelling's segregated city model display power law properties. In Figure 1.7 (p. 21), Batten provides a graph of what a power law distribution for Bak's sand pile model might look like, with the "size class" of avalanches appearing as the independent variable c on the horizontal axis and the "number of avalanches of size class c" appearing as the dependent variable n on the vertical axis, both variables measured in logs.

Batten goes on to state (p. 22): "Although it's too early to say for sure, it's likely that many dynamic phenomena discussed in this book obey power laws." He takes up scale invariance and power laws again in chapters 2 and 5.

Power laws are now a controversial topic. So many researchers have claimed to find power law properties in so many different systems that one wonders what the substantive content of such claims really is. This could be an interesting (challenging) topic for a student project.

5. Punctuated equilibrium or punctuated change? (Batten, pp. 22-24)

"Punctuated equilibrium" is another term that tends to be loosely applied to many complex adaptive systems. It would be good to have a more precise definition to avoid ambiguities, preferably one that does not make use of the term "equilibrium."

For example, in my ACE labor market experiments, and in my experiments with iterated prisoner's dilemma with choice and refusal of partners, I have witnessed numerous runs in which the average fitness levels of the agent types undergo long periods of stasis interrupted by periods of sudden dramatic change. On the other hand, other aspects of these agents (e.g., their strategies, their interaction partners) are continually evolving -- there is no stasis. In what sense, then, can the system be said to be in an "equilibrium state" in the supposed "stasis" periods in which average fitness levels are approximately constant? The answer clearly depends on the precise definition given for a system's "state."

In these pages, Batten notes that a punctuated equilibrium pattern appears to characterize a wide variety of systems, including: Per Bak's sand pile model; Tom Ray's Tierra (a continuously evolving computer world of digital hosts and parasites); the real-world process of scientific advance (as described by Thomas Kuhn); real-world economies (as described, for example, by the Austrian economist Joseph Schumpeter); and some of the most beloved pieces of classical music. This interesting hypothesis warrants much more careful scrutiny.

6. Bulls, Bears, and Fractals (Batten, pp. 24-29)

Batten questions the "efficient markets hypothesis," according to which stock market traders process new information so efficiently that stock share prices instantly change to reflect all new information. Consequently, the current price of a stock share is the best possible predictor of its true ("fundamental") value. If the efficient markets hypothesis were true, technical trading (trying to use patterns in the historical record of past stock market prices to predict future stock market prices) would be a complete waste of time.

The puzzle is that many real-world stock market traders swear by technical trading. They believe that patterns in past stock market prices can be important predictors of future price trends. Many also believe that "nonfundamental" phenomena such as market psychology, fads, and herd (bandwagon) effects -- that is, phenomena having nothing to do with the actual financial soundness and profitability (hence dividend payouts) of firms -- can significantly influence the movement of stock share prices.

Batten notes, correctly, that the evidence regarding the efficiency of stock markets is mixed. Stock markets seem to be reasonably efficient. On average, traders have a hard time beating the return rate on a large diversified "stock market portfolio" such as the Standard and Poor's 500 Index. [The latter index incorporates 500 stocks drawn from various industries, and accounts for more than 80 percent of the market value of all stocks listed on the New York Stock Exchange.]

On the other hand, trade volume and volatility data appear to be at odds with the purely "fundamentalist" explanation of stock share pricing implied by the efficient markets hypothesis: namely, that the current price of a stock share should equal the present value of its future expected dividend stream. Moreover, some researchers have concluded that financial price data exhibit definite types of regularities, including patterns of self-similarity on different time scales and conformity to power laws.

Attempting to understand these puzzling "stylized facts" about financial markets is now a hot topic for ACE researchers. For pointers to some of this work, see ACE Research Area: Financial Market Issues.

One important conclusion that Batten draws from his discussion in this section is that complex patterns in financial data, and in economic data in general, are created by long periods of evolution. He cautions (p. 28) that such patterns cannot be understood "by studying economic change within a time frame that is short compared with the economy's overall evolution."

Another important lesson he draws (p. 29) is that "wherever contingency is pervasive, detailed long-term prediction becomes impossible. For example, many kinds of economic changes are unpredictable. But that very fact doesn't mean that they're also unexplainable." He concludes that "the main problem with understanding our economic world is that we have no reliable benchmarks with which to compare it." By this he appears to mean that, for the real world, we can't re-run the tape.

Batten takes up these issues again in Chapter 7 of his book, a chapter which focuses on coevolving markets.

7. Stasis and Morphogenesis (Batten, pp. 29-39)

Batten notes, correctly, that conventional economic theory tends to focus on economic models in which only negative feedback loops are operative and diminishing returns to scale prevail. Such models tend to have well-defined equilibria that are stable, predictable, and resistant to change. Thus, such models predict stasis as the ultimate economic outcome.

Batten argues, to the contrary, that real world economies are characterized by positive feedback loops and increasing returns to scale. An important point he stresses is that increasing returns prevent an economy from returning to its original state. Thus, economies governed by increasing returns undergo path-dependent structural change, which Batten refers to as morphogenesis. They can also exhibit "lock in" effects, whereby firms gain an advantage over their rivals simply by being the first to produce a certain good or service.

Although economists generally recognize that feedback loops in real-world economies can be positive and that increasing returns are possible, models incorporating increasing returns tend to be analytically intractable, hence difficult to work with. Moreover, some economists disagree with Batten regarding the empirical importance of increasing returns and lock-in effects. The latter issue will be taken up more carefully in later Batten chapters. It has also recently gained attention in the popular press due to the prominent role it has played in the Microsoft Anti-Trust case. For pointers to resources related to this issue, see ACE Research Area: Networks, Path-dependence, and Lock-In Effects.

8. Learning, Evolution, and Coevolution (Batten, pp. 39-43)

Based on work by famous researchers such as Ilya Prigogine and Joseph Schumpeter, Batten formulates the following important conjectures (pp. 40-41): (1) "Morphogenesis and disequilibrium are more influential states in an evolving economy than stasis and equilibrium;" (2) "(S)elf-organizing human systems possess an evolutionary drive that selects for populations with an ability to learn, rather than for populations exhibiting optimal behavior;" and (3) "Learning isn't just evolutionary; it's coevolutionary." Much of the rest of his book is devoted to the justification of these claims.

Glossary of Basic Concepts

Complex System:
There is no one accepted definition of a complex system in either the natural sciences or the information sciences. However, as discussed at length in an interesting essay by Sunny Y. Auyang titled "Synthetic Analysis of Complex Systems I - Theories", the definitions proposed by various researchers share some common elements. They postulate systems whose structures are generated by the combination of many interacting constituents. These constituents can have a rather simple structure, and the rules governing their interactions can be rather simple in form as well. However, because of the large number of constituents, the overall interaction patterns generated among the constituents can exhibit intricate persistent regularities that appear to be impossible to predict directly from the structure of the constituents and the form of their interaction rules. In short, complex systems are characterized by "strong interaction effects" -- see below for further discussion of this concept.

Complex Adaptive System:
This is another concept that has no one accepted definition. Here is a range of possible definitions that have been used in the literature: A complex adaptive system (CAS) is a complex system that contains at least some units which are…

Physics (p. 3):
Science of matter and energy, and their interactions

Equilibrium State (traditional math/physics definition) (p. 3):
A "rest point" where all motion in the state of a system ceases.

Examples:

  1. Let t = time, x(t) = a real number describing the state of a system at time t, and dx(t)/dt = the rate of change in the system state at time t (i.e., the "derivative of x(t) with respect to t"). Suppose dx(t)/dt = f(x(t)), where

    (1) f(x(t)) = - x(t)

    Then dx(t)/dt = 0 for x(t) = 0, hence 0 is a rest point (equilibrium state) for system (1).

  2. Alternatively, suppose dx(t)/dt = f(x(t)), where

    (2) f(x(t)) = x(t)^2

    Then dx(t)/dt = 0 for x(t) = 0, hence 0 is a rest point (equilibrium state) for system (2).

Mean-field approximation (p. 5):
Roughly, mean-field approximation means that you assume that each constituent of a system is only interacting with the **mean** (average) of the forces (i.e., field) generated by the actions of all other constituents of the system. You do not take into account specific constituent interactions (e.g., how any particular constituent or constituent group interacts with another constituent or constituent group). ACE puts a strong stress on specific constituent interactions, hence ACE is not a mean-field approach.

Trajectory of a System (p. 7):
A sequence describing the values taken on by the state of the system over time. For example, letting x(t) = state of the system at time t, the system trajectory over successive times 0, 1, 2, ... is given by the sequence (x(0), x(1), x(2), ...).

Equilibrium (usage in modern economic theory):
Any trajectory for an economic system along which: (a) consumers and producers make optimal use of all available information in forming their expectations; (b) consumers and producers formulate their optimal demands and supplies (purchase and sale plans), conditional on their expectations; (c) the expectations of consumers and producers are correct; (d) the optimal plans of consumers and producers are realized.

This conception of an equilibrium can encompass a "rest point" as in the traditional equilibrium definition, because the trajectory in question can be "degenerate" in the sense that the state takes on the same value all along this trajectory.

Example: A rest point x=1 is equivalent to a state trajectory of the degenerate form (x(0), x(1), x(2), ...) = (1, 1, 1...)

However, this conception of an equilibrium can also encompass a changing system state as long as the motion of the system state over time is governed by a probability distribution that is known to the agents.

Example of a system governed by a probability distribution known to all agents: Suppose the current state of a system is 10, and that all agents know that the probability that the next state of the system will be 11 is 1/4 (i.e., one chance out of four), the probability that the next state of the system will be 9 is 3/4 (i.e., three chances out of four), and the probability that the next state of the system will be something other than 11 or 9 is 0. More generally, suppose that all agents know the probability of each possible next state for each possible current state.

Attractor (p. 7):
An equilibrium (point or trajectory) for a system that is at least locally stable, in the sense that the state of the system converges to (is "attracted" to) this equilibrium if the state ever gets "sufficiently close" to this equilibrium. The "sufficiently close" region is called the basin of attraction of the equilibrium.

Example 1:

For system (1), it can be shown that the basin of attraction for the equilibrium point 0 includes the entire "real line" (i.e., the collection of all real numbers). Why? If x(t) currently has a negative value (e.g., -1), then dx(t)/dt takes on a positive value (e.g., 1), implying that x(t) increases (hence moves in the direction of 0) as time t increases. Conversely, if x(t) currently takes on a positive value (say 1), then dx(t)/dt takes on a negative value (e.g., -1), implying that x(t) will decrease (hence move in the direction of 0) as time increases. Thus, given ANY current value for x(t) at any time t, the system state x(t) will move in the direction of 0 as time increases.

Example 2:

For system (2), it can be shown that the basin of attraction for the equilibrium point 0 is limited to all non-positive real numbers. (Why?)
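The different basins of attraction for systems (1) and (2) can be checked numerically. The sketch below uses a crude forward-Euler approximation of each differential equation (the step size and time horizons are arbitrary illustrative choices):

```python
def simulate(f, x0, dt=0.01, steps=2000):
    """Forward-Euler approximation of dx/dt = f(x), starting from x0."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

f1 = lambda x: -x       # system (1): dx/dt = -x
f2 = lambda x: x * x    # system (2): dx/dt = x^2

# System (1): starts on either side of 0 are attracted to 0.
assert abs(simulate(f1, 1.0)) < 1e-6
assert abs(simulate(f1, -1.0)) < 1e-6

# System (2): a negative start is attracted to 0, but a positive
# start moves away from 0.
assert abs(simulate(f2, -1.0)) < 0.1
assert simulate(f2, 0.5, steps=150) > 0.5
```

This also suggests the answer to the "(Why?)": for system (2), dx/dt = x(t)^2 is positive whenever x(t) > 0, so any positive starting state is pushed away from 0 rather than toward it.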

Self-Organizing System (p. 8):
Roughly speaking, a system consisting of distributed interacting constituents (whether inanimate particles or living creatures) is said to be "self-organizing" if it tends to organize itself over time into some kind of persistently maintained global pattern without intervention from any "top down" controller.

Stochastic System (p. 8):
A system whose state dynamics are nondeterministic (uncertain), that is, describable at best in terms of probabilities. For example, given that a system is currently in state x, suppose there are two possible "next" states the system could move to in the next time instant, and the best that anyone currently observing the system can do to predict which of these two possible next states will occur is to assign a probability to each possibility (e.g., a 50-50 chance). Then this system is a stochastic system.

Stochastic Process (p. 8):
A time-dated sequence of random variables, that is, a time-dated sequence of variables whose exact values are probabilistically determined.

Example: For each time t = 1, 2, ..., let X(t) = a variable that takes on the value 1 if a coin-toss at time t lands heads and the value -1 if a coin toss at time t lands tails, where the probability of landing heads is 1/2 and the probability of landing tails is 1/2. Then X(t) is a random variable, and the sequence (X(1), X(2), X(3), ...) is a stochastic process.
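The coin-toss process just described is easy to realize on a computer. Here is a minimal Python sketch (the function name and seed are illustrative):

```python
import random

def coin_toss_process(T, seed=123):
    """One realization (X(1), ..., X(T)) of the coin-toss process:
    each X(t) is +1 (heads) or -1 (tails), each with probability 1/2."""
    rng = random.Random(seed)
    return [1 if rng.random() < 0.5 else -1 for _ in range(T)]

path = coin_toss_process(1000)
```

Each call with a different seed produces a different realization of the same stochastic process; the sequence of partial sums X(1) + ... + X(t) is the classic "random walk."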

NOTE: The sequence of states for a stochastic system constitutes a stochastic process, and this is the sense in which Batten (p. 8) uses the phrase "stochastic process."

Interactive System (p. 11):
A system is weakly interactive if the effects of a local disturbance remain local -- that is, if a local disturbance in one part of the system has no effect on more distant parts of the system. A system is strongly interactive if the effects of a local disturbance do not remain local -- that is, if a local disturbance in one part of the system has effects even on distant parts of the system.

Self-Organized Critical State of a System (pp. 11-12):
See the glossary entry, below, on Per Bak's Sand Pile Model for a discussion of this non-straightforward concept in a more concrete setting.

Phase Transition (p. 11):
In traditional equation-based systems theory, a phase transition refers to any structural change in the equations describing the dynamics of a system. For example, suppose at some time T a system switches from the dynamics described by equation (1) to the dynamics described by equation (2).

However, as will be seen in Part I.B of the course, ACE models are not explicitly equation based. Consequently, the concept has to be generalized to mean, roughly, any persistent change in the basic nature of the dynamics exhibited by a system. Batten discusses one example: the transition of a sand pile from weakly interactive dynamics to strongly interactive dynamics as more and more grains of sand are added to it.

Homeostasis (p. 12):
Batten states that a system exhibits "homeostasis" if "it is resistant to small perturbations." A bit more carefully stated, a system in a particular state x exhibits homeostasis if the system eventually returns to state x when subjected to small displacements ("perturbations") from x.

Power Law (p. 20):
Roughly, an event is said to behave in accordance with a power law if the frequency of the event is inversely proportional to its magnitude (size, strength, rank,...). The canonical example is earthquakes. The frequency of an earthquake of a particular magnitude (measured on the Richter scale) has been observed to be inversely proportional to this magnitude, implying that smaller earthquakes occur more frequently than larger earthquakes.

More generally, two positively-valued variables C and N are said to satisfy a power law relationship if there exist positive constants K and s such that

(3) N = K C^(-s)

Taking the "natural logarithm" (log) of each side of (3), and letting n=log(N), k=log(K), and c=log(C), it can be shown that one obtains

(4) n = k - sc

Consequently, when graphed on a double logarithmic plot, the power law relation (3) between N and C becomes the linear relation (4) between n=log(N) and c=log(C), with intercept given by k=log(K) and slope given by -s.

Power laws imply "invariance to scale." For example, suppose N = N(C) denotes the frequency of an earthquake of magnitude C, and that N and C satisfy the power law (3). Then, for any constant r, it follows that the relative frequency of earthquakes of magnitudes rC and C, given by N(rC)/N(C), reduces to r raised to the power -s. Consequently, the relative frequency is a function of the relative magnitude, r, but not of the scale of the magnitude, C.
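Both the log-linear form (4) and the scale-invariance property can be verified numerically. A short Python check (the constants K and s are arbitrary illustrative choices; any positive values work):

```python
import math

K, s = 1000.0, 1.5        # illustrative positive constants

def N(C):
    return K * C ** (-s)  # power law (3)

# Relation (4): on the log scale, n = k - s*c.
k = math.log(K)
for C in (2.0, 10.0, 500.0):
    n, c = math.log(N(C)), math.log(C)
    assert abs(n - (k - s * c)) < 1e-9

# Scale invariance: N(rC)/N(C) = r^(-s), independent of C.
r = 3.0
for C in (1.0, 7.0, 42.0):
    assert abs(N(r * C) / N(C) - r ** (-s)) < 1e-9
```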

Self Similarity (pp. 22, 26):
Invariance against changes in scale or size, in the sense that small-scale patterns combine to form similar patterns at larger scales. Example: The coastline of the United Kingdom. A foot of coastline tends to have the same rugged pattern as miles of coastline viewed from a plane.

Punctuated Equilibrium (p. 22):
A term associated with paleontologists Niles Eldredge and Stephen Jay Gould, who argued in a 1972 article that evolutionary change is not gradual but rather proceeds in fits and starts. Long periods of stasis (no change, or "equilibrium") are interrupted (punctuated) by sudden brief periods of rapid change. During the period of stasis, selection pressure is conservative, maintaining roughly constant organism designs. During the punctuation period, large shifts in organism designs take place. Critics of Eldredge and Gould (e.g., Daniel Dennett, Darwin's Dangerous Idea, pp. 282-299) argue that Eldredge and Gould do not clearly explain the mechanism by which these rapid design shifts are supposed to occur.

Morphogenesis (pp. 24, 34):
Batten uses this term to mean the occurrence of structural change in a system over time (e.g., due to positive feedback loops, or to a "punctuated equilibrium" event). He envisions it as "a struggle between two or more attractors." In biology, morphogenesis is the process by which a "phenotype" (an actual physical organism) develops through time under the direction of its "genotype" (genetic instructions encoded in its DNA) in the context of a particular environment.

Efficient Markets Hypothesis (p. 25):
Roughly, the hypothesis that current stock share prices efficiently reflect all available information, so that nothing is to be gained from a study of past prices. In particular, given current stock market prices, past stock market prices provide no additional help for the prediction of future prices. An implication of this hypothesis is that technical traders (i.e., traders who attempt to deduce future stock prices by studying patterns in past stock price data) are simply wasting their time.
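One stylized consequence of the hypothesis is that price changes should resemble a random walk, with past changes giving no help in forecasting future ones. A minimal sketch (the random-walk model and all parameter values here are illustrative assumptions, not part of Batten's discussion):

```python
import random
import statistics


def random_walk_prices(n, seed=0):
    """Simulate a price series whose changes are independent random shocks."""
    rng = random.Random(seed)
    p, prices = 100.0, [100.0]
    for _ in range(n):
        p += rng.gauss(0, 1)  # today's change carries no information from yesterday
        prices.append(p)
    return prices


prices = random_walk_prices(5000)
returns = [b - a for a, b in zip(prices, prices[1:])]

# Lag-1 autocorrelation of price changes: under the random-walk model it is
# near zero, so studying past changes yields no predictive edge.
x, y = returns[:-1], returns[1:]
mx, my = statistics.fmean(x), statistics.fmean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
corr = cov / (statistics.pstdev(x) * statistics.pstdev(y))
assert abs(corr) < 0.1
```

The near-zero autocorrelation is what makes technical trading fruitless in this idealized world; whether real markets behave this way is exactly what the hypothesis asserts and what its critics dispute.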

Fractal (pp. 26-28):
In these pages Batten briefly summarizes evidence discovered by the mathematician Benoit Mandelbrot that various commodity price sequences exhibit "fractal" properties. [Batten takes a more careful look at this evidence later in his book, specifically in Chapter 7.] The rigorous definition of a "fractal" involves advanced mathematics (see the Technical Note, below), so only a rough description will be given here.

Beginning in the 1950s, Mandelbrot conceived and developed a new geometry of nature that describes many of the irregular and fragmented patterns in nature (clouds, mountains, coastlines, bark,...). This geometry is based on a family of shapes he called fractals. The regularities and irregularities of fractals are statistical in nature, and each fractal shape tends to exhibit similar degrees of regularity and irregularity at all scales. Mandelbrot coined the term "fractal" from the Latin adjective fractus, whose corresponding Latin verb frangere means "to break."

Technical Note: In formal mathematical terms, a fractal is a set in Euclidean space whose "Hausdorff-Besicovitch dimension" strictly exceeds its "topological dimension." For a more detailed understanding of fractals, see Benoit B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman and Company, revised edition 1983.

Negative Feedback Loops (p. 29):
Processes that tend to counteract, or cancel out, deviations from a current system state.

Example: Government unemployment benefit programs (i.e., government programs that distribute payments to unemployed workers) constitute a negative feedback loop. In a recession, the income that workers lose due to job loss is partially offset by a rise in unemployment benefits. The reverse occurs during the expansion phase of a business cycle.

Positive Feedback Loops (p. 29):
Processes that tend to amplify deviations from a current system state.

Example: Bank panics. Suppose you have a deposit account at First National Bank. If you see other people hurrying to withdraw their money from First National Bank because (for whatever reason) they fear the bank is going to go under, you might also begin to fear for your money and hurry to withdraw your deposit account funds. If this "chain reaction" continues, lots of people will end up seeking to withdraw their funds from First National Bank, all within a relatively short period of time. The end result will then be the bankruptcy of First National Bank even though, prior to the bank panic, the bank was not actually in any financial difficulty.
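The contrast between the two kinds of feedback loop can be sketched with a simple adjustment rule (the rule and the gain values are illustrative assumptions, not a model from Batten):

```python
def step(x, target, gain):
    """One period: adjust x by gain times its current deviation from target.
    gain < 0 damps deviations (negative feedback);
    gain > 0 amplifies them (positive feedback)."""
    return x + gain * (x - target)


target = 100.0
neg = pos = 110.0  # both systems start 10 units above the reference state

for _ in range(20):
    neg = step(neg, target, -0.5)  # negative feedback loop
    pos = step(pos, target, +0.5)  # positive feedback loop

assert abs(neg - target) < 1e-4    # deviation has nearly vanished
assert abs(pos - target) > 1000    # deviation has exploded
```

After 20 periods the negative-feedback system has returned essentially to its reference state, while the positive-feedback system, like the bank panic above, has amplified the initial disturbance far beyond its starting size.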

Increasing and Diminishing Returns to Scale (p. 31):
A production process exhibits increasing returns to scale if, as output production increases, there is a decrease in average cost (i.e., in the production cost per unit of output produced). A production process exhibits diminishing returns to scale if, as output production increases, the unit cost increases. (See Batten, Figure 1.10, page 32)
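A quick numerical illustration (the cost figures are invented for the example): a production process with a large fixed setup cost and constant per-unit cost exhibits increasing returns to scale, since the fixed cost is spread over more units as output grows.

```python
def average_cost(q, fixed=1000.0, marginal=2.0):
    """Average cost when total cost = fixed + marginal * q.
    The fixed setup cost is spread over q units, so average cost falls with q."""
    return (fixed + marginal * q) / q


# Increasing returns to scale: average cost declines as output rises.
costs = [average_cost(q) for q in (10, 100, 1000)]
assert costs[0] > costs[1] > costs[2]  # 102.0 > 12.0 > 3.0
```

Diminishing returns to scale would correspond instead to a total cost that grows more than proportionally with output (e.g., total cost q**2, giving average cost q, which rises with q).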

Path Dependent Process (p. 33):
Where the process ends up depends on the particular history through which it unfolded: initial conditions matter, and so can small chance events early in the process, which may become locked in over time.
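A standard illustration of path dependence (not taken from Batten's text) is the Polya urn: an urn starts with one red and one black ball, a ball is drawn at random, and it is returned together with one more ball of the same color. Early draws are pure chance, yet they lock in very different long-run color shares:

```python
import random


def polya_urn(steps, seed):
    """Polya urn: each draw adds another ball of the drawn color,
    so early random draws get reinforced and shape the long-run outcome."""
    rng = random.Random(seed)
    red, black = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + black):
            red += 1
        else:
            black += 1
    return red / (red + black)


# Identical processes, different histories (seeds): the long-run red share
# settles at markedly different values depending on the early draws.
shares = [polya_urn(10_000, seed) for seed in range(5)]
assert all(0.0 < s < 1.0 for s in shares)
assert len({round(s, 2) for s in shares}) > 1  # histories end up in different places
```

Each run converges to a stable share, but which share it converges to is determined by the accidents of its own early history, which is the essence of path dependence.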

Learning Curve (p. 41):
Batten uses the term learning curve in relation to a particular firm to mean a plot that relates the firm's average cost (i.e., production cost per unit of output produced) to its total produced output. See, for example, the upper plot in his Figure 1.12 (p. 42) which depicts the learning curve for Microsoft's Windows operating system.

NOTE: A more common and neutral name in economics for such a plot is the average cost curve of a firm, plotted with production quantity on the horizontal axis and average cost on the vertical axis (the reverse of Batten's learning curve). The problem with calling such plots "learning curves" is that their shapes may be due more to "economies of scale" effects than to learning effects per se. In the case of Microsoft, the production and release of the first version of Windows (1.0) in 1985 entailed high research and development costs, but subsequent units of Windows 1.0 could then be produced extremely cheaply (implying a strongly declining average cost curve for Windows 1.0). Of course, in actuality, Microsoft has steadily worked to improve successive releases of Windows, which has involved heavy additional research and development expenditures. The exact reason for the strong decline in the price of "the" Windows operating system over time (counting all versions as the "same" product) has been a central issue in the Microsoft Anti-Trust case.

Coevolutionary Multi-Agent System (p. 41):
A system comprising multiple agents in which each agent is changing its mode of behavior over time in reaction to (and possibly also in anticipation of) the modes of behavior expressed by other agents. Consequently, agents are evolving together over time.

Copyright © 2008 Leigh Tesfatsion. All Rights Reserved.