SYNTHETIC ANALYSIS OF COMPLEX SYSTEMS I - THEORIES

Sunny Y. Auyang

 

In the past two or three decades, complexity not only has been a hot research topic but has caught the popular imagination. Terms such as chaos and the butterfly effect have become so common that they find their way into Hollywood movies. What is complexity? What is the theory of complexity or the science of complexity? I don't think there is such a thing as the theory of complexity, nor even a rigid definition of complexity in the natural sciences. There are many theories trying to address various complex systems. What I try to do is to extract the general ideas that are implicit in the theories, and more generally in the way that we face and think about complicated situations.

1. "Beyond Reductionism"

In April 1999, the journal Science published a special section on complex systems. It includes viewpoints from physics, chemistry, molecular biology, ecology, neuroscience, earth science, meteorology, and economics. However, there is no contribution from what the editors, Gallagher and Appenzeller, call "the small, elite group of scientists whose ideas provide the theoretical underpinning for much of what is reported here." I wonder who these elites are. In particular, what is the substantive theory that can underpin so many sciences? Can it be very much more definite than the idea Gallagher and Appenzeller put in their title, "Beyond Reductionism"? They do not try to define complexity, but invite each author to explain what "complex" means. I have copied some answers in the handout. They make some common points. Most of the complexity we see around us can be traced to a similar source, the structures generated by the combination of many interacting constituents. The constituents themselves can be rather simple, and the relation between any two may be simple too. But because there are so many constituents in a large system, their multiple relations generate a relational network that can be highly complex, variegated, and surprising.

Most interestingly, although the countless relations pulling and tugging in all directions are very complicated on the small scale, sometimes the network as a whole exhibits large-scale patterns that can be conceptualized rather simply, just as crazy swirls of color crystallize into a meaningful picture when we step back from the wall and take a broader view of a mural. These salient patterns are the emergent properties of the systems. They are not found in individual constituents, because they belong mainly to the structures arising from the relations among the constituents. The rigidity of solids and the turbulence of fluids emerge from the intangible quantum phases of elementary particles; life emerges from inanimate matter; consciousness emerges from neural organization; social institutions emerge from individual actions. Without emergent properties the world would be dull indeed, but then we would not be here to be bored.

One cannot see the patterns of a mural with his nose on the wall; he must step back. The ability to adopt various intellectual focuses and perspectives suitable for various topics is essential to the study of complexity. Hence a striking feature of the sciences of complexity is the diversity of theoretical perspectives, models, and levels of description. Scientists use different concepts and theories to describe large composite systems and their constituents, e.g., thermodynamics describes macroscopic systems and mechanics describes their molecular constituents. The different theories are like the telescopes and microscopes we use to see distant and microscopic objects. They are not reducible; i.e., concepts for the systems as wholes are not redundant and dispensable in favor of concepts for their constituents alone. Irreducibility does not imply incommensurability or unconnectability. Theories for complex systems and theories for their constituents can be connected, but the connection is far more complicated than the simplistic prescription of reductionism. For example, to connect thermodynamics to mechanics requires a whole new theory, statistical mechanics. Statistical mechanics does not dispense with thermodynamic concepts; instead, it enlists them to join forces with concepts in mechanics to explain the complexity of composition. This nonreductive connection between different descriptive levels is synthetic analysis.

Synthesis and analysis have been venerable methods in science since the ancient Greeks. They were said to be Galileo's methods, which were adopted by Hobbes. Descartes united synthesis and analysis into a single principle in his Rules for the Direction of the Mind, and Newton made a similar formulation in the Opticks. Synthetic analysis first gains a synoptic view of the complex system as a whole and discerns its gross features. To explain these features, it analyzes the system into weakly interacting parts and modules, which are studied independently and thoroughly. Finally it synthesizes the results for the parts to find the solution to the original complex problem. The process usually involves many approximations, and successive approximations improve and refine the solution. Synthetic analysis aims to answer questions about the composite system; therefore it never totally loses sight of the whole, even when it looks at the parts. In this it is the opposite of reductionism, which is solely concerned with the parts. Here we will consider examples of synthetic analysis from economics and condensed matter physics, which offer comprehensive theoretical frameworks. In a later talk we will consider the case of molecular biology and genetics, where synthetic analysis is carried out with little overarching theory.

2. Two classes of complex phenomena

Many phenomena are too complex for theorization. Among those that have yielded to comprehensive theoretical representations, two classes stand out: nonlinear dynamics and mass phenomena. Nonlinear dynamical processes are deterministic: their evolution is governed by a dynamical rule that determines a unique successor to each step. The logistic equation x_{n+1} = a x_n (1 - x_n) is an example of a dynamical rule; the parameter a is a fixed constant (values between 0 and 4 keep x within the unit interval). You pick an initial value x_0, plug it into the right-hand side, and calculate the value of x_1. Then you put x_1 into the right-hand side and repeat, to generate the dynamic process. The calculation is trivial. However, such simple dynamic rules can generate chaos, instability, bifurcation, strange attractors, and all the buzzwords you find in the popular complexity literature. When a = 4, for instance, the logistic process is totally chaotic and unpredictable in the long run.
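As a concrete illustration (a minimal sketch in Python, not part of the original talk; the starting value and number of steps are arbitrary), the logistic rule can be iterated in a few lines; with a = 4 the orbit wanders over the unit interval without ever settling into a cycle:

    # Iterate the logistic map x_{n+1} = a * x_n * (1 - x_n).
    def logistic_orbit(a, x0, steps):
        xs = [x0]
        for _ in range(steps):
            xs.append(a * xs[-1] * (1.0 - xs[-1]))
        return xs

    # a = 4 puts the map in its fully chaotic regime; x0 = 0.2 is an arbitrary start.
    for n, x in enumerate(logistic_orbit(a=4.0, x0=0.2, steps=20)):
        print(n, round(x, 6))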

Chaos means that minute differences in the initial conditions are amplified exponentially. This has great consequences. Most important physical quantities are continuous and represented by real numbers. In such cases exact initial conditions are not possible, because the exact specification of a real number requires infinite information. Any initial condition you give is actually an infinite number of conditions clustered within a certain margin of error. For regular systems, the processes ensuing from these conditions tend to bundle together, so that the dynamic equation can predict their evolutionary courses to within a similar margin of error. For chaotic systems, the processes may bundle together for a short while, but will eventually diverge, and diverge big time. Therefore, although the dynamic equation determines each step uniquely, it loses its predictive power over the long run, because the answers spread all over the place. Chaos and long-term unpredictability are emergent properties of deterministic processes.
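A minimal numerical sketch of this sensitive dependence (again using the logistic map; the initial offset of 1e-10 is an arbitrary choice): two orbits started almost exactly together separate at a roughly exponential rate until the separation is as large as the interval itself.

    # Two orbits of the logistic map (a = 4) starting 1e-10 apart:
    # their separation grows roughly exponentially until it saturates.
    def step(a, x):
        return a * x * (1.0 - x)

    a, x, y = 4.0, 0.2, 0.2 + 1e-10
    for n in range(60):
        x, y = step(a, x), step(a, y)
        if n % 10 == 9:
            print("step", n + 1, "separation", abs(x - y))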

Mass phenomena are the behaviors of many-body systems, large systems made up of a great many interacting constituents belonging to a few types and interacting by a few types of relations: a solid made up of one or two kinds of atoms, a national economy of consumers and producers.

Nonlinear dynamics and mass phenomena cover a very wide range of topics in many sciences. Nevertheless, they share some general commonalities. The systems are large; many-body systems consist of millions or zillions of constituents; and novel features such as chaos show up only in the long run, when a process has accumulated many steps. The steps and constituents may be monotonous and predictable, but the systems they constitute may be highly volatile and unstable. Chaos appears in deterministic processes. We call a catastrophe a meltdown, as in the recent meltdown of the Asian economies; a literal meltdown occurs whenever you put ice cubes in your drink. In a phase transition such as melting, the entire structure of a system changes radically; that's instability. Chaos and meltdown are examples of emergent properties. Theoretically, the emergence of such large-scale properties is apparent only in a synoptic view that grasps the systems as wholes as well as their constituents. The synoptic view is the conceptual framework of synthetic analysis.

3. Synthetic framework for analysis

Dynamics is as old as Newton. However, classical dynamics does not have the conceptual means to represent phenomena such as chaos or bifurcation. What makes modern dynamics more powerful is the global geometric view introduced by Henri Poincaré at the end of the nineteenth century.

Classical dynamics is limited to individual processes; it is satisfied to find the behavior of a system given a particular initial condition. For instance, given an initial displacement, it describes how an undamped and undriven pendulum swings back and forth.

In contrast, modern dynamics introduces an expansive conceptual framework that includes processes for all possible initial conditions and, if a system depends on a set of parameters, its behaviors for all values of the parameters. The framework summarizes all these possible behaviors by a portrait in the system's state space. The state of a dynamic system is a summary of all its properties at one moment of time, and a process is a sequence of states whose succession is governed by the dynamic equation. The state space is the collection of all the states that a system can possibly achieve. It has become one of the most important concepts in all the mathematical sciences.

A state of a pendulum is specified by two values, its angular displacement and its angular momentum. The two variables span the state space of the pendulum. As the pendulum swings, its state traces out an ellipse in the state space; its displacement reaches its maximum when its momentum vanishes, and its momentum is maximum when it is not displaced from the vertical. Another process starting from an initial condition with higher energy traces a larger ellipse. When the energy is so great that the pendulum goes over the top, its behavior changes radically; it no longer swings but rotates about its pivot, and its momentum never vanishes. The state-space portrait of the pendulum shows two distinct types of motion, swing and rotation, separated at the critical energy where the pendulum precariously stops at the top. The boundary between the two types of motion is called the separatrix.
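A minimal numerical sketch of the two regimes (in Python, with the pendulum written in dimensionless units; the initial momenta are just examples chosen to lie below and above the separatrix energy):

    import math

    # Undamped pendulum in dimensionless units (mass = length = gravity = 1):
    # d(theta)/dt = p,  dp/dt = -sin(theta).
    # Energy E = p**2 / 2 + (1 - cos(theta)); the separatrix sits at E = 2.
    def final_state(theta0, p0, dt=0.01, steps=5000):
        theta, p = theta0, p0
        for _ in range(steps):
            p -= math.sin(theta) * dt   # semi-implicit Euler keeps the energy error bounded
            theta += p * dt
        return theta, p

    # Below the separatrix (E = 1.125): the pendulum swings, theta stays bounded.
    print("swing:   ", final_state(theta0=0.0, p0=1.5))
    # Above the separatrix (E = 3.125): the pendulum rotates, theta keeps growing.
    print("rotation:", final_state(theta0=0.0, p0=2.5))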

For dissipative systems, the separatrix would separate two basins of attraction, and the ellipse would become an attractor. If the system is chaotic, the attractor is strange. A complicated system can have many attractors, and its behavior changes dramatically when it shifts from one attractor to another. If a system depends on a parameter, its whole system of attractors can change qualitatively as the parameter takes on different values; such a change is called a bifurcation.
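A minimal sketch of this parameter dependence, again using the logistic map as a stand-in (the parameter values 2.8, 3.2, and 3.9 are illustrative): after the transient dies out, the long-run behavior settles onto a fixed point, a period-2 cycle, or a chaotic band, depending on the parameter.

    # Long-run behavior of the logistic map for several parameter values:
    # a fixed point, a period-2 cycle, and a chaotic band -- a bifurcation
    # sequence as the parameter changes.
    def attractor_sample(a, x0=0.2, transient=1000, keep=8):
        x = x0
        for _ in range(transient):      # discard the transient
            x = a * x * (1.0 - x)
        out = []
        for _ in range(keep):           # sample the long-run behavior
            x = a * x * (1.0 - x)
            out.append(round(x, 4))
        return out

    for a in (2.8, 3.2, 3.9):
        print(a, attractor_sample(a))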

Generally, the conceptual framework of modern dynamics encompasses all possible processes for a type of system, defines properties of the processes as wholes, and systematically classifies the processes according to their properties. On the other hand, it does not lose sight of individual states, so that we can analyze the processes if we please. Such synthetic-analytic conceptual frameworks are not limited to modern dynamics. They are also the secret behind the probability calculus.

The probability calculus has found application in so many areas that historians talk about a "probability revolution." What is it in the probability calculus that makes it so powerful? No, not chance; the notion of chance is not even defined in the calculus. What makes the probability calculus powerful is the synthetic framework for representing a large composite system as a whole that is susceptible to analysis, for example, treating a sequence of coin tosses as a unit and representing all possible configurations of the sequence in a state space similar to the one just described. Instead of considering the time variation as in dynamics, the probability calculus introduces a systematic way to partition the state space into chunks and calculate the relative magnitudes of the chunks. The relative magnitudes are defined as probabilities. Synthetic conceptual frameworks are powerful, but they are also difficult. They are made possible only by the higher abstraction mathematics underwent in the nineteenth century. Why do we need such heavy conceptual investment in studying complex systems? To answer this question, it is helpful to look at two formal definitions of complexity.
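A minimal sketch of this way of thinking (the choice of ten tosses is arbitrary): take the space of all configurations of n coin tosses as a single whole, partition it into chunks according to the number of heads, and read the relative sizes of the chunks as probabilities.

    from itertools import product
    from math import comb   # requires Python 3.8+

    # State space of n coin tosses: all 2**n configurations, taken as a whole.
    # A "chunk" is the set of configurations with exactly k heads; its relative
    # size within the whole space is the probability of k heads.
    n = 10
    space_size = 2 ** n
    for k in range(n + 1):
        chunk = comb(n, k)               # number of configurations in the chunk
        print(k, chunk, chunk / space_size)

    # Brute-force check for a small space that enumeration gives the same answer.
    configs = list(product("HT", repeat=4))
    two_heads = sum(1 for c in configs if c.count("H") == 2)
    print("P(2 heads in 4 tosses) =", two_heads / len(configs))   # 6/16 = 0.375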

4. Formal Definitions of Complexity and the Combinatorial Explosion

There is no precise definition of complexity and degree of complexity in the natural sciences. I use "complex" and "complexity" intuitively to describe self-organized systems that have many components and many characteristic aspects, exhibit many structures on various scales, undergo many processes at various rates, and have the capability to change abruptly and adapt to external environments. Nevertheless, there are two definitions of complexity in the information and computation sciences that can help us to appreciate the nonreductive strategy for studying complex systems.

The idea of complexity can be quantified in terms of information, understood as the specification of one case among a set of possibilities. The basic unit of information is the bit. One bit of information specifies the choice between two equally probable alternatives, for instance whether a pixel is black or white. Now consider binary sequences in which each digit has only two possibilities, 0 or 1. A sequence with n digits carries n bits of information. The information-content complexity of a specific sequence is measured by the length in bits of the smallest program capable of specifying it completely to a computer. If the program can say of an n-digit sequence, "1, n times" or "0011, n/4 times," then the number of bits it requires is much less than n when n is large. Such sequences with regular patterns have low complexity, for their information contents can be compressed into the short programs that specify them. Maximum complexity occurs in sequences that are random, without any pattern whatsoever. To specify a random sequence, the computer program must repeat the sequence, so that it requires the same amount of information as the sequence itself carries. The impossibility of squeezing the information content of a sequence into a more compact form manifests the sequence's high complexity.
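A rough computational illustration (a sketch only: compressed length under an ordinary compressor is a crude, computable stand-in for information-content complexity, which is itself uncomputable): a patterned sequence compresses to a small fraction of its raw length, while a random sequence hardly compresses at all.

    import random
    import zlib

    # Compressed length as a crude proxy for information-content complexity.
    n = 10_000
    patterned = "0011" * (n // 4)                    # "0011, n/4 times"
    random.seed(0)
    rand = "".join(random.choice("01") for _ in range(n))

    for name, s in (("patterned", patterned), ("random", rand)):
        packed = len(zlib.compress(s.encode(), 9))
        print(name, "raw length:", len(s), "compressed length:", packed)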

Information-content complexity belongs to the definite description of a specific system; thus it is not very useful in science, because science is usually interested not so much in specific systems as in classes of systems that satisfy certain general criteria. It often happens that totally random systems, which have the highest information-content complexity, exhibit other types of regularity that can be characterized rather simply if we are willing to adopt some other criteria of classification, e.g., use the law of large numbers. We have the probability calculus for such systems, and usually systems susceptible to the calculus are regarded as not that complex. Here we have the first instance of the theme of this talk: the flexibility to choose different criteria is paramount in scientific research.

The second definition of complexity describes not systems but problems. Suppose we have formulated a problem in a way that can be solved by algorithms, or step-by-step procedures executable by computers, and now want to find the most efficient algorithm to solve it. We classify problems according to their "size": if a problem has n parameters, then the size of the problem is proportional to n. We classify algorithms according to their computation time which, given a computer, translates into the number of steps an algorithm requires to find the worst-case solution to a problem of a particular size. The computation-time complexity of a problem is expressed by how the computation time of its most efficient algorithm varies with its size. Two rough degrees of complexity are distinguished: tractable and intractable. A problem is tractable if it has polynomial-time algorithms, whose computation times vary as the problem size raised to some power, for instance n^2 for a size-n problem. It is intractable if it has only exponential-time algorithms, whose computation times vary exponentially with the problem size, for instance 2^n. Exponential-time problems are deemed intractable because for sizable n, the amount of computation time they require exceeds any practical limit.
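A small numerical illustration of why exponential time is deemed intractable (the sizes and the rate of a billion steps per second are arbitrary but typical of such back-of-the-envelope comparisons):

    # How a polynomial step count (n**2) and an exponential one (2**n) grow
    # with problem size n; at a billion steps per second the exponential
    # column soon exceeds any practical time budget.
    for n in (10, 20, 40, 60, 80):
        poly, expo = n ** 2, 2 ** n
        print(f"n={n:3d}  n^2={poly:8d}  2^n={expo:25d}  "
              f"seconds for 2^n steps at 1e9/s: {expo / 1e9:.3g}")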

As an example, consider the problem of finding a specific sequence of binary digits among all its possible configurations. The size of the problem is the length of the sequence, the number of digits n it contains. A sequence with 4 digits has 16 possible configurations; a sequence with 40 digits has about a trillion. Generally, as the number of digits n increases linearly, the number of possible configurations of the sequence increases exponentially. This is called the combinatorial explosion of composition. If, given a certain criterion, we have to find a particular sequence by searching through all the possibilities, then the combinatorial explosion makes the problem intractably complex.
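A minimal sketch of such a brute-force search (the target sequence of all 1s is chosen so that the search must examine every configuration, i.e., the worst case):

    import time
    from itertools import product

    # Brute-force search for one target among all 2**n binary sequences:
    # the search space, and hence the worst-case time, doubles with every digit.
    def brute_force_find(target):
        n = len(target)
        for count, candidate in enumerate(product("01", repeat=n), start=1):
            if "".join(candidate) == target:
                return count
        return None

    for n in (4, 12, 20):
        t0 = time.perf_counter()
        tried = brute_force_find("1" * n)           # worst case: last sequence tried
        dt = time.perf_counter() - t0
        print(f"n={n:2d}  configurations examined: {tried:8d}  time: {dt:.3f}s")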

Brute-force search is a venerable strategy in artificial intelligence (AI), and the combinatorial explosion explains why the progress of AI has been rather slow. Take chess for example. Chess, a finite game with rigid rules, is conducive to the method of searching through all possible configurations to find the optimal move. Shortly after the Soviets launched Sputnik in 1957, Herbert Simon predicted that a computer would be the world chess champion within ten years. He was wrong by thirty years. In the interim, computer technology developed so dramatically that the price of computing dropped by half every two to three years. Economists estimate that if the rest of the economy had progressed as rapidly, a Cadillac would now cost $4.98. It would almost be affordable to vacation on the moon. Despite the unexpected advancement in hardware technology, computers' chess victory came so late because of the combinatorial explosion. A chess game is a process made up of constituent moves, and the number of its possible configurations increases exponentially as one thinks more steps ahead. The combinatorial explosion blunts the raw power of the computer to search through the possibilities. This is why, despite its victory in chess, the computer is still a novice at the board game go; there are simply too many possible go configurations.

Some people argue that chess and other AI problems are more difficult than physical science because there are more possible chess configurations than atoms in the universe. The argument is wrong because it compares apples to oranges, or the number of possible configurations to the number of constituents. The correct comparison would be between the number of chess pieces, 32, and the number of atoms, or between the possible chess configurations and the uncountably infinite configurations that the atoms in the universe can make up. The comparison would show that physical science would have gone nowhere if scientists were as one-track-minded as chess machines. Scientists have managed to understand the universe because they do not rely on brute-force enumeration of atomic configurations but can adopt different intellectual perspectives. They are like human chess players. Human players do search, but unlike chess machines, they also recognize strategic patterns, discern good moves, and concentrate on them. Similarly, scientists are not bogged down in microscopic details. To solve complex problems regarding complex systems, they adopt different perspectives and different strategies. Their versatility in selecting perspectives and levels of description is the hallmark of nonreductionism.

5. Holism, reductionism, synthetic analysis

Ontologically, we all agree that a complex system S is solely composed of its constituents C and their interrelations. There is no extra mysterious substance. Because large systems and their constituents are on two organizational levels, their properties can be quite different, as we have seen in the preceding examples. To recognize the system and its constituents and talk about them, we must already have used some concepts. These concepts may be intuitive, they may constitute what philosophers call folk theories, or they may constitute scientific theories. Historically, the system theory (ST) and the constituent theory (CT) are usually developed independently, and there is no guarantee that their concepts will mesh. What is the general nature of the relation between ST and CT?

There are three common attitudes. The first asserts that ST and CT each generalizes in its own way, and theoretical connection between them is impossible. Without theoretical connection, the notion of composition becomes obscure, and it is meaningless to talk of constituents. We are left with two distinct types of systems described by two disjoint theories. The result is dualism.

While dualism opts for isolation, reductionism opts for conquest. It asserts that system concepts and theories are in principle dispensable, and their territories annexed by constituent theories. A single representation in terms of CT suffices. It is a purely bottom-up approach, in which all the properties of the system are nothing but the mathematical consequences of the constituent theories. The deductive and constructive approach is fruitful for small and simple systems, but it does not work for large and complex systems, because the combinatorial explosion generates overwhelming details and complexity, so that a bottom-up approach quickly gets lost among all the trees and undergrowth. In such cases the practical approach is first to get an aerial view of the forest as a whole, so that one will not lose his way when he eventually descends among the trees. The aerial view, which reductionism spurns, is crucial to synthetic analysis. To gain an aerial view you need a helicopter or an airplane, and that is where the synthetic conceptual frameworks of the probability calculus and dynamics come in. They accommodate bottom-up deduction, but guide it with a top-down view.

If you examine how the sciences of large composite systems work, you'll find that they do not put together constituents but take apart the systems they aim to understand. They do not take the parts for granted but analyze the whole to find the parts appropriate for the mechanism underlying specific properties of the whole. In short, their general theoretical approach is not constructive but analytic.

Synthetic analysis encompasses two perspectives, looking at the system on its own level and looking at it on the level of its constituents. It includes two kinds of explanations. Macroexplanations develop scientific concepts and theories for composite systems without mentioning their constituents. They delineate system properties, represent them precisely, and find the causal regularities and laws among them. Macroexplanations constitute the primary explanatory level of systems, and they enjoy a high degree of autonomy. Hydrodynamics and thermodynamics can operate on their own. However, for a full understanding of the systems, including their composition, macroexplanations are necessary but not sufficient. For this we also need microexplanations that connect the properties delineated in macroexplanations to the properties of the constituents. Microexplanation depends on macroexplanation, which first sets out what needs microexplanation. Thus thermodynamics and hydrodynamics, which provide macroexplanations, matured before the development of statistical mechanics, which provides microexplanations.

Microexplanations use mathematical deduction as much as possible, but they also depend on ample realistic approximations. They usually introduce their own postulates and assumptions that are not found in CT. For example, statistical mechanics has its own postulate of equal weight. Such extra postulates ensure the irreducibility of ST. Microexplanations use both ST and CT essentially. They explain system properties without explaining them away, as reductionism does. They not only find the micromechanisms underlying various macroscopic properties, they also explain how the large structures of the systems constrain the behaviors of individual constituents. They look at the whole causal structure spanning the system and the constituents from all angles, upward causation and downward causation alike, to get a comprehensive grasp of the complexity of composition.

In short, the actual scientific approach to complex systems does not reduce the theoretical framework but expands it to accommodate more perspectives, more postulates, more theoretical tools to filter out irrelevant microscopic details and define novel emergent macroscopic properties.

Unlike reductionism, which aspires to frame a theory of everything, synthetic analysis takes the combinatorial explosion seriously. It realizes that to encompass all the diversity in a single theory is impossible. Hence it is content to treat various aspects of complex systems one at a time. Thus the sciences employing synthetic analysis are often fragmented into a host of models. The fragmentation does not mean that a science does not have a unifying theme; usually it does. It means that the theme is not as firmly binding as the basic laws in elementary particle physics. Instead, it is articulated in general terms, more like a broad constitution than a code of law. Thus the models in a science are like the states of the USA, each with its own legislature and laws but all agreeing to the general principles of a constitution; no state is made the overlord of the rest, as reductionism would entail.

6. Two Theoretical Sciences of Complex Systems

The inaugural workshop for the Santa Fe Institute for Complexity Research was entitled "The Economy as an Evolving Complex System." It was organized by an economist and two condensed matter physicists. The mix should not be surprising, for economics and condensed matter physics share something in common. They are both sciences of mass phenomena; they both study many-body systems.

I think the name "many-body system" comes from physics, but it is abstract enough to be extrapolated to other areas. A many-body system is a system with many constituents belonging to a few types and interacting via a few types of relation. The typicality allows scientists to generalize and develop theories. Many-body systems are ubiquitous in the physical, ecological, political, and socioeconomic spheres. The modern individualistic and egalitarian society is a many-body system. Not surprisingly many-body theories are found in many sciences, including several branches of physics. Even before the sciences of mass phenomena, many-body systems had attracted attention; Hobbes's Leviathan and Leibniz's monadology both tried to frame theories for them.

A familiar many-body system is a solid, say a piece of gold. The atoms in the solid may decompose into ions and electrons. Neglecting internal structures, they have relatively simple properties, and the electromagnetic interaction among them is also well known. The solid has macroscopic properties such as strength, ductility, electric and thermal conductivities, thermal expansion coefficients.

The systems studied in theoretical economics are decentralized economies made up of millions of consumers and producers. Centrally planned economies are more suitable for structuralism and functionalism than for many-body theories, for the central planner has a controlling status, violating the requirement that the constituents of a many-body system be roughly similar in status. Real-life consumers and producers are more complicated than electrons and ions. In economic theories, however, they are grossly simplified and represented precisely by variables such as the consumer's taste and budget and the producer's plan, technology, and profit. Some theories also include partial information and knowledge. The consumers and producers interact via trade, contracts, and other commercial activities. Together they constitute a free-market economy with a certain resource allocation, national product, inflation, unemployment, and other macroeconomic properties familiar from economic news.

A common reaction to comparisons between physics and other sciences is that people and organisms are not all the same, they vary; and people change as they interact with each other. True, but electrons too vary, and electronic properties too change radically in different situations. The theoretical sciences, be they physical, biological, or social, generally consider types of entities, whose properties are represented by variables. Consumer taste, production plan, energy, and momentum are all variables, i.e., they can take on different values for different individuals, and the specific values are the properties of specific individuals. Of course, the values do not change arbitrarily but systematically according to some rules, and the rules of the variables specify the typical properties of different types of individuals, consumers versus producers, electrons versus ions.

In a many-body problem, we assume that we know the typical properties and relations of the constituents. We know how electrons typically move and repel one another, how consumers typically manage their budgets and trade with one another. In physics at least, the typical properties are extrapolated from the studies of small systems. They are usually well known and can be written down quite easily in terms of variables. We do not know the specific properties of individual constituents in the system; specific properties, such as how particular consumers fare in an economy, belong to the solution of the many-body problem. The central aim of many-body problems is the microexplanation that relates typical macroscopic properties of the systems to the typical properties of the constituents. As it turns out, this is a very difficult problem, and it often demands a total reformulation of the problem, i.e., casting the typical properties of the constituents in different forms.

7. Self-consistent independent-individual models

Consider a system consisting of a single type of constituent with typical property P and a single type of binary relation R. Although each binary relation connects only two individuals, each individual can engage in as many binary relations as there are partners. Generally, each constituent i in the system has property P_i and relation R_ij to every other constituent j. This forms a complicated relational network. Suppose we change a single constituent. The effect of its change is passed on to all its relational partners, which change accordingly and in turn pass their effects on to their relational partners. To track all the changes would be an intractable problem.

Relations make the problem difficult. The crudest approximation simply throws them away, but this usually does not work, for relations cement the constituents, ensure the integrity of the composite system, and make the whole more than the sum of its parts. To take account of relations without getting stuck in the relational network like a butterfly trapped in a spider's web, scientists reformulate the many-body problem.

In a widely used strategy, scientists analyze the system to find new constituents whose behaviors automatically harmonize with each other, so that they naturally fit together without explicit relations. In doing so, they divide the effects of the original relations into three groups. The first group of relational effects is absorbed into the situated properties of the newly defined constituents. The second group is fused into a common situation to which the new constituents respond. Whatever relational effects are not accounted for in these two ways are neglected. These steps transform a system of interacting constituents into a more tractable system of noninteracting constituents with situated properties, responding independently to a common situation jointly created by all.

In the reformulation of the problem, the original constituents are replaced by new entities with a typical situated property P* that is custom-made for a situation S. The constituents respond only to the situation and not to each other. The situation in turn is created by the aggregate behaviors of the situated entities. In the new formulation, the troublesome double indices in the relation are eliminated. There is only the single index indicating individual constituents. Once we find the typical situated property P* and the distribution of constituents having various values of the property, simple statistical methods enable us to sum them to obtain system properties. The result is the independent-individual model. It is akin to Leibniz's monadology, featuring a set of monads with no window to communicate with each other but with properties that automatically cohere into a harmony. The difference is that here the harmony, the situation S, is not pre-established by God but falls out self-consistently with the monadic properties P*.

Independent-particle approximations flourish in many branches of physics. They have a variety of names, one of which is self-consistent field theory, for the situation S takes the form of an effective field that is determined self-consistently with the particle properties. Another common name, for quantum-mechanical systems, is the Hartree-Fock approximation.
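A minimal sketch of the self-consistency loop (using the textbook mean-field, or Curie-Weiss, magnetization equation as a stand-in; the coupling J, coordination number z, and temperatures are illustrative, and this is not the full Hartree-Fock machinery): each spin responds independently to an effective field, and the effective field is in turn created by the average behavior of all the spins.

    import math

    # Solve m = tanh((J * z * m + h) / T) by iteration: the magnetization m is
    # the individuals' response to the effective field, and the effective field
    # J * z * m is created by the aggregate of those same responses.
    def self_consistent_m(T, J=1.0, z=4, h=0.0, tol=1e-10, max_iter=10_000):
        m = 0.5                                     # initial guess for the "situation"
        for _ in range(max_iter):
            m_new = math.tanh((J * z * m + h) / T)  # independent response to the field
            if abs(m_new - m) < tol:
                return m_new
            m = m_new
        return m

    # Below the mean-field critical temperature T_c = J * z the spins organize
    # spontaneously (m != 0); above it the only self-consistent solution is m = 0.
    for T in (2.0, 3.5, 4.5):
        print(f"T = {T}:  m = {self_consistent_m(T):.4f}")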

The self-consistent field theory has a close analog in microeconomics, the general equilibrium theory of the perfectly competitive market, commonly referred to as the Arrow-Debreu model. It has been the cornerstone of microeconomics for a long time, and many economists still take it to be so. In real life, people bargain and trade with each other, and corporations compete with each other through price wars and the like. However, these commercial interactions are nowhere to be found in perfectly competitive market models. Instead there is the market with a set of uniform commodity prices, which constitute the situation in my terminology. The constituents of the economy, consumers and producers, have their properties defined in terms of the prices. In economics, the property of an individual is literally what he possesses, and the prices determine the property by constraining the consumer's budget and the producer's profit. The consumers and producers do not bother with each other; all they see are the prices, according to which they decide what they want to buy or produce. They respond only to the market and to no one else. Now the commodity prices are not proclaimed by a central planner; they are determined by the equilibration of aggregate demand and supply, that is, by the properties of the consumers and producers. In this way individual properties and the common situation are determined self-consistently. And the economy is a Leibnizian world of windowless monads in a market-established harmony.
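A toy numerical sketch of prices falling out self-consistently (a one-good tatonnement caricature, not the Arrow-Debreu model itself; the linear demand and supply schedules and the adjustment step are invented for illustration): agents respond only to the posted price, and the price is adjusted until aggregate demand equals aggregate supply.

    # Toy market clearing by price adjustment.
    def demand(p):    # aggregate consumer demand at price p (illustrative schedule)
        return 100.0 - 4.0 * p

    def supply(p):    # aggregate producer supply at price p (illustrative schedule)
        return 20.0 + 6.0 * p

    p, step = 1.0, 0.05
    for _ in range(2000):
        excess = demand(p) - supply(p)   # excess demand at the current price
        if abs(excess) < 1e-9:
            break
        p += step * excess               # raise the price when demand exceeds supply
    print(f"market-clearing price: {p:.4f}, quantity traded: {demand(p):.4f}")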

The independent-individual approximation has its limitations, but it is powerful and versatile. The Hartree-Fock approximation is still a workhorse in physics; in many complex cases, it is the only manageable approximation, although physicists know that it is not satisfactory. I think the independent-individual approximation will spread to other social sciences. Families and communities are collective unities; as they break up and people are individually drawn into the impersonal media of television and the internet, our society becomes more and more suitable for independent-individual models. To interpret the models correctly, we must remember that the properties of the rugged individuals have already internalized much social interaction, and that their situation is not externally imposed but endogenously determined.

8. Models of many-body systems

The perfectly competitive market theory in economics and the self-consistent field theory in physics belong only to the simplest class of models. There are other classes of models for systems with more texture and more tightly integrated constituents. Going from independent individuals through collective phenomena to emergent properties, the systems become more and more tightly integrated, and their theoretical treatments become more difficult. In both sciences, independent-individual models matured first. The Hartree-Fock approximation in physics was introduced in the late 1920s, shortly after the advent of quantum mechanics. Although phase transitions are familiar, their microexplanations did not take off until the early 1970s, with the introduction of the renormalization group. A similar progression occurred in economics. The Arrow-Debreu model in microeconomics was developed in the early 1950s. Although von Neumann and Morgenstern introduced game theory into economics in 1944, its application to information economics and industrial organization mushroomed only in the last twenty years. Emergent properties such as endogenous growth and structural unemployment are still at the frontier of research. The three waves are not merely academic fashions. The models address different types of objective phenomena, and their successive appearance reflects how science proceeds step by step to confront more and more difficult problems.

I cannot go into these models, but will use phase transition as an example to sum up the major points discussed earlier.

9. Synthetic analysis of phase transition

The transformations of solid into liquid and liquid into gas are not the only phase transitions. Another example is the transformation of iron from a paramagnetic phase to a ferromagnetic phase, where it becomes a bar magnet. A third example is the structural transformation in binary alloys such as brass. Phase transitions occur only in large systems; a few H2O molecules constitute neither a solid nor a liquid, not to mention a transformation between them.

Melting and evaporation are familiar, and intuitively we know they must involve some radical rearrangement of atoms. However, physicists did not write down the quantum-mechanical equations for a bunch of H2O molecules and try to deduce phase transitions; brute-force reductionism does not work. The intuitive notion of phase transition is too vague to give hints about what one should look for in the jungle of the combinatorial explosion. To understand phase transitions, physicists started from macroexplanations in thermodynamics. They systematically studied the macroscopic behaviors of all kinds of phase transitions and introduced concepts such as the order parameter and broken symmetry to represent the causal regularities in the synoptic view. They discovered that as the systems approach their phase-transition temperatures, their thermodynamic variables change in similar ways that can be characterized by certain parameters called critical exponents. Furthermore, the critical exponents of different thermodynamic variables are related by simple laws called scaling laws. Most interestingly, the critical exponents and scaling laws are universal. Systems as widely different as liquid-gas and paramagnetic-ferromagnetic transitions share the same exponents and laws. Never mind the technical details; just notice the contribution of macroexplanations. They make the notion of phase transition precise by bringing out the important features that call for microexplanation: critical exponents and scaling laws. Furthermore, they offer a clue about what to look for: universality implies that the details of microscopic mechanisms are irrelevant on the large scale. Why?
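To indicate what these quantities look like (standard textbook forms, written in LaTeX notation, not spelled out in the talk): near the critical temperature T_c the order parameter, specific heat, and susceptibility behave as

    M \sim (T_c - T)^{\beta}, \qquad C \sim |T - T_c|^{-\alpha}, \qquad \chi \sim |T - T_c|^{-\gamma},

and one standard scaling (Rushbrooke) relation among the exponents is

    \alpha + 2\beta + \gamma = 2 .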

Microexplanations that answer the questions posed by macroexplanations appeared in the early 1970s with the renormalization group. They explain how the peculiarities of the microscopic mechanisms are screened out as we systematically proceed to more and more coarse-grained views.
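A minimal sketch of the coarse-graining idea (a block-spin caricature in Python, conveying the intuition rather than the actual renormalization-group machinery; the chain length and block size are arbitrary): group a chain of +1/-1 spins into blocks of three and replace each block by its majority sign. Repeating the step washes out short-range detail while retaining the large-scale pattern.

    import random

    # Majority-rule block-spin coarse graining of a 1-D chain of +/-1 spins.
    def coarse_grain(spins, block=3):
        out = []
        for i in range(0, len(spins) - block + 1, block):
            out.append(1 if sum(spins[i:i + block]) > 0 else -1)
        return out

    random.seed(1)
    chain = [random.choice((1, -1)) for _ in range(81)]
    for level in range(4):
        m = sum(chain) / len(chain)
        print(f"level {level}: {len(chain)} spins, magnetization {m:+.2f}")
        chain = coarse_grain(chain)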

In sum, to understand a class of familiar phenomena, a macroscopic science was first developed to represent their causal regularities clearly; then a microexplanation was developed to find the underlying mechanisms. The irrelevance of microscopic details and the universality of macroscopic laws justify the autonomy of macroexplanations. On the other hand, the microexplanations cement the federal unity of the two descriptive levels. In phase transitions we have a classic example of synthetic analysis, of how theoretical science tackles complex systems.