Empirical Validation and Verification
of Agent-Based Models
- Last Updated: 13 May 2017
- Site Maintained By:
- Leigh Tesfatsion
- Professor of Economics and Mathematics
- Iowa State University
- Ames, Iowa 50011-1070
tesfatsi AT iastate.edu
- Table of Contents:
- How to undertake empirical validation for an agent-based model; that is, how to ensure that an agent-based model is consistent with empirical data?
- Consider the following famous quote from G.E.P. Box (1987, p. 424): "Essentially, all models are wrong, but some are useful." This quote leaves open the following critical ambiguity: useful for what? Must the specific intended use of a model be known before meaningful empirical validation can proceed?
Do the following four validation aspects, considered as a whole, provide a complete and comprehensive definition of empirical validation? Should agent-based modelers strive to achieve all four aspects?
- Input Validation: Are the exogenous inputs for the model (e.g., functional forms, random shock realizations, data-based parameter estimates, and/or parameter values imported from other studies) empirically meaningful and appropriate for the purpose at hand?
- Process Validation: How well do the physical, biological, institutional, and social processes represented within the model reflect real-world aspects important for the purpose at hand? Are all process specifications consistent with essential scaffolding constraints, such as physical laws, stock-flow relationships, and accounting identities?
- Descriptive Output Validation: How well are model-generated outputs able to capture the salient features of the sample data used for model identification? (in-sample fitting)
- Predictive Output Validation: How well are model-generated outputs able to forecast distributions, or distribution moments, for sample data withheld from model identification or for new data acquired at a later time? (out-of-sample forecasting)
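The contrast between the last two aspects can be made concrete with a deliberately trivial moment-matching sketch in Python (the data and the "model" below are hypothetical illustrations, not taken from any study cited on this page):

```python
import statistics

# Hypothetical observed time series; the last three points are withheld
# from model identification to permit predictive output validation.
observed = [1.1, 0.9, 1.4, 1.0, 1.2, 0.8, 1.3, 0.9, 1.6, 1.4]
identification_sample, withheld = observed[:7], observed[7:]

# "Identify" a deliberately trivial model: forecast the identification-sample mean.
model_forecast = statistics.mean(identification_sample)

# Descriptive output validation: how well does the model capture the
# sample data used for identification? (zero error here by construction)
in_sample_error = abs(model_forecast - statistics.mean(identification_sample))

# Predictive output validation: how well does the model forecast the
# withheld data? A model can fit in-sample well yet forecast poorly.
out_of_sample_error = abs(model_forecast - statistics.mean(withheld))
print(in_sample_error, out_of_sample_error)
```

The point of the sketch is only the bookkeeping: descriptive validation scores the model against the identification sample, while predictive validation scores it against data the identification step never saw.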
- How can agent-based researchers provide summary reports of empirical validation results to other researchers and to intended model users in an accurate, compelling, and clear manner? What to do, in particular,
when outcome distributions rather than point predictions are the main findings of a study?
- How can agent-based researchers ensure robustness of empirical validation findings, i.e., how can they ensure that their model outcomes reflect persistent aspects of real-world systems under study rather than an overfitting of model parameters to temporary aspects of these systems?
- How can agent-based researchers ensure the accumulation of empirically supported findings?
- How to undertake verification for an agent-based model in computational form; that is, how to ensure that computations are being carried out in the way the modeler intends?
- Some agent-based researchers are engaging in iterative participatory modeling (IPM) efforts in which researchers join with stakeholders in a repeated looping through a four-stage modeling process: field study and data analysis; role-playing games; model development and implementation; and computational experiments. How might IPM contribute to the empirical validation of agent-based models?
Introductory Materials on Empirical Validation
- Ken Binmore and Avner Shaked, "Experimental Economics: Where Next?"
Journal of Economic Behavior and Organization 73(1), January 2010, 87-100. See also the
reply by Fehr and Schmidt (139KB)
reply by Eckel and Gintis (135KB)
- Abstract: Binmore and Shaked provide a controversial, scathing review of some well known human-subject experimental economics work by Ernst Fehr, Klaus Schmidt, and their collaborators. They conclude that experimental economists need to eliminate practices "that would be considered unscientific in other disciplines." Since researchers using agent-based modeling tools share many issues in common with human-subject experimenters (e.g., the need to construct rigorous informative experimental designs), it is strongly recommended that they carefully consider both the criticisms raised in this article and the rebuttal to these criticisms given by Fehr/Schmidt and Eckel/Gintis.
- George E.P. Box and Norman R. Draper, Empirical Model Building and Response Surfaces, 1987, New York: John Wiley & Sons.
The source (page 424) of Box's famous quote in the following specific form: "Essentially, all models are wrong, but some are useful." Box made many similar pronouncements in earlier work.
- Kathleen M. Carley, "Validating Computational Models"
Working Paper, Carnegie Mellon University, September 1996.
- Abstract: This paper provides many theoretical and practical insights regarding the empirical validation of computational models for social and organizational systems.
- Catherine C. Eckel and J. Barkley Rosser, Jr. (Guest Editors), "Issues in the Methodology of Experimental Economics", Journal of Economic Behavior and Organization (JEBO) Special Issue, 73(1), January 2010.
- Joshua M. Epstein, "Agent-Based Computational Models and Generative Social Science",
Complexity, Vol. 4, No. 5, May/June 1999, 41-60.
- Abstract: The author argues that the agent-based computational model is a new tool for empirical research. The scientific enterprise is, first and foremost, explanatory, and the central contribution of agent-based modeling is the facilitation of generative explanation. To explain an observed macroscopic social regularity, one must show how a population of cognitively plausible agents, interacting under plausible rules, could actually arrive at the pattern on time scales of interest. In short, "if you didn't grow it, you didn't explain it."
- Giorgio Fagiolo, Christopher Birchenhall, and Paul Windrum (Eds.), Special Issue on "Empirical Validation in Agent-Based Models", Computational Economics, Volume 30, Number 3, 2007.
- Abstract: The papers included in this special issue explore the empirical validation of agent-based models (ABMs). They demonstrate that this issue is far from settled and features many difficult but exciting
challenges. Some are rooted in well-known and still-open problems of scientific methodology that concern empirical validation regardless of the particular family of models under scrutiny. Others are strongly related to aspects particular to ABMs.
- Richard Feynman, Cargo Cult Science
From a Caltech commencement address given in 1974. Published in R. Feynman, "Surely You're Joking, Mr. Feynman!", W. W. Norton & Company, 1985.
- Abstract: "...underneath all the merriment simmers a running commentary on what constitutes authentic knowledge: learning by understanding, not by rote; refusal to give up on seemingly insoluble problems; and total disrespect for fancy ideas that have no grounding in the real world."
- Sergio Mario Focardi, Is Economics an Empirical Science? If Not, Can It Become One?
Frontiers in Applied Mathematics and Statistics 1:7, 2015 (electronic), doi:10.3389/fams.2015.00007
- Abstract: "Today’s mainstream economics, embodied in Dynamic Stochastic General Equilibrium (DSGE) models, cannot be considered an empirical science in the modern
sense of the term: it is not based on empirical data, is not descriptive of the real-world economy, and has little forecasting power. In this paper, I begin with a review of the
weaknesses of neoclassical economic theory and argue for a truly scientific theory based on data, the sine qua non of bringing economics into the realm of an empirical science.
But I suggest that, before embarking on this endeavor, we first need to analyze the epistemological problems of economics to understand what research questions we can reasonably ask our theory to address. I then discuss new approaches which hold the
promise of bringing economics closer to being an empirical science. Among the approaches discussed are the study of economies as complex systems, econometrics and econophysics, artificial economics made up of multiple interacting agents as well as attempts being made inside present mainstream theory to more closely align the theory with the real world."
- Marco Janssen and Elinor Ostrom organized a workshop on
Empirically-Based Agent-Based Models
at Indiana University (Bloomington, IN), held June 2-4, 2005.
- Abstract: The purpose of this workshop was to bring together a group of researchers who are using agent-based models (ABMs) to study empirically a variety of social-ecological and economic systems. A major goal was to derive a state-of-the-art synthesis of the various approaches currently being used to test ABMs. Another goal of the workshop was to examine the use of experimental and survey techniques for validating the assumptions made by ABM researchers regarding the internal learning and decision-making behavior of computationally modeled agents. Presentation slides for all workshop presentations can be freely accessed at the above workshop site.
See, in particular, the guest editorial of the same title by Marco A. Janssen and Elinor Ostrom subsequently published in Ecology and Society 11(2), 2006, also available at the above site.
- Katerina Juselius, "Special Issue on Using Econometrics for Assessing Economic Models - An Introduction"
Economics: The Open-Access, Open-Assessment E-Journal, Vol. 3, 2009-28, June 18, 2009.
"Two methodological approaches to empirical economics which are labelled ‘theory first’ versus ‘reality first’ are introduced, building the background for the discussion of the individual contributions to this special issue."
- Charles M. Macal,
Keynote Address, MABS 2013, 14th International Workshop on Multi-Agent-Based Simulation, May 7, 2013.
- Abstract: This tutorial addresses three questions: (1) How should we think about model validation? (2) How should we do model validation? (3) What should we do about model validation?
- Robert E. Marks,
"Validating Simulation Models: A General Framework and Four Applied Examples"
Computational Economics 30(3), 2007, 265-290.
This paper provides a framework for discussing the empirical validation of
simulation models of market phenomena, in particular of agent-based computational
economics models. Such validation is difficult, perhaps because of their complexity;
moreover, simulations can prove existence, but not in general necessity. The paper
highlights the Energy Modeling Forum’s benchmarking studies as an exemplar for
simulators. A market of competing coffee brands is used to discuss the purposes and
practices of simulation, including explanation. The paper discusses measures of
complexity, and derives the functional complexity of an implementation of Schelling’s
segregation model. Finally, the paper discusses how courts might be convinced to trust
simulation results, especially in merger policy.
- Robert E. Marks, "Analysis and Synthesis: Multi-Agent Systems in the Social Sciences",
Knowledge Engineering Review, 27(1), 2012.
Although they flow from a common source, the uses of multi-agent systems (or "agent-based
computational systems") vary between the social sciences and computer science. The
distinction can be broadly summarised as analysis versus synthesis, or explanation versus design.
I compare and contrast these uses, and discuss sufficiency and necessity in simulations in general
and in multi-agent systems in particular, with a computer science audience in mind.
- Robert E. Marks, "Tutorial: Validating Simulation Models, and Multi-Agent Systems in the Social Sciences"
CIFEr Singapore, IEEE SSCI 2013, April 19, 2013.
- Abstract: This tutorial presentation discusses a number of key issues raised in the author's KER article, op. cit.
- Robert E. Marks, "Validation and Model Selection: Three Similarity Measures Compared"
Complexity Economics, Vol. 2, May 2013, 41-61.
"There are two types of simulation models: Demonstration models, essentially existence proofs for phenomena of interest, and Descriptive models, that attempt to track dynamic historical
phenomena. Both types require verification. Descriptive models require validation against historical data as well. More broadly, we can think of a process of choosing the "best" of
several models. This paper examines three measures of the similarity of two sets of vectors, here time series. The best known but flawed is the Kullback-Leibler information-theoretic construct. A second measure is what I have called the State Similarity Measure. The third measure is a set-theoretic measure of similarity, the Generalized Hartley Metric. For illustration, we use data from a dynamic simulation model of historical brand rivalry."
- Donald E. Stokes, Pasteur's Quadrant: Basic Science and Technological Innovation, Brookings Institution Press, 1997.
"Stokes begins with an analysis of the goals of understanding and use in scientific research. He recasts the widely accepted view of the tension between understanding and use, citing as a model case the fundamental yet use-inspired studies by which Louis Pasteur laid the foundations of microbiology a century ago. ... On this revised, interactive view of science and technology, Stokes builds a convincing case that by recognizing the importance of use-inspired basic research we can frame a new compact between science and government."
- Klaus G. Troitzsch,
"Validating Simulation Models"
Proceedings of the 18th European Simulation Multiconference, SCS Europe, 2004.
- Abstract: This paper discusses aspects of validating simulation models designed to describe, explain, and predict real-world phenomena.
- Tamás Vicsek, "Complexity: The Bigger Picture"
Nature, Vol. 418, 11 July 2002, p. 131.
In this short essay, Vicsek describes how computer simulation fits
into the scientific enterprise. The goal is to "capture the principal laws
behind the exciting variety of new phenomena that become apparent when the
many units of a complex system interact."
- Robert Wallace, Amy Geller, and V. Ayano Ogawa (Eds.), Assessing the Use of Agent-Based Models for Tobacco Regulation, 2015 Report, Institute of Medicine of the National Academies, Washington, D.C. Available by title search from the
National Academies Press.
- Note: For a thoughtful constructive discussion of presentation and evaluation issues for agent-based models developed for policy use, see:
(i) Chapter 3: "Building Effective Models to Guide Policy Decision Making"
(ii) Chapter 4: "An Evaluation Framework for Policy-Relevant Agent-Based Models"
and (iii) Chapter 6: "Data and Implementation Needs for Computational Modeling for Tobacco Control"
- Paul Windrum, Giorgio Fagiolo, and Alessio Moneta,
"Empirical Validation of Agent-Based Models: Alternatives and Prospects",
Journal of Artificial Societies and Social Simulation, Vol. 10, No. 2, Article 8, March 31, 2007.
- Abstract: This paper addresses a set of methodological problems arising in the empirical validation of agent-based (AB) economics models and discusses how these are currently being tackled. These problems are generic for all those engaged in AB modelling, not just economists. The discussion is therefore of direct relevance to JASSS readers. The paper has two objectives. The first objective is the identification of a set of issues that are common to all modellers engaged in empirical validation. This gives rise to a novel taxonomy that captures the relevant dimensions along which AB modellers differ. The second objective is a focused discussion of three alternative methodological approaches being developed in AB economics -- indirect calibration, the Werker-Brenner approach, and the history-friendly approach -- and a set of (as yet) unresolved issues for empirical validation that require future research.
Introductory Materials on Verification
- Robert Axelrod, "Advancing the Art of Simulation in the Social Sciences"
University of Michigan, August 2003.
- Abstract: The author offers advice for doing social
science simulation research, focusing on the programming of a simulation
model, analyzing the results, and sharing the results with others.
- Robert Axtell, Robert Axelrod, Joshua M. Epstein, and Michael D. Cohen,
"Aligning Simulation Models: A Case Study and Results"
Computational and Mathematical Organization Theory, Vol. 1, Number 1, 1996, pp. 123-141.
- Abstract: This article develops the concepts and methods of an alignment process ("docking") for computational models. Alignment is used to determine whether two models can produce the same outcomes, which in turn is the basis for critical experiments and tests to determine whether one model can subsume the other. These concepts and methods are illustrated using a model of cultural transmission due to Robert Axelrod and the Sugarscape model developed by Epstein and Axtell.
- Kent Beck, Test-Driven Development: By Example, Addison-Wesley Professional, MA, 240pp., 2002. ISBN: 0-321-14653-0.
- Abstract: This text discusses a software engineering methodology
for code verification called Test-Driven Development that derives from the Extreme Programming (XP)
approach to programming. The basic idea is to write short pieces of code ("unit tests") in parallel with segments
of regular program code that test these segments to ensure they are running properly. (One example discussed by Beck in the first section of his book is the JUnit facility for Java.) Each time the regular
program code is modified, the unit tests are rerun to help ensure that the modification has not introduced bugs into the existing code.
Unit testing would appear to be particularly important for the kind of iterative program development common in agent-based
modeling due to the complexity of the systems under study.
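A minimal sketch of the idea applied to an agent-based setting, using Python's built-in unittest module in place of JUnit (the toy exchange rule and its invariants are hypothetical illustrations): each unit test pins down a model invariant, such as conservation of total wealth, so that a later code change that silently breaks the invariant is caught on the next test run.

```python
import unittest

def exchange(wealths, payer, payee, amount):
    """One pairwise trade: `payer` transfers `amount` to `payee`.
    The payment is capped at the payer's wealth so balances stay non-negative."""
    paid = min(amount, wealths[payer])
    out = list(wealths)
    out[payer] -= paid
    out[payee] += paid
    return out

class TestExchange(unittest.TestCase):
    def test_total_wealth_conserved(self):
        # Accounting identity: trades must not create or destroy wealth.
        before = [10.0, 5.0, 3.0]
        after = exchange(before, 0, 2, 4.0)
        self.assertEqual(sum(before), sum(after))

    def test_no_negative_balances(self):
        # A payer cannot be driven below zero wealth.
        after = exchange([1.0, 0.0], 0, 1, 5.0)
        self.assertTrue(all(w >= 0 for w in after))

# Run the tests programmatically (a build system would rerun these
# on every modification of the model code).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExchange)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```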
- José Manuel Galán, Luis R. Izquierdo, Segismundo S. Izquierdo, José Ignacio Santos, Ricardo del Olmo, Adolfo López-Paredes, and Bruce Edmonds, Errors and Artefacts in Agent-Based Modelling
Journal of Artificial Societies and Social Simulation 12(1)1, 2009.
- Abstract: "The objectives of this paper are to define and classify different types of errors and artefacts that can appear in the process of developing an agent-based model, and to propose activities aimed at avoiding them during the model construction and testing phases. To do this in a structured way, we review the main concepts of the process of developing such a model – establishing a general framework that summarises the process of designing, implementing, and using agent-based models. Within this framework we identify the various stages where different types of errors and artefacts may appear. Finally we propose activities that could be used to detect (and hence eliminate) each type of error or artefact."
- Kenneth L. Judd,
"Computationally Intensive Analyses in Economics",
in Leigh Tesfatsion and Kenneth L. Judd (editors),
Handbook of Computational Economics, Vol. 2: Agent-Based Computational Economics
(Table of Contents & Abstracts),
Handbooks in Economics Series, North-Holland, Amsterdam, 2006.
- Abstract: Computer technology presents economists with new tools but also raises novel methodological issues. This essay discusses the challenges faced by computational researchers and proposes some responses to these challenges.
- J. Gary Polhill, Luis R. Izquierdo, and Nicholas M. Gotts, "The Ghost in the Model (and Other Effects of Floating Point Arithmetic)"
Journal of Artificial Societies and Social Simulation (JASSS), Vol. 8, No. 1, January 31, 2005, electronic journal.
This paper explores the effects of errors in floating point arithmetic in two published agent-based models: the first a model of land use change (Polhill et al. 2001; Gotts et al. 2003), the second a model of the stock market (LeBaron et al. 1999). The first example demonstrates how branching statements with floating point operands of comparison operators create a high degree of nonlinearity, leading in this case to the creation of 'ghost' agents -- visible to some parts of the program but not to others. A potential solution to this problem is proposed. The second example shows how mathematical descriptions of models in the literature are insufficient to enable exact replication of work since mathematically equivalent implementations in terms of real number arithmetic are not equivalent in terms of floating point arithmetic.
Additional Remarks on Floating Point Arithmetic and Agent-Based Models:
- When doing arithmetic on a computer, the uncountably infinite set of
real numbers must somehow be squeezed into a discrete set of isolated numbers
represented in binary floating-point format. The resulting floating-point
errors can lead to some unpleasant surprises for the unwary. For example,
floating-point addition does not obey the associative law: (0.1 + 0.2)
+ 0.3 can fail to equal 0.1 + (0.2 + 0.3). Moreover, summing 0.05 twenty
times can yield a total that differs from 1. In both cases the problem can
be traced to the fact that simple-looking base-10 numbers such as 0.1 are not
exactly representable in binary floating-point format because they correspond
to infinitely repeating binary numbers.
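Both effects are easy to reproduce in any language with IEEE-754 doubles; the sketch below uses Python (math.fsum and math.isclose are standard-library tools, mentioned here as general remedies rather than anything drawn from the papers above):

```python
import math

# Associativity fails: the two groupings round differently.
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
print(a == b)           # False

# Repeated addition of an inexactly representable value drifts:
total = sum(0.05 for _ in range(20))
print(total)  # need not be exactly 1.0

# When branching on floating-point values (the source of 'ghost' agents
# in the Polhill et al. paper), compare with a tolerance instead of ==.
print(math.isclose(total, 1.0))  # True

# math.fsum tracks the intermediate rounding error and returns the
# correctly rounded sum of its inputs.
print(math.fsum(0.1 for _ in range(10)) == 1.0)  # True
print(sum(0.1 for _ in range(10)) == 1.0)        # False
```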
- Researchers at the James Hutton Institute (previously the Macaulay Land Use Research Institute) in
Aberdeen, Scotland, have been conducting a number of important and interesting studies to investigate the effects of floating-point errors in agent-based models implemented in FEARLUS. A summary of these efforts, with pointers to related papers and demos, can be accessed
(See also the above paper by Polhill et al.) Based on their work to date, the researchers conclude that floating-point errors are not likely to be of major importance if a model does not perform many operations and if it does not contain branching statements. However, on a personal note, I would like to warn about the need to be careful also about imported utilities such as pseudo-random number generators. In my ISU electricity research group in 2005 we discovered that a major Java source file (EmpiricalWalker.java) in the well-known and frequently imported cern.jet.random package is disastrously susceptible to floating-point addition errors.
- Uri Wilensky and William Rand, "Making Models Match: Replicating an Agent-Based Model"
Journal of Artificial Societies and Social Simulation, Vol. 10, No. 4(2), 2007 (electronic journal).
- Abstract: The authors attempt to replicate a classic agent-based model from political science developed by R. Axelrod and R. A. Hammond (2003). They detail their effort to replicate this model and the challenges that arose in recreating the model and in determining if the replication was successful. They conclude by discussing issues for (1) researchers attempting to replicate models and (2) researchers developing models in order to facilitate the replication of their results.
Additional Readings on Empirical Validation and Verification
- Simone Alfarano, Friedrich Wagner, and Thomas Lux, "Estimation of Agent-Based Models: The Case of an Asymmetric Herding Model"
Computational Economics, Vol. 26, 2005, 19-49.
- Abstract: The authors introduce a simple agent-based model of herding behavior in financial markets in which the ubiquitous stylized facts of financial returns (fat tails, volatility clustering) are emergent properties of the interaction among the traders. The simplicity of the model permits the authors to estimate the underlying parameters, since it is possible to derive a closed form solution for the distribution of returns.
- Osman Balci, "Verification, Validation, and Testing"
Chapter 10 in J. Banks (Ed.), The Handbook of Simulation, John Wiley & Sons, New York, NY, 1998, 335-393.
The purpose of this chapter is to present principles and techniques for the assessment of accuracy throughout the life cycle of a simulation study. The accuracy quality characteristic is
assessed by conducting verification, validation and testing.
- Osman Balci, "Verification, Validation, and Accreditation"
in D.J. Medeiros, E.F. Watson, J.S. Carson, and M.S. Manivannan (Eds.), Proceedings of the 1998 Winter Simulation Conference, 41-48.
"This paper presents guidelines for conducting verification,
validation, and accreditation (VV&A) of modeling and simulation (M&S)
applications. Fifteen guiding principles are introduced to
help the researchers, practitioners and managers better
comprehend what VV&A is all about. The VV&A activities
are described in two M&S life cycles. Applicability
of 77 V&V techniques is shown for the major stages of
the two M&S life cycles. A methodology for accreditation
of M&S applications is briefly introduced."
- Federico E. Bert, Santiago L. Rovere, Charles M. Macal, Michael J. North, and Guillermo P. Podestá, "Lessons from a Comprehensive Validation of an Agent-Based Model: The Experience of the Pampas Model of Argentinean Agricultural Systems"
Ecological Modelling 273 (2014), 284-298.
"There are few published examples of comprehensively validated large-scale land-use agent-based models (ABMs). We present guidelines for doing so, and provide an example in the context of the Pampas Model (PM), an ABM aimed to explore the dynamics of structural and land use changes in the agricultural
systems of the Argentine Pampas. Many complementary strategies are proposed for validation of ABM's. We adopted a validation framework that relies on two main streams: (a) validation of model processes and components during model development, which involved a literature survey, design based on similar models, involvement of stakeholders, and focused test scenarios; and (b) empirical validation, which involved comparisons of model outputs from multiple realistic simulations against real world data."
- Carlo Bianchi, Pasquale Cirillo, Mauro Gallegati, and Pietro A. Vagliasindi, "Validation and Calibration in ACE Models: An Investigation of the CATS Model"
Working Paper, Universitá di Parma, Italy, May 1, 2005.
- Abstract: This paper discusses validation experiments performed with the CATS model proposed by Gallegati et al. (2003, 2004), a model of a dynamic economy comprising many firms and banks. Starting from a sample of Italian firms included in the AIDA database, the authors perform several ex-post validation experiments for the CATS model over the simulation period 1996-2001. The simulation findings are then ex-post validated against actual data using several alternative validation techniques, with results the authors conclude are quite promising.
- H. Peter Boswijk, Cars H. Hommes, and Sebastiano Manzan, Behavioral Heterogeneity in Stock Prices
CeNDEF Working Paper 05-12, University of Amsterdam, 2005.
- Abstract: The authors estimate a dynamic asset pricing model characterized by heterogeneous boundedly rational agents using US stock price data from 1871 until 2003. The estimation results support the existence of two expectations regimes: a "fundamentalist regime" in which agents believe in mean reversion of stock prices towards the benchmark fundamental value, and a "chartist trend-following regime" in which agents expect the deviations from the fundamental to trend.
- Daniel G. Brown, Scott Page, Rick Riolo, Moira Zellner, and William Rand, "Path Dependence and the Validation of Agent-Based Spatial Models of Land Use"
International Journal of Geographical Information Science, Vol. 19, No. 2, February 2005, 153-174. A related
slide presentation can be viewed
- Abstract: The authors identify two distinct notions of accuracy of land-use models -- predictive (output) accuracy and process (input) accuracy -- and highlight the tension between them. To balance these two potentially conflicting motivations, they introduce the concepts of an invariant region (where land-use type is almost certain and thus path independent) and a variant region (where land-use depends on a particular series of events and is therefore path dependent). They demonstrate their methods using an agent-based land-use model based on multi-temporal land-use data collected for Washtenaw County, Michigan, USA. They conclude that their methods can help researchers improve their ability to communicate how well their models perform, the situations or instances in which their models do not perform well, and the cases in which their models are unlikely to predict well due to either path dependence or stochastic uncertainty.
- Joanna J. Bryson, Yasushi Ando, and Hagen Lehmann, "Agent-Based Modelling as Scientific Method: A Case Study Analysing Primate Social Behaviour",
Philosophical Transactions of the Royal Society, B -- Biology, 362(1485):1685-1698, Sept 2007.
- Abstract: For a methodology to be useful to science, it should provide two things: first, a means of explanation, and second, a mechanism for improving that explanation. Agent-Based Modelling (ABM) is a method that facilitates exploring the collective effects of individual action selection. The explanatory force of the model is the extent to which an observed meta-level phenomenon can be accounted for by the behaviour of its micro-level actors. This article demonstrates that this methodology can be applied to the biological sciences, and that agent-based models, like any scientific hypotheses, can be tested, critiqued, generalised, or specified. We review the state of the art for ABM as a methodology for biology. We then present a case study based on the most widely-published agent-based model in the biological sciences: Hemelrijk's DomWorld, a model of primate social behaviour.
- John Duffy, "Agent-Based Models and Human-Subject Experiments",
in Leigh Tesfatsion and Kenneth L. Judd (editors),
Handbook of Computational Economics, Vol. 2: Agent-Based Computational Economics
(Table of Contents & Abstracts),
Handbooks in Economics Series, North-Holland/Elsevier, Amsterdam, 2006.
This chapter examines the relationship between agent-based
modeling and economic decision-making experiments with human
subjects. Both approaches exploit controlled "laboratory" conditions
as a means of isolating the sources of aggregate phenomena. Research
findings from laboratory studies of human subject behavior have
inspired studies using artificial agents in "computational
laboratories" and vice versa. In certain cases, both methods have
been used to examine the same phenomenon. The focus of this chapter
is on the empirical validity of agent-based modeling approaches in
terms of explaining data from human subject experiments. We also
point out synergies between the two methodologies that have been
exploited as well as promising new possibilities.
- Ulrich Frank and Klaus G. Troitzsch (Guest Editors), "Epistemological Perspectives on Simulation",
Journal of Artificial Societies and Social Simulation (JASSS), Volume 8, Issue 4, October 2005.
- Remark: The eight articles in this JASSS special section were originally presented at a workshop of the same title held in July 2004 at the University of Koblenz. A number of the articles address the empirical validation of computer simulation models, including, in particular, the empirical validation of agent-based computer simulations. The contribution by Küppers and Lenhard seems particularly useful in clarifying the differences and common grounds between computer simulation validation in the social and natural sciences.
- Manfred Gilli and Peter Winker, "A Global Optimization Heuristic for Estimating Agent-Based Models"
Computational Statistics and Data Analysis, Vol. 42, 2003, 299-312. See also the
Talk Slides (pdf,359KB)
by Winker, Gilli, and Jeleskovic titled "An Objective Function for Simulation Based Inference on Exchange Rate Data," presented at CEF'06, Limassol, Cyprus, June 23, 2006.
- Abstract: The authors introduce a global optimization algorithm for a "goodness of fit" objective function arising from the simulation-based indirect estimation of the parameters of agent-based financial market models. As an illustration of the algorithm, the authors report parameter estimation results for a specific agent-based model of the DM/US-$ foreign exchange market.
- Jakob Grazzini and Matteo Richiardi, "Consistent Estimation of Agent-Based Models by Minimum Distance"
Working Paper No. 130, Laboratorio R. Revelli, Collegio Carlo Alberto, March 2013.
"Two difficulties arise in the estimation of AB (Agent-Based) models: (i) the criterion function has no simple analytical expression, and (ii) the aggregate properties of the model cannot be analytically understood. The first one calls for
simulation-based estimation techniques; the second requires additional statistical testing in order to ensure that the simulated quantities are consistent estimators of the
theoretical quantities. The possibly high number of parameters involved and the non-linearities in the theoretical quantities used for estimation add to the complexity of the problem. As these difficulties are also shared, though to a different extent, by DSGE models, we first look at the lessons that can be
learned from this literature. We identify simulated minimum distance (SMD) as a practical approach to estimation of AB models, and we discuss the conditions which ensure consistency of SMD estimators in AB models."
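- Illustration: The simulated minimum distance idea can be sketched in a few lines: simulate the model at candidate parameters using common random numbers, compute moments of the simulated output, and minimize their distance to the observed moments. The toy AR(1)-style "model", the choice of moments, and the identity weighting matrix below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of simulated minimum distance (SMD) estimation.
# The "agent-based model" is a deliberately trivial stand-in.
import numpy as np
from scipy.optimize import minimize

def simulate_model(theta, n=2000, seed=0):
    """Toy stand-in for an AB model: returns a simulated time series."""
    rho, sigma = theta
    rng = np.random.default_rng(seed)   # common random numbers across evaluations
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + sigma * rng.standard_normal()
    return x

def moments(x):
    """Moments used for matching: variance and first-order autocorrelation."""
    return np.array([x.var(), np.corrcoef(x[:-1], x[1:])[0, 1]])

# "Observed" data generated at known parameters so recovery can be checked.
true_theta = (0.7, 1.0)
m_obs = moments(simulate_model(true_theta, seed=42))

def smd_objective(theta):
    # Quadratic distance between simulated and observed moments
    # (identity weighting matrix, a common simple choice).
    d = moments(simulate_model(theta)) - m_obs
    return d @ d

result = minimize(smd_objective, x0=[0.5, 0.5], method="Nelder-Mead")
print(result.x)   # rho should land near 0.7, sigma near 1.0 up to sign
```

Because the seed is held fixed inside `simulate_model`, the objective is a deterministic, smooth function of the parameters, which is what makes standard optimizers usable here.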
- Volker Grimm, Uta Berger, Donald L. DeAngelis, J. Gary Polhill, Jarl Giske, and Steven J. Railsback, "The ODD Protocol: A Review and First Update"
Ecological Modelling, Vol. 221, 2010, 2760-2768.
- Abstract: The ‘ODD’ (Overview, Design concepts, and Details) protocol was published in 2006 to standardize the published descriptions of individual-based and agent-based models (ABMs). The primary objectives of ODD are to make model descriptions more understandable and complete, thereby making ABMs less
subject to criticism for being irreproducible. We have systematically evaluated existing uses of the ODD protocol and identified, as expected, parts of ODD needing improvement and clarification. Accordingly, we revise the definition of ODD to clarify aspects of the original version and thereby facilitate future standardization of ABM descriptions. We discuss frequently raised critiques of ODD but also two emerging,
and unanticipated, benefits: ODD improves the rigorous formulation of models and helps make the theoretical foundations of large models more visible. Although the protocol was designed for ABMs, it can help with documenting any large, complex model, alleviating some general objections against such models.
- Dominique Gross and Roger Strand, "Can Agent-Based Models Assist
Decisions on Large-Scale Practical Problems? A Philosophical Analysis",
Complexity, Vol. 5/No. 6, July/August 2000, 26-33. Published article available online.
- Nicholas R. Jennings,
"On Agent-Based Software Engineering"
Artificial Intelligence 117 (2000), 277-296, copyright © 2002
Elsevier Science B.V. All rights reserved.
- Abstract: "Agent-based computing represents an exciting new
synthesis both for Artificial Intelligence (AI) and, more generally, Computer
Science. It has the potential to significantly improve the theory and the
practice of modeling, designing, and implementing computer systems... The
standpoint of this analysis is the role of agent-based software in solving
complex, real-world problems. In particular, it will be argued that the
development of robust and scalable software systems requires autonomous
agents that can complete their objectives while situated in a dynamic and
uncertain environment, that can engage in rich, high-level social
interactions, and that can operate within flexible organizational structures."
- Jack P.C. Kleijnen, "Verification and Validation of Simulation Models"
European Journal of Operational Research 82, 1995, 145-162.
"This paper surveys verification and validation of models, especially simulation models in operations research. ... A bibliography with 61 references is included."
- George B. Kleindorfer and Ram Ganeshan, "The Philosophy of Science and Validation in Simulation"
in G.W. Evans, M. Mollaghasemi, E.C. Russell, and W.E. Biles (Eds.), Proceedings of the 1993 Winter Simulation Conference, 50-57.
This study examines various relevant positions regarding validation for computer simulation models.
- Stephen Lansing and James N. Kremer (1993), "Emergent Properties of
Balinese Water Temple Networks: Coadaptation on a Rugged Fitness
Landscape", American Anthropologist, Vol. 95, pp. 97-114.
Published article available online.
- Abstract: Over hundreds of years, Balinese farmers
have developed an intricate hierarchical network of "water temples" dedicated
to agricultural deities in parallel with physical transformations of their
island deliberately undertaken to make it more suitable for growing irrigated
rice. The water temple network plays an instrumental role in the
coordination of activities related to rice production. Representatives of
different water temple congregations meet regularly to decide cropping
patterns, planting times, and water usage, thus helping to synchronize
harvests and control pest populations. Lansing and Kremer develop an
ecological simulation model to illuminate the system-level effects of the
water temple network, both social and ecological. Their anthropological
study illustrates many important agent-based modeling concepts, including
emergent properties, fitness landscapes, co-adaptation, and the effects
of different institutional designs.
- Note: For an analysis and critique of this article, see Marco Janssen (2007),
"Coordination in Irrigation Systems: An Analysis of the Lansing-Kremer Model",
Agricultural Systems 93: 170-190.
- Averill M. Law (2007), Simulation Modeling and Analysis, Fourth Edition, McGraw-Hill Higher Education.
- Abstract (From the Publisher): "This thoroughly up-to-date guide addresses all aspects of a simulation study, including modeling, simulation languages, validation, input probability distributions, and analysis of simulation output data. Full-scale treatments of manufacturing systems simulation, simulation software, and animation are also included along with useful and instructive case studies."
- Roberto Leombruni, Matteo Richiardi, Nicole J. Saam, and Michele Sonnessa, "A Common Protocol for
Agent-Based Social Simulation"
Journal of Artificial Societies and Social Simulation 9(1), 2006. The published article is available online.
- Abstract: Traditional (i.e. analytical) modelling practices in the social sciences rely on a very well established, although implicit, methodological protocol, both with respect to the way models are presented and to the kinds of analysis that are performed. Unfortunately, computer-simulated models often lack such a reference to an accepted methodological standard. This is one of the main reasons for the scepticism among mainstream social scientists that results in low acceptance of papers with agent-based methodology in top journals. We identify some methodological pitfalls that, according to us, are common in papers employing agent-based simulations, and propose appropriate solutions. We discuss each issue with reference to a general characterization of dynamic micro models, which encompasses both analytical and simulation models. Along the way, we also clarify some confusing terminology. We then propose a three-stage process that could lead to the establishment of methodological standards in social and economic simulation.
- Franco Malerba, Richard R. Nelson, Luigi Orsenigo, and Sidney G. Winter, "`History-Friendly' Models of Industry Evolution: The Computer Industry"
Industrial and Corporate Change, Vol. 8, No. 1 (1999), 3-41.
- Abstract: "The model presented in this paper is the first of a new generation of evolutionary economic models: `history-friendly' models. History-friendly models are formal models that aim to capture, in stylized form, qualitative and `appreciative' theories about the mechanisms and factors affecting industry evolution, technological advance, and institutional change put forth by empirical scholars of industrial economics, technological change, business organization and strategy, and other social scientists. In this paper we have analyzed the long-term evolution of the computer industry... ."
- David Midgley, Robert E. Marks, and Dinesh Kunchamwar, "The Building and Assurance of Agent-Based Models: An Example and Challenge to the Field"
Journal of Business Research, Special Issue: Complexities in Markets, Vol. 60 (2007), 884-893.
- Abstract: "The assurance - the verification and validation - of agent-based models is difficult, because of the heterogeneity of the agents, and the possibility of the emergence of new patterns of macro behavior as a result of the interactions of agents at the micro level. We use an agent-based model of the complex interactions among consumers, retailers, and manufacturers to explore issues of model assurance. Our explorations indicate two challenges for the agent-based models field. The first challenge is to address the critical issue of software verification. The second challenge is to overcome the many methodological challenges that exist in empirically validating these models, some of which we will outline in our paper. We will also propose a method based on the Genetic Algorithm to address both these challenges, but our experiments, and the lack of good data for many kinds of agents, suggest a minimalist approach to building and assuring agent-based models in general."
- Tuncer I. Oren, "Concepts and Criteria to Assess Acceptability of Simulation Studies: A Frame of Reference"
Communications of the ACM, Special Issue: Simulation Modeling and Statistical Computing, Guest Edited by N. Adam, Vol. 24, No. 4, April 1981, 180-189.
This study provides a comprehensive and systematic overview of the concepts and criteria related to the assessment of the acceptability of simulation studies.
- Hassan Qudrat-Ullah, "Structural Validation of System Dynamics and Agent-Based Simulation Models"
in Y. Merkuryev, R. Zobel, and E. Kerckhoffs (eds.), Proceedings, 19th European Conference on Modelling and Simulation (ECMS), 2005.
- Abstract: "Simulation models are becoming increasingly popular in the analysis of important policy issues including global warming, population dynamics, energy systems, and urban planning. The usefulness of these models is predicated on their ability to link observable patterns of behavior of a system to micro-level structures. This paper argues that structural validity of a simulation model - right behavior for the right reasons - is a stringent measure to build confidence in a simulation model regardless of how well the model passes behavior validation tests. That leads to an outline of formal structural validity procedures available but less explored in the system dynamics `repertoire.' An illustration of a set of six tests for structural validity of both system dynamics and agent-based simulation models follows."
- Juliette Rouchier,
"Data Gathering to Build and Validate Small-Scale Social Models for Simulation.
Two Ways: Strict Control and Stake-Holders Involvement"
Working Paper, GREQAM, Marseille, October 13, 2005.
- Abstract: Currently in agent-based modeling (ABM) it has become the norm to assess results against actual data and to build hypotheses by offering some empirical facts to justify the model construction. In this paper the author discusses two distinct approaches that researchers are taking to validate ABM representations of small-scale interaction settings: quantitative statistical validation through controlled human-subject experiments; and an iterative participatory modeling approach called "companion modeling" that engages stakeholders and researchers in repeated rounds of field study and data gathering, role-playing games, agent-based model development and implementation, and computational experiments.
- Robert G. Sargent, "Some Approaches and Paradigms for Verifying and Validating Simulation Models"
in B.A. Peters, J.S. Smith, D.J. Medeiros, and M.W. Rohrer (Eds.), Proceedings of the 2001 Winter Simulation Conference, 106-114.
"In this paper we discuss verification and validation of
simulation models. The different approaches to deciding
model validity are described, two different paradigms that
relate verification and validation to the model development
process are presented, the use of graphical displays of data
for operational validity is discussed, and a recommended
procedure for model validation is given."
- Alexander Smajgl and Olivier Barreteau (Eds.), Empirical Agent-Based Modelling -- Challenges and Solutions: Volume 1, The Characterisation and Parameterisation of Empirical Agent-Based Models
(JASSS Book Review, HTML),
Springer-Verlag, Berlin, 2014.
This book poses the following challenge: How to develop large-scale complex models that are informed by empirical data both in the definition and the calibration phases? To address this challenge, the authors propose a Characterisation and Parameterisation (CAP) framework, i.e., a context-independent description of the key characterisation and parameterisation steps and decisions that modelers are confronted with while developing empirically-based agent-based models.
- D.E. Stevenson, "A Critical Look at Quality in Large-Scale Simulations"
Computing in Science and Engineering, 1(3), IEEE Computer Society, May-June 1999, 53-63.
"The role of management and science in simulation
development must be changed. Software
engineering is meant to produce software by a
manufacturing paradigm, but this paradigm simply
cannot deal with the scientific issues. This
article examines the successes and failures of
software engineering. I conclude that process does
not develop software, people and their tools do.
Second, software metrics are not meaningful
when the software’s purpose is to guarantee the
world’s safety in the nuclear era. Finally, the
quality of simulations must be based on the quality
of insights gained from the revealed science.
Aristotle talked about it 2,300 years ago; it is
time to listen. I propose a category-theory definition ... ."
- Claudia Werker and Thomas Brenner, "An Advanced Methodology for Heterodox Simulation Models Based on Critical Realism"
Working Paper 0401, Papers on Economics & Evolution, Max Planck Institute of Economics, Evolutionary Economics Group, Jena, Germany, 2004.
- Abstract: This paper develops an advanced methodology that makes the results of simulation models in heterodox economics more reliable and acceptable. This methodology copes with the specific characteristics of simulation models in heterodox economics, in particular with inherent uncertainty. The methodology is based on critical realism, because it treats inherent uncertainty by categorizing empirical events into underlying structural driving forces. Data is center-stage in the methodology, because it is used to infer assumptions and implications. Illustrative examples include the development of industries in different countries.
- Claudia Werker and Thomas Brenner, "A Practical Guide to Inference in Simulation Models"
Working Paper 0602, Papers on Economics & Evolution, Max Planck Institute of Economics, Evolutionary Economics Group, Jena, Germany, 2006.
- Abstract: This paper introduces a categorization of simulation models. It provides an explicit overview of the steps that lead to a simulation model. We highlight the advantages and disadvantages of various simulation approaches by examining how they advocate different ways of constructing simulation models. To this end, the paper discusses a number of relevant methodological issues, such as how realistic simulation models are obtained and which kinds of inference can be used in a simulation approach. Finally, the paper presents a practical guide on how simulation should and can be conducted.
Design of Computational Experiments
- Bart Husslage, Gijs Rennen, Edwin R. van Dam, and Dick den Hertog,
Space-Filling Latin Hypercube Designs for Computer Experiments
Working Paper No. 2006-18, ISSN 0924-7815, Department of Econometrics and Operations Research, Tilburg University, March 2006.
- Abstract: In the area of computer simulation, Latin hypercube designs play an important role.
In this paper the class of maximin Latin hypercube designs is considered. Up to now only several two-dimensional
designs and designs for small numbers of points are known for this class. Using periodic designs
and simulated annealing, we extend the known results and construct approximate maximin Latin hypercube
designs for up to ten dimensions and for up to 100 design points. All of these designs can be ...
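- Illustration: The maximin criterion itself is easy to demonstrate, although the construction below is not the authors' periodic-design/simulated-annealing method: it simply draws many random Latin hypercube samples and keeps the one whose minimum pairwise distance is largest.

```python
# Approximate maximin Latin hypercube design by random search.
import numpy as np

def random_lhs(n, d, rng):
    """Random Latin hypercube: one point per stratum in each dimension."""
    perms = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T  # (n, d)
    return (perms + rng.random((n, d))) / n

def min_pairwise_distance(x):
    diff = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return dist[np.triu_indices(len(x), k=1)].min()

def approx_maximin_lhs(n, d, tries=500, seed=0):
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(tries):
        cand = random_lhs(n, d, rng)
        score = min_pairwise_distance(cand)
        if score > best_score:   # keep the most space-filling candidate
            best, best_score = cand, score
    return best, best_score

design, score = approx_maximin_lhs(n=20, d=2)
print(round(score, 3))
```

Each column of the design visits every one of the n equal-width strata exactly once, which is the Latin hypercube property; the random search then favors candidates whose points are well spread out.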
- Jack P.C. Kleijnen, "An Overview of the Design and Analysis of Simulation Experiments for Sensitivity Analysis"
European Journal of Operational Research 164(2), September 2005, 826-834.
"This review surveys `classic' and `modern' designs for experiments with simulation models. Classic designs...assume `a few' factors (not more than 10 factors) with only `a few' values per factor (no more than five values). ... Modern designs...allow `many factors' (more than 100), each with either a few or `many' (more than 100) values."
- Jack P.C. Kleijnen, Design and Analysis of Simulation Experiments, 2nd Revised Edition, Springer, 2015, 322pp.
"This is a new edition of Kleijnen's advanced expository book on statistical methods for the Design and Analysis of Simulation Experiments (DASE). Altogether, this new edition has approximately 50 percent new material not in the original book. More specifically, the author has made significant changes to the book's organization, including placing the chapter on Screening Designs immediately after the chapters on Classic Designs, and reversing the order of the chapters on Simulation Optimization and Kriging Metamodels. The latter two chapters reflect how active the research has been in these areas."
- Daniel Kornhauser, Uri Wilensky, and William Rand, "Design Guidelines for Agent Based Model Visualization"
Journal of Artificial Societies and Social Simulation, Vol. 12, No. 2, Article 1, March 2009.
This paper provides agent-based modeling visualization design guidelines in order to improve visual design with ABM toolkits. The guidelines are illustrated using a simple redesign of a NetLogo agent-based modeling visualization.
- I. Salle, M. Yildizoglu, "Efficient Sampling and Metamodeling for Computational Economic Models"
Cahiers du GREThA, 2012-18.
"Extensive exploration of simulation models comes at a high computational cost, all the more when the model involves a lot of parameters. Economists usually rely on random explorations,
such as Monte Carlo simulations, and basic econometric modelling to approximate the properties of computational models. This paper aims at providing guidelines for the use of a much more parsimonious method, based on an efficient sampling of the parameters space - a design of experiments (DOE), associated with a well-suited metamodel - kriging. We analyze two simple economic models using this approach to illustrate the possibilities offered by it. Our appendix gives a sample of the R-project code that can be used to apply this method on other models."
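- Illustration: The paper's appendix provides R code; the Python sketch below conveys the same DOE-plus-metamodel idea under simplifying assumptions (a one-dimensional parameter, a hand-picked RBF length-scale, and a toy stand-in for the expensive simulation): evaluate the model on a small design, then use a kriging (Gaussian-process) posterior mean to predict it elsewhere.

```python
# Minimal numpy sketch of a kriging metamodel fitted to a small design.
import numpy as np

def expensive_model(theta):
    """Stand-in for a costly simulation: a smooth response surface."""
    return np.sin(3 * theta) + 0.5 * theta

def rbf_kernel(a, b, length=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * length ** 2))

# Design of experiments: a coarse one-dimensional sample of the parameter.
X = np.linspace(0.0, 2.0, 8)
y = expensive_model(X)

# Kriging predictor: posterior mean of a zero-mean GP conditioned on (X, y).
K = rbf_kernel(X, X) + 1e-8 * np.eye(len(X))     # small jitter for stability
alpha = np.linalg.solve(K, y)

def predict(x_new):
    return rbf_kernel(np.atleast_1d(x_new), X) @ alpha

# The metamodel interpolates the design points and approximates in between.
print(float(predict(1.0)[0]), float(expensive_model(1.0)))
```

In practice the kernel length-scale and any noise term would be estimated from the design data rather than fixed by hand, and the design would come from a space-filling scheme such as a Latin hypercube.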
- Thomas J. Santner, Brian J. Williams, and William I. Notz, The Design and Analysis of Computer Experiments,
Springer Series in Statistics, 2003, 283pp. ISBN: 978-0-387-95420-2.
- Abstract: This book describes methods for designing and analyzing experiments conducted using computer code in lieu of a physical experiment. It discusses how to select the values of the factors at which to run the code (the design of the computer experiment) in light of the research objectives of the experimenter. It also provides techniques for analyzing the resulting data so as to achieve these research goals. It illustrates these methods with code made available to the reader.
- Leigh Tesfatsion, Experimental Design: Basic Concepts and Terminology
These brief notes were prepared for an undergraduate class on agent-based computational economics with no previous exposure to running computational experiments.
Iterative Participatory Modeling
- Olivier Barreteau et al., "Our Companion Modelling Approach"
Journal of Artificial Societies and Social Simulation (JASSS), Vol. 6, No. 1, 31 March 2003 (electronic journal).
- Abstract: This article discusses an iterative participatory approach to the modeling of complex systems, referred to as "companion modeling." The companion modeling approach envisions multidisciplinary researchers and stakeholders engaging together in repeated looping through a four-stage cycle: field study and data analysis; role-playing games; agent-based model design and implementation; and intensive computational experiments. The new aspect of companion modeling relative to other participatory modeling approaches is the emphasis on modeling as an open-ended collaborative learning process. The modeling objective is to help stakeholders manage complex problems over time through a continuous learning process rather than to attempt the delivery of a definitive problem solution.
- O. Barreteau, G. Abrami, W. Daré, P. Garin, V. Souchère, and C. Werey, "Collaborative modelling as a boundary institution to handle institutional complexities in water management", in H. Karl et al. (eds.)
Restoring Lands – Coordinating Science, Politics and Action, Springer, 2012.
- O. Barreteau, P. W. G. Bots, and K. A. Daniell, "A framework for
clarifying `participation' in participatory research to prevent its
rejection for the wrong reasons", Ecology and Society 15(2):1, 2010. Available online.
- O. Barreteau, F. Bousquet, M. Etienne, V. Souchère, and P.
d'Aquino, "Companion modelling: a method of adaptive and participatory research", in M. Etienne (Ed.), Companion Modelling. A Participatory Approach to Support Sustainable Development, QUAE, Versailles, France, 2011.
- O. Barreteau, P.W.G. Bots, K.A. Daniell, M. Etienne, P. Perez, C. Barnaud, D. Bazile, N. Becu, J.-C. Castella, W. Daré, and G. Trebuil, "Participatory approaches and simulation of social complexity", in B. Edmonds and R. Meyer (Eds.), Simulating Social Complexity: A Handbook, Springer, 2013.
- O. Barreteau, C. Le Page, and P. Perez, "Contribution of simulation and gaming to natural resource management issues: an introduction", Simulation and Gaming 38 (2007), 185-194.
- Pierre Bommel et al., "A Further Step Towards Participatory Modeling: Fostering Stakeholder Involvement in Designing Models by Using Executable UML", Journal of Artificial Societies and Social Simulation
17 (1) 6, published online 31 January 2014.
- M. Etienne (ed.), Companion Modelling. A Participatory Approach to Support Sustainable Development, QUAE, Versailles, France, 2011.
- R.L. McCown, "Locating agricultural decision support systems in the troubled past and socio-technical complexity of `models for management'"
Agricultural Systems, Vol. 74, 2002, 11–25.
This paper considers how knowledge about decision support systems gained in non-agricultural domains might help agricultural models better serve farm management.
- Scott Moss, "Alternative Approaches to the Empirical Validation of Agent-Based Models"
Journal of Artificial Societies and Social Simulation, Vol. 11, No. 1, Article 5, 2008. The published article is available online.
This paper draws on the metaphor of a spectrum of models ranging from the most theory-driven to the most evidence-driven. The issue of concern is the practice and criteria that will be
appropriate to validation of different models. In order to address this concern, two modelling approaches are investigated in some detail - one from each end of our metaphorical
spectrum. Windrum et al. (2007) (http://jasss.soc.surrey.ac.uk/10/2/8.html) claimed strong
similarities between agent based social simulation and conventional social science - specifically econometric - approaches to empirical modelling and on that basis considered how econometric validation techniques might be used in empirical social simulations more broadly. An alternative, the approach of the French school of 'companion modelling' associated with Bousquet, Barreteau, Le Page and others, engages stakeholders in the modelling and validation process.
- Alexey Voinov and Francois Bousquet, "Modelling with Stakeholders"
Environmental Modelling & Software 25 (2010), 1268-1281.
- Alexey Voinov, "Participatory Modeling: What, Why, and How?"
ITC Faculty of Geo-Information Science and Earth Observation, University of Twente, March 2010.
Copyright © Leigh Tesfatsion. All Rights Reserved.