Modeling and Idealization


Who is a Modeler? British Journal for the Philosophy of Science, 58, 207-233.

 

Many standard philosophical accounts of scientific practice fail to distinguish between modeling and other types of theory construction. This failure is unfortunate because there are important contrasts among the goals, procedures, and representations employed by modelers and other kinds of theorists. We can see some of these differences intuitively when we reflect on the methods of theorists such as Vito Volterra and Linus Pauling on one hand, and Charles Darwin and Dimitri Mendeleev on the other. Much of Volterra's and Pauling's work involved modeling; much of Darwin's and Mendeleev's did not. In order to capture this distinction, I consider two examples of theory construction in detail: Volterra's treatment of post-WWI fishery dynamics and Mendeleev's construction of the periodic system. I argue that modeling can be distinguished from other forms of theorizing by the procedures modelers use to represent and to study real-world phenomena: indirect representation and analysis. This differentiation between modelers and non-modelers is one component of the larger project of understanding the practice of modeling, its distinctive features, and the strategies of abstraction and idealization it employs.

 

 

Three Kinds of Idealization, Journal of Philosophy, 104 (12), 639-659.

 

Philosophers of science increasingly recognize the importance of idealization: the intentional introduction of distortion into scientific theories. Yet this recognition has not yielded consensus about the nature of idealization. The literature of the past thirty years contains disparate characterizations and justifications, but little evidence of convergence towards a common position.

Despite this lack of convergence, discussion has clustered around three types of positions, or three kinds of idealization. While their proponents typically see these positions as competitors, I will argue that they actually represent three important strands in scientific practice. Philosophers disagree about the nature of idealization because there are three major reasons scientists intentionally distort their models and theories; all three kinds of idealization play important roles in scientific research traditions.

The existence of three kinds of idealization means that some classic, epistemic questions about idealization will not have unitary answers. We cannot expect a single answer to questions such as: What exactly constitutes idealization? Is idealization compatible with realism? Are idealization and abstraction distinct? Should theorists work to eliminate idealizations as science progresses? Are there rules governing the rational use of idealization, or should a theorist’s intuition alone guide the process? However, the three kinds of idealization share enough in common to allow us to approach the answers to these questions in a unified way. The key is to focus not just on the practice and products of idealization, but on the goals governing and guiding it. I call these goals the representational ideals of theorizing. Although they vary between the three kinds of idealization, attending to them will help us better understand the epistemic role of this practice.

 


Models for Modeling, under review

 

Contemporary literature in philosophy of science has begun to emphasize the practice of modeling, which differs in important respects from other forms of representation and analysis central to standard philosophical accounts. This literature has stressed the constructed nature of models, their autonomy, and the utility of their high degrees of idealization. What this new literature about modeling lacks, however, is a comprehensive account of the models that figure into the practice of modeling. This paper offers a new account of both concrete and mathematical models, with special emphasis on the intentions of theorists, which are necessary for evaluating the model-world relationship during the practice of modeling. Although mathematical models form the basis of most contemporary modeling, my discussion begins with more traditional, concrete models such as the San Francisco Bay model.

 

 

The Structure of Tradeoffs in Model Building, forthcoming in Synthese.

 

Despite their best efforts, scientists may be unable to construct models that simultaneously exemplify every theoretical virtue. One explanation for this is the existence of tradeoffs: relationships of attenuation that constrain the extent to which models can have such desirable qualities. In this paper, we characterize three types of tradeoffs theorists may confront. These characterizations are then used to examine the relationships between parameter precision and several types of generality. We show that several of these relationships exhibit tradeoffs and discuss what consequences those tradeoffs have for theoretical practice, especially in sciences that study complex phenomena.

 


The Division of Cognitive Labor and the Social Structure of Science


 

Epistemic Landscapes and the Division of Cognitive Labor, forthcoming in Philosophy of Science (with Ryan Muldoon)

 

Because of its complexity, contemporary scientific research is almost always tackled by groups of scientists, each of which works in a different part of a given research domain. We believe that understanding scientific progress thus requires understanding this division of cognitive labor. To this end, we present a novel agent-based model of scientific research in which scientists divide their labor to explore an unknown epistemic landscape. Scientists aim to climb uphill in this landscape, where elevation represents the significance of the results discovered by employing a research approach. We consider three different search strategies scientists can adopt for exploring the landscape. In the first, scientists work alone and do not let the discoveries of the community as a whole influence their actions. This is compared with two social research strategies, which we call the follower and maverick strategies. Followers are biased towards what others have already discovered, and we find that pure populations of these scientists do less well than scientists acting independently. However, pure populations of mavericks, who try to avoid research approaches that have already been taken, vastly outperform both of the other strategies. Finally, we show that in mixed populations, mavericks stimulate followers to greater levels of epistemic production, making polymorphic populations of mavericks and followers ideal in many research domains.
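To make the setup concrete, the sketch below implements a toy version of this kind of epistemic-landscape search. The grid size, the Gaussian landscape, the neighborhood rule, and the particular way followers and mavericks weigh previously tried approaches are assumptions introduced purely for illustration; the published model differs in its details.

```python
# A minimal, illustrative sketch of an epistemic-landscape simulation in the
# spirit of the abstract above. All parameters and rules here are assumed for
# illustration; this is not the authors' actual model.
import math
import random

SIZE = 50      # grid dimension of the epistemic landscape (assumed)
ROUNDS = 200   # number of search rounds (assumed)

def make_landscape(n_peaks=3):
    """Significance of an approach (x, y): height under a few Gaussian hills."""
    peaks = [(random.randrange(SIZE), random.randrange(SIZE), random.uniform(5, 15))
             for _ in range(n_peaks)]
    def significance(x, y):
        return max(math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * w ** 2))
                   for px, py, w in peaks)
    return significance

def neighbors(x, y):
    """Adjacent research approaches (the grid wraps at the edges)."""
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def step(pos, sig, visited, strategy):
    """One move by a single scientist, depending on her search strategy."""
    options = neighbors(*pos)
    if strategy == "follower":
        # Followers are biased towards approaches others have already taken.
        tried = [p for p in options if p in visited and sig(*p) >= sig(*pos)]
        if tried:
            return max(tried, key=lambda p: sig(*p))
    if strategy == "maverick":
        # Mavericks avoid approaches that have already been taken.
        fresh = [p for p in options if p not in visited]
        if fresh:
            return max(fresh, key=lambda p: sig(*p))
    # Default, and the asocial control strategy: plain hill climbing.
    best = max(options, key=lambda p: sig(*p))
    return best if sig(*best) > sig(*pos) else pos

def run(strategies):
    """Return total significance uncovered by a community of scientists."""
    sig = make_landscape()
    agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in strategies]
    visited = set(agents)
    for _ in range(ROUNDS):
        for i, strat in enumerate(strategies):
            agents[i] = step(agents[i], sig, visited, strat)
            visited.add(agents[i])
    return sum(sig(*p) for p in visited)

if __name__ == "__main__":
    random.seed(0)
    for label, mix in [("control", ["control"] * 10),
                       ("follower", ["follower"] * 10),
                       ("maverick", ["maverick"] * 10),
                       ("mixed", ["maverick"] * 5 + ["follower"] * 5)]:
        print(label, round(run(mix), 2))
```

Comparing the totals printed for pure and mixed populations gives a rough feel for how a community's mix of search strategies affects how much of an epistemic landscape it uncovers.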



Robustness and Idealization in Models of Cognitive Labor, under review (with Ryan Muldoon)


A critical analysis of Philip Kitcher’s and Michael Strevens’ models of cognitive labor.


 

Review of Science, Truth, and Democracy, by Philip Kitcher. Angewandte Chemie 2002, 114 (16), 3189-3190 (German) and Angewandte Chemie International Edition in English 2002, 41 (16), 3064-3066.

 

 

Philosophy of Biology and Chemistry

 

Philosophy of Chemistry, with Paul Needham and Robin Hendry, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.).


Chemistry is the study of the structure and transformation of matter. When Aristotle founded the field in the 4th century BCE, his conceptual grasp of the nature of matter was tailored to accommodate a relatively simple range of observable phenomena. In the 21st century, chemistry has become the largest scientific discipline, producing over half a million publications a year ranging from direct empirical investigations to substantial theoretical work. However, the specialized interest in the conceptual issues arising in chemistry, hereafter Philosophy of Chemistry, is a relatively recent addition to philosophy of science.


Philosophy of chemistry has two major parts. In the first, conceptual issues arising within chemistry are carefully articulated and analyzed. Such questions internal to chemistry include the nature of substance, atomism, the chemical bond, and synthesis. In the second, traditional topics in philosophy of science such as realism, reduction, explanation, confirmation, and modeling are taken up within the context of chemistry. This article details both of these parts.


Robustness Analysis, Philosophy of Science, 73, 730-742.

 

Modelers often rely on robustness analysis, the search for predictions common to several independent models. Robustness analysis has been characterized and championed by Richard Levins and William Wimsatt, who see it as central to modern theoretical practice. The practice has also been severely criticized by Steven Orzack and Elliott Sober, who claim that it is a non-empirical form of confirmation, effective only under unusual circumstances. This paper addresses Orzack and Sober's criticisms by giving a new account of robustness analysis and showing how the practice can identify robust theorems. Once the structure of robust theorems is clearly articulated, it can be shown that such theorems have a degree of confirmation, despite the lack of direct empirical evidence for their truth.

 

 

 

Forty Years of 'The Strategy': Levins on Model Building and Idealization. Biology and Philosophy, 21, 623-645.

 

This paper is an interpretation and defense of Richard Levins' "The Strategy of Model Building in Population Biology," which has been extremely influential among biologists since its publication forty years ago. In this article, Levins confronted some of the deepest philosophical issues surrounding modeling and theory construction. By way of interpretation, I discuss each of Levins' major philosophical themes: the problem of complexity, the brute-force approach, the existence and consequences of tradeoffs, and robustness analysis. I argue that Levins' article is concerned, at its core, with justifying the use of multiple, idealized models in population biology.

 

 

 

The Robust Volterra Principle. Philosophy of Science, forthcoming (with Ken Reisman).

 

Theorizing in ecology and evolution often proceeds via the construction of multiple idealized models. To determine whether a theoretical result actually depends on core features of the models and is not an artifact of simplifying assumptions, theorists have developed the technique of robustness analysis, the examination of multiple models looking for common predictions. A striking example of robustness analysis in ecology is the discovery of the Volterra Principle, which describes the effect of general biocides in predator-prey systems. This paper details the discovery of the Volterra Principle and the demonstration of its robustness. It considers the classical ecology literature on robustness and introduces two individual-based models of predation, which are used to further analyze the Volterra Principle. The paper also introduces a distinction between parameter robustness, structural robustness, and representational robustness, and demonstrates that the Volterra Principle exhibits all three kinds of robustness.
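For readers who want to see where the principle comes from, the following is a minimal sketch in standard textbook Lotka-Volterra notation; the symbols and the biocide parameter delta are assumptions introduced for illustration and do not reproduce the paper's own analysis.

```latex
% Standard Lotka-Volterra equations for prey x and predators y
% (notation assumed for illustration):
\begin{align*}
  \dot{x} &= x\,(r - \alpha y), \qquad \dot{y} = y\,(\beta x - d),
\end{align*}
% with nontrivial equilibrium x^* = d/\beta and y^* = r/\alpha.
% A general biocide that removes both species at per-capita rate \delta
% sends r \to r - \delta and d \to d + \delta, so the equilibrium becomes
\begin{align*}
  x^{*} = \frac{d + \delta}{\beta} \quad (\text{prey increase}), \qquad
  y^{*} = \frac{r - \delta}{\alpha} \quad (\text{predators decrease}).
\end{align*}
```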

 

 

 

Challenges to the Structural Conception of Chemical Bonding. Philosophy of Science, forthcoming.

 

While the covalent bond plays a central role in chemical predictions, interventions, and explanations, it is a difficult concept to define precisely. This paper investigates the structural conception of the covalent bond, which says that bonding is a directional, sub-molecular region of electron density located between individual atomic centers that is responsible for holding the atoms together. Several approaches to constructing molecular models are considered in order to determine which features of the structural conception of bonding, if any, are robust across these models. The paper concludes that key components of the structural conception are absent in all but the simplest quantum mechanical models of molecular structure, seriously challenging the conception's viability.

 

 


Water is not H2O. In Philosophy of Chemistry: Synthesis of a New Discipline, a volume of Boston Studies in the Philosophy of Science.

 

Qualitative Theory and Chemical Explanation. Philosophy of Science, 71, 1071-1081.

 

Roald Hoffmann and other theorists claim that we ought to use highly idealized chemical models (“qualitative models”) in order to increase our understanding of chemical phenomena, even though other models are available that make more accurate predictions. I assess this norm by examining one of the tradeoffs faced by model builders and model users: the tradeoff between precision and generality. After arguing that this tradeoff obtains in many cases, I discuss how the existence of this tradeoff can help us defend Hoffmann's norm for modeling.

 

 

Why Not a Philosophy of Chemistry? Review of Of Minds and Molecules, American Scientist, 89 (6), November 2001.

  

Chemistry and the Scientific Method. Review of Chemical Discovery and the Logicians’ Program by Jerome A. Berson. Chemical and Engineering News, 82 (12), 2004.

 


Public Understanding of Science



The Intelligent Design controversy: lessons from psychology and education. Trends in Cognitive Sciences, Vol. 10, No. 2 (February 2006), pp. 56-57 (with Tania Lombrozo and Andrew Shtulman)

 

The current debate over whether to teach Intelligent Design creationism in American public schools provides the rare opportunity to watch the interaction between scientific knowledge and intuitive beliefs play out in courts rather than cortex. While it's easy to believe the controversy stems only from ignorance about evolution, a closer look confirms what decades of research in cognitive and social psychology have already taught us: that the relationship between understanding a claim and believing a claim is far from simple. Research in education and psychology confirms that a majority of college students fail to understand evolutionary theory, but also finds no support for a relationship between understanding evolutionary theory and accepting it as true. We believe the intuitive appeal of Intelligent Design owes as much to misconceptions about science and morality as it does to misconceptions about evolution. To support this position we present a brief tour of misconceptions: evolutionary, scientific, and moral.

 

 

The Importance of Understanding the Nature of Science for Accepting Evolution. Evolution: Education and Outreach, Vol. 1, No. 3, pp. 290-298 (with Tania Lombrozo and Anna Thanukos).


Many students reject evolutionary theory, whether or not they adequately understand basic evolutionary concepts. We explore the hypothesis that accepting evolution is related to understanding the nature of science. In particular, students may be more likely to accept evolution if they understand that a scientific theory is provisional but reliable, that scientists employ diverse methods for testing scientific claims, and that relating data to theory can require inference and interpretation. In a study with university undergraduates, we find that accepting evolution is significantly correlated with understanding the nature of science, even when controlling for the effects of general interest in science and past science education. These results highlight the importance of understanding the nature of science for accepting evolution. We conclude with a discussion of key characteristics of science that challenge a simple portrayal of the scientific method and that we believe should be emphasized in classrooms.


 

History of Science

  

Richard Rufus’s Theory of Mixture. Chemical Explanations: Characteristics, Development, Autonomy, volume 988, Annals of the New York Academy of Sciences, with Rega Wood.

 

Interpreting Aristotle on Mixture: Problems of Elemental Composition from Philoponus to Cooper. Studies in History and Philosophy of Science, 35 (2004), 681-706, with Rega Wood.

Aristotle’s On Generation and Corruption raises a vital question: how is mixture, or what we would now call chemical combination, possible? It also offers an outline of a solution to the problem and a set of criteria that a successful solution must meet. Understanding Aristotle’s solution and developing a viable peripatetic theory of chemical combination has been a source of controversy over the last two millennia. We describe seven criteria a peripatetic theory of mixture must satisfy: uniformity, recoverability, potentiality, equilibrium, alteration, incompleteness, and the ability to distinguish mixture from generation, corruption, juxtaposition, augmentation, and alteration. After surveying the theories of Philoponus (d. 574), Avicenna (d. 1037), Averroes (d. 1198), and John M. Cooper (fl. circa 2000), we argue for the merits of Richard Rufus of Cornwall’s theory. Rufus (fl. 1231-1256) was a little-known scholastic philosopher who became a Franciscan theologian in 1238, after teaching Aristotelian natural philosophy as a secular master in Paris. Lecturing on Aristotle’s De generatione et corruptione, around the year 1235, he offered his students a solution to the problem of mixture that we believe satisfies Aristotle’s seven criteria.

 

 

 

Chemistry

 

Synthesis of β,β-Dimethylated Amino Acids Utilizing the 9-Phenylfluorenyl Protecting Group. Journal of Organic Chemistry, 1999, 64, 4362-4369, with N.H. Kawahata and M. Goodman.

Optically pure β,β-dimethylated amino acid building blocks with functionalized side chains have been prepared from D-aspartic acid. The dimethylation was accomplished by regioselective dialkylation of 9-phenylfluorenyl (PhFl)-protected aspartate diesters. The bulk of the PhFl protecting group also allowed for a variety of functional group manipulations to be carried out on the side chain without affecting the Cα ester of the aspartate. As a result, the derivatives of the following novel amino acids were synthesized in this study: β,β-dimethyl-D-aspartic acid, β,β-dimethyl-D-homoserine, 3,3-dimethyl-D-2,4-diaminobutyric acid, β,β-dimethyl-D-lysine, β,β-dimethyl-D-homoglutamate, β,β-dimethyl-D-ornithine, and 3,3-dimethylazetidine-2-carboxylic acid. The β,β-dimethylated amino acids were synthesized in high enantiomeric excess as determined by coupling the novel building blocks to chiral reagents.


 

Online Papers

 