Baron, J. (1993). Why teach thinking? - An essay. (Target article with commentary.) Applied Psychology: An International Review, 42, 191-237.*

Why teach thinking? - An essay

Jonathan Baron

Abstract

Recent efforts to teach thinking could be unproductive without a theory of what needs to be taught and why. Analysis of where thinking goes wrong suggests that emphasis is needed on `actively open-minded thinking,' including the effort to search for reasons why an initial conclusion might be wrong, and on reflection about rules of inference, such as heuristics used for making decisions and judgments. Such instruction has two functions. First, it helps students think on their own. Second, it helps them to understand the nature of expert knowledge, and, more generally, the nature of academic disciplines. The second function, largely neglected in discussions of thinking instruction, can serve as the basis for thinking instruction in the disciplines. Students should learn how knowledge is obtained through actively open-minded thinking. Such learning will also teach students to recognize false claims to systematic knowledge.
Introduction
The last decade has seen a rebirth of the idea that schools should teach students how to think. Several newsletters and journals have been launched, with such titles as Thinking and problem solving (Erlbaum) and Thinking (Institute for the Advancement of Philosophy for Children, Montclair State University). In addition, mainstream educational publications such as Educational Leadership have published a large number of articles about thinking. A major review of recent programs was published (Nickerson, Perkins, & Smith, 1985) and updated (Nickerson, 1989). Regular and special conferences are devoted to the subject, and at least one of these, the International Conference on Thinking, has led to several published volumes (e.g., Perkins, Lochhead, & Bishop, 1987).
In the U.S. and Canada, the idea of critical thinking, creative thinking, or reflective thinking has been incorporated into many statements of goals by state and provincial education authorities, and some effort has been made to implement these ideas on a large scale (Brown, 1991). In Venezuela, L. A. Machado (see Machado, 1980) convinced the Campins administration to institute several experiments on the teaching of thinking between 1980 and 1984, some of which led to positive results (Herrnstein, Nickerson, de Sánchez, & Swets, 1986).
But this is, of course, a rebirth, not a new idea. Ancient Greek students learned mathematics and philosophy with the idea that it would benefit them in whatever they did, even if they did not become mathematicians or philosophers. Scholastic education emphasized logic, and in 1662, the Port Royal Logic was published (Arnauld, 1964), a general guide to good thinking that was reprinted several times in several languages. In the 19th century, the idea of formal discipline dominated educational theory. The work of Thorndike and Woodworth (1901) was done in opposition to this tradition, but, a little later, John Dewey (e.g., 1933) resurrected the idea as a justification for thoughtful education in general rather than as a justification for teaching Latin and geometry to everyone. After World War II, the idea of `critical thinking' was emphasized as a way of teaching resistance to propaganda (Presseisen, 1986). Writers such as de Bono (1976) have been busy teaching thinking to anyone who would learn it, often corporations and institutions other than schools.
Of course, I have been speaking of current educational publications and conferences, not practice. Even after Dewey was no longer in fashion, many teachers and schools continued to implement his ideas, and many teachers have discovered successful ways of teaching thinking without the benefit of anyone else's educational theory.
My concern here is with the psychological rationale for such education. But pure psychology will not suffice here. We must also consider the broader educational and social context. I shall provide what I think is a somewhat novel rationale for thoughtful education. This rationale leads to many of the same recommendations as the most widely accepted rationales, but an understanding of the purposes of thoughtful education can lead to some differences.
If my differences with current opinion were to be summarized briefly, they would amount to an amendment to the current rationale in terms of the relation between thinking and democratic citizenship. Dewey felt that thoughtful education was necessary in a democracy because citizens needed to be able to think about the things that affected them and others. I agree with this. But things have changed a bit since Dewey wrote, and we can expect further changes in the same direction. The work of the world has become more complex, more difficult for the average person to understand. Citizens in a democracy must rely more and more on experts. This leads us to think about the nature of expertise itself, and the relationship between expertise and critical thinking. I shall suggest that good thinking forms the foundation of legitimate expertise, so that an understanding of thinking is necessary for citizens who must increasingly be guided by experts.
I shall begin with a summary of what I take to be the current rationale for the teaching of thinking. I shall then discuss the need for an understanding of expertise, provide some examples of the kind of instruction that is implied by my view, and close with a summary of the implications of my view for the nature of education in the disciplines.
The Current Rationale
In previous writing, I have attempted to summarize the rationale for the teaching of thinking (Baron, 1985a, 1988, 1990a,b), and I shall summarize that rationale here. It is in the spirit of Dewey, and it is intended to be compatible with other recent writing on the subject (Nickerson, 1989; Paul, 1984; Perkins, in press; Schrag, 1988), although my terms are somewhat different. Again, my current purpose is not to overturn this rationale but to go beyond it, so I still accept what I am about to say.
What is Thinking?
Thinking is a mental activity that is used to resolve doubt about what to do, what to believe, or what to desire or seek. Thinking about what to do is decision making. Thinking about what to believe is part of learning. (Some learning, perhaps most, does not involve thinking but is, rather, an automatic consequence of certain experiences.) Other kinds of thinking about what to believe are scientific thinking, hypothesis testing, and making inferences about correlations or contingencies. Thinking about what to desire is not studied much, but it is analogous to thinking about beliefs: if decisions are based on beliefs and desires, then we can think separately about each of the elements.
Thinking is only one type of action, and only one kind of determinant of overt behavior, among many others. Thus, the theory of thinking is only a part of the theory of action, and action errors are only sometimes caused by thinking errors (Reason, 1990; Zapf et al., 1992). Thinking is worthy of special attention, however, because, I shall suggest, certain principles of good thinking are common across domains. This property creates opportunities for the teaching of thinking.
Decision making is the final common path of thinking, in this pragmatic view, but thinking about what to desire is part of thinking about what to do. Creative tasks such as art and science involve decision making, but they also involve considerable thinking about what to desire or seek. Scientists who do basic research must think not only about how to test their hypotheses but also about what questions are worth hypothesizing about. Creative artists spend much of their time thinking about what they want to achieve in a given work or series of works (Perkins, 1981).
Thinking consists of search and inference. We search for possibilities, evidence, and goals. Possibilities are potential answers to the doubt that inspired the thinking: they are potential courses of action, potential beliefs, or potential desires. Evidence is whatever bears on the strength of the possibility. Goals are the standards by which we evaluate possibilities in the light of the evidence. This process of evaluation is inference. Figure 1 shows this structure: the evidence affects the strengths of the possibilities, in a way that is determined by the goals.
[Figure 1 omitted.]
For example, in the case of a decision, the possibilities are alternative options or courses of action, and the goals are the relevant desires or personal goals. The evidence consists of facts about possible consequences and their probability of occurrence and about the extent to which each possibility will achieve each relevant goal. In buying a car, the possibilities are the cars one might buy; some goals are price, reliability, appearance, and cost of operation; and the evidence comes from magazines, test drives, dealers, and friends.
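To make this structure concrete, here is a minimal sketch in Python of the car-buying example. The weights, ratings, and weighted-sum rule are my own illustrative assumptions; the framework itself says only that evidence affects the strengths of the possibilities in a way determined by the goals.

    # Illustrative only: possibilities, goals, and evidence for a car purchase.
    # A weighted sum is one simple way to realize the framework's inference step.

    goals = {"price": 0.4, "reliability": 0.3, "appearance": 0.1, "operating_cost": 0.2}

    # Evidence: how well each possibility (car) serves each goal, on a 0-1 scale,
    # as gathered from magazines, test drives, dealers, and friends.
    evidence = {
        "car_A": {"price": 0.9, "reliability": 0.5, "appearance": 0.6, "operating_cost": 0.7},
        "car_B": {"price": 0.4, "reliability": 0.9, "appearance": 0.8, "operating_cost": 0.6},
    }

    def strength(car):
        """Inference: evaluate a possibility in the light of the goals."""
        return sum(goals[g] * evidence[car][g] for g in goals)

    for car in evidence:
        print(car, round(strength(car), 2))   # car_A 0.71, car_B 0.63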
Good Thinking
Good thinking is likely to achieve the goals of the thinker, but more than this can be said. Certain ways of thinking are generally better at achieving the thinker's goals than other ways. In particular, several normative models of inference have been developed (Baron, 1988). These are standards for evaluating inferences. Logic is such a standard, but traditional logic is limited to inferences about beliefs held with certainty from other beliefs held with certainty. Probability theory allows us to deal with the more typical case of uncertainty. The various forms of utility theory allow us to evaluate decisions. As yet, no generally accepted normative model can be applied to the choice of goals or desires, but Baron (in press a) makes some suggestions about this.
We can regard search itself as an action, and we can therefore apply the normative model of utility theory to it. The main conclusion to be drawn from this application (Baron, 1985a) is that search has costs as well as benefits. Too much search, past the point at which expected costs exceed expected benefits, impairs the achievement of goals. We can think too much.
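The point can be illustrated with a toy calculation, under assumptions of my own (a fixed cost per unit of search and diminishing expected benefits), not a model from the literature:

    # Toy model: the benefit of n units of search grows with diminishing
    # returns, while cost grows linearly. Net value peaks where the marginal
    # benefit falls below the marginal cost; search beyond that point makes
    # the thinker worse off ("thinking too much").

    MARGINAL_COST = 0.10

    def expected_benefit(n):
        return 1 - 0.8 ** n          # diminishing returns to search

    def net_value(n):
        return expected_benefit(n) - MARGINAL_COST * n

    best = max(range(30), key=net_value)
    print(best, round(net_value(best), 3))   # 4 units of search is optimal here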
Of course, the determinants of successful thinking lie in the domain-specific details. And good thinking cannot help much when specific knowledge is lacking, although it can help both in the acquisition of that knowledge and in its effective application once it is acquired (Baron, 1985a). Here, however, I am concerned with general properties of thinking that cut across subject matters, although I shall also discuss briefly how even these general properties must be adapted to specific subjects.
Myside Bias
When we compare thinking to normative models, we find several systematic departures in both search and inference. The most general and pervasive departure is a bias toward possibilities that are initially strong. Following Perkins, I call this `myside bias,' although I do not mean to imply that it always favors the thinker's side in a dispute. In the extreme, depressives are subject to this bias when they interpret evidence as favoring the hypothesis that bad outcomes are their fault. Myside bias is not always present, but it accounts for many common failures of thinking, so it is worth trying to prevent. I shall give several examples.
One example is the selective exposure effect (Frey, 1986). People tend to select information that they expect to support their present views. For example, political liberals tend to read articles written by liberals, and vice versa. Then, having selected the evidence by its content, they ignore this fact when they revise their beliefs: they change their beliefs as if the evidence had been selected randomly. They do not discount the evidence for the fact that it was selected in a biased way. (If they did, they would probably not spend so much time searching for it, for its expected benefits would be small.) The same sort of selective search is involved when the evidence comes from our own memory. We tend to think of evidence that supports possibilities that are already strong or beliefs that we desire to be true. Again, wishful thinking (Kunda, 1990) requires forgetting the basis of our selection of evidence when we respond to the evidence.
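A small numerical sketch (my own illustration, with invented likelihoods, not a simulation from Frey or Kunda) shows how potent the combination of biased search and unbiased-style updating can be:

    # A reader reads only articles that support hypothesis H, then updates
    # belief as if each article were a random draw from all articles.

    P_SUPPORT_IF_TRUE = 0.7    # chance a randomly chosen article supports H, if H is true
    P_SUPPORT_IF_FALSE = 0.3   # the same chance, if H is false

    def naive_update(prior, supports):
        """Bayesian update that wrongly treats the article as randomly sampled."""
        lt = P_SUPPORT_IF_TRUE if supports else 1 - P_SUPPORT_IF_TRUE
        lf = P_SUPPORT_IF_FALSE if supports else 1 - P_SUPPORT_IF_FALSE
        return prior * lt / (prior * lt + (1 - prior) * lf)

    belief = 0.5
    for _ in range(10):                 # ten hand-picked supporting articles
        belief = naive_update(belief, supports=True)
    print(round(belief, 4))             # ~0.9998, whether or not H is true

    # The correct update would notice that, under this selection policy, a
    # supporting article is observed with probability 1 regardless of H, so
    # the likelihood ratio is 1 and the belief should stay at 0.5.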
Several experiments indicate that this sort of bias underlies both motivated and unmotivated errors. In one commonly used experimental paradigm, subjects are instructed to make some judgment under various conditions: without special instructions, with instructions to think of reasons why their initial judgment might be correct, with instructions to think of reasons why it might be incorrect, or with instructions to think of reasons on both sides. (Not all experiments use all four conditions.) The typical finding is that the latter two conditions improve performance over the first two. Koriat, Lichtenstein, and Fischhoff (1980) found that thinking of reasons on the `other side' reduced inappropriate extreme confidence in the answers to objective questions. Hoch (1985) found that it reduced optimistic biases in the estimation of likely job offers by business students. Arkes et al. (1988; see also Slovic & Fischhoff, 1977, for a similar paradigm) found that it reduced the hindsight effect, the tendency to say that one would have predicted an outcome, once one has learned the outcome. And Anderson (1982) found that it reduced the tendency to ignore total discrediting of the evidence on which one has based an earlier judgment.
These results suggest that one source of many of these errors is a bias toward initial or desired conclusions. Telling people to think of evidence for these conclusions has no effect, because that is what people do anyway. Telling them to think of counterevidence helps, whether or not we tell them to think of positive evidence too.
Perkins (in press) has provided additional evidence for the existence of such myside bias. When subjects were asked to list arguments relevant to some question of policy, such as whether their state should enact a law requiring return of glass bottles for recycling, they listed far more arguments on their own side than on the other side of the issue. When they were pushed to list additional arguments, however, they were able to produce many more arguments on the other side. So their initial bias was the result of lack of effort, not lack of knowledge. A course that taught people to be fair to both sides and thorough in their thinking substantially reduced this bias, although other kinds of courses (e.g., a course in debating) did not.
Myside bias is related to other psychological concepts. Janis describes similar errors in group decisions about policies in both government (1982) and business (1989), and Jervis (1976) provides other examples. A measure of `integrative complexity' (Suedfeld & Tetlock, 1977; Tetlock, 1983, 1984, 1985) is also related. This measure can be applied to all sorts of texts, such as political speeches. It measures `differentiation' and `integration,' and it is conceived as a series of stages, with the middle stages characterized by more differentiation than the lower stages and the higher stages characterized by more integration than the middle stages. The absence of differentiation is essentially the same as myside bias: it is the failure to acknowledge the other side. Notably, the results obtained with this measure (e.g., Tetlock, 1983, 1984, 1985) are found largely in the differentiation measure; the integration measure plays little role, because the higher stages are rarely found.
Myside bias seems to occur in inference as well as search. For example, Lord, Ross, and Lepper (1979) found that subjects given arguments on both sides of a controversial question (whether the death penalty should be used) responded more to evidence on their own side. After being presented with this mixed evidence, then, subjects who initially disagreed became polarized: they disagreed even more than before. An even more dramatic result was found by Batson (1975): a subgroup of high-school girls who believed in the divinity of Christ became more convinced of this belief when they were given evidence (in the form of an article about new scrolls found near the Dead Sea) that the virgin birth and the resurrection were hoaxes.
Overgeneralization
Myside bias is not the only general failing that leads to errors. Baron (1990b) has reviewed the evidence for another source, the overgeneralization (i.e., misuse) of rules of inference. Inferences are typically made by the use of heuristic rules. (Most warrants, in the sense of Toulmin [1958], are heuristics in that they require qualifiers.) For example, we decide on punishment on the basis of the magnitude of the harm done, we stick to the status quo (`a bird in the hand is worth two in the bush'), and we judge harmful acts to be worse than equally harmful omissions. These heuristic rules are discovered by individuals and passed on from one person to another because they work in certain situations, perhaps in most situations. Their use makes sense when their results are best at achieving the thinker's goals. For example, it is usually best to punish harmful acts more than we punish harmful omissions, because harmful acts are more often intentional and their outcomes are easier to foresee, so they are more easily deterred. But people use these heuristics without fully understanding their relationship to their purposes. As a result, people often apply them in cases in which they do not achieve their usual purposes as well as some alternative might do. Thus, people sometimes judge acts to be worse than omissions even when intention and foreseeability are held constant (Spranca, Minsk, & Baron, 1991).
Subjects were easily persuaded that this was an error when they were asked to reflect on the purpose of the information and when they realized that they would make the same choice no matter how the test turned out. In other examples that I have attributed to overgeneralization of heuristic rules, such as the bias toward omissions or the tendency to seek retribution even in the absence of deterrence, subjects are not so easily persuaded that the rule in question is being misused (Baron, 1992, in press b; Baron & Ritov, 1992). Some heuristics have acquired a certain commitment, through processes not yet fully understood. (In this regard, they may differ from merely mindless habits in the sense of Langer, 1989). Still, at some point in the course of learning these heuristics, people might benefit from asking themselves about purposes, that is (in terms of the theory of thinking outlined above), they might benefit from searching for goals.
The term `overgeneralization' is somewhat misleading. Overgeneralization of one rule goes hand in hand with undergeneralization of whatever rule should be used in its place. However, the term is still useful because it brings to mind other examples of inappropriate transfer of a rule, e.g., Luchins (1942). Overgeneralization errors were taken by Wertheimer (1959) to be a sign of misunderstanding, of `blind' learning or transfer. We may account for such misunderstanding in terms of ignorance of (or failure to recall) the arguments about why a rule of inference serves its purposes (Perkins, 1986).
Wertheimer showed that such overgeneralization can apply to rules of inference learned in school as well as `naive' rules. (For a beautiful example of overgeneralization of the law of large numbers by sophisticated students of statistics, see Ganzach & Krantz [1991, pp. 189-190].) For example, he found that students who had learned the formula for the area of a parallelogram (base times height) would apply the same formula to other figures that could not be made into rectangles in the way that a parallelogram can be (by cutting off a triangle on one side and moving it to the other). The area rule worked for the parallelogram because it could be shown in this way to serve the purpose or goal of finding area. The derivation of the rule involves the subgoals of making the parallelogram into a rectangle and conserving area. When the rule was applied elsewhere, it did not serve the purpose of finding area, and the subgoal of making the figure into a rectangle could not be achieved. Analogously, when punishment is given for causing harm even in the absence of deterrence, the purpose that could justify the punishment is absent. Thus, failing to ask whether a rule serves its purpose can lead to mistaken application of the rule.
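The parallelogram example can be put in a few lines (my own illustration): the learned rule and the rule that should replace it agree on the case the rule was derived for, and diverge elsewhere.

    # base * height is justified for parallelograms by cutting a triangle
    # from one side and moving it to the other to form a rectangle. Applied
    # blindly to a trapezoid, where that move fails, it gives a wrong answer.

    def learned_rule(base, height):
        return base * height                    # correct for parallelograms

    def trapezoid_area(top, bottom, height):
        return (top + bottom) / 2 * height      # the rule that belongs here

    print(learned_rule(10, 4))                  # 40: right for a parallelogram of base 10
    print(trapezoid_area(6, 10, 4))             # 32: the trapezoid's true area;
                                                # the learned rule would still say 40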
In sum, two general errors in thinking are myside bias and what we might call purposelessness, in particular, the use of inferential heuristics without understanding their purposes in terms of goal achievement. The first error may be countered by instruction in what I have called actively open-minded thinking (which seems to be identical to what Nickerson, 1989, calls `actively fair-minded' thinking). The second error might be corrected by reflection on thinking itself, by the study of heuristics and their purposes in an actively open-minded way (Baron, 1990a,b). If schools can improve thinking in these ways, then people will think better and achieve their individual and collective goals better in their work, in their personal lives, and in their public lives as citizens.
What I have offered here is a partial justification of the teaching of thinking in terms of psychological studies that compare thinking to normative standards. In essence, we cannot teach thinking well without knowing what is wrong with it, what needs to be corrected through education. We teach Latin or calculus because students do not already know how to speak Latin or find integrals. But, by any reasonable description of thinking, students already know how to think, and the problem is that they do not do it as effectively as they might.
Beliefs about Thinking
How should thinking be taught if these are the problems? Clearly, this is a topic for research, for we are capable of measuring the types of errors made and evaluating their response to instruction (as done by, for example, Nisbett, Fong, Lehman, & Cheng, 1987; Larrick, Morgan, & Nisbett, 1990; as well as by many others in work reviewed by Nickerson et al., 1985, and by Nickerson, 1989).
Let me make one more suggestion here. I have argued (Baron, 1989, 1991) that one of the determinants of how people think is how they believe they ought to think. For example, I have found that people who are prone to myside bias in thinking about abortion tend to evaluate one-sided thinking as better than two-sided thinking, even when the one-sided thinking favors a position opposite to their own (Baron, 1989). Similarly, Schommer (1990) found that those who believed that reflection was unnecessary were relatively poor at comprehending difficult passages. These sorts of results suggest that what is most needed is to present people with convincing arguments about why certain kinds of thinking are more likely than others to avoid errors and to achieve goals. These arguments should result in an understanding of the theory of thinking that underlies the instruction (such as the theory of actively open-minded thinking that I have outlined here). Students can even be evaluated for such an understanding, regardless of whether they accept the theory.
It might be said that people have the goal of thinking in a certain way. But I would suggest that this is in fact a subgoal, if it is a goal at all. People desire to think in one way or another because they think it is good for some other purpose. They should be encouraged to reflect upon their goals and the real goals behind them. If people still have the goal of using certain heuristics or not considering the other side or supporting their initial beliefs even after such reflection, then educators have done what they can.
Abuses
Although the view I have presented may represent a consensus among some writers, the battle to improve thinking through education has not been won. In practice, many teachers who try to improve thinking have little understanding of the theory I have just sketched or of any competing theory. Instead, what has developed, in the U.S. at least, is a kind of subculture of thinking, maintained through in-service workshops and conference presentations by self-appointed experts on thinking who have, by and large, little contact with the scholarly literature on thinking in psychology, philosophy, or educational theory itself (e.g., Dewey, 1933; Schrag, 1988). There is much of value in what these people have discovered on their own and gleaned from third-hand accounts of scholarship. But I fear that much of it is off the point and will not succeed.
Examples of what this culture has to offer are found in Brown (1991). These new advocates of thinking tend to hold the following views: Students will learn to think if they are challenged to think. Opportunities for thinking are everywhere. Discussions can promote thinking. In class discussions, students should respond to each other rather than just to the teacher. School should build on what children bring to it, on their interests, as exemplified in the whole-language approaches to literacy instruction. All subjects must be taught thoughtfully; thinking cannot be taught as a separate subject. Evaluation too must emphasize thinking, whether this is done through projects or through essay examinations that require it. Above all, learning must be `active.'
What is missing from this new conventional wisdom is a common understanding of what the problem is, why it needs to be corrected, and how these various prescriptions will do the job. The impetus for teaching thinking in the U.S. comes from many desires: beating the Japanese in commerce; having demagogue-proof citizens; having more creativity in the popular arts; etc. But advocates of thinking instruction have no standard account of how these prescriptions will meet even these concrete goals.
Take the value of discussion, for example. Students should, of course, know how to conduct themselves in a group discussion in which different points of view are expressed. Perhaps some students will learn that others have different points of view, or that it is possible to disagree politely. But is this the big problem with the students that schools are now turning out? When they participate in political discussions or solve problems together in the workplace, are they inhibited or intolerant?
Perhaps, but the evidence for this is lacking, and in any case these reasons are not the driving force behind the support of discussion in the classroom. What is? In what ways are students deficient when they leave school, such that more discussion will remove the deficiency? How is discussion supposed to accomplish this end? In saying this, I do not assume that the deficiency can be measured with a multiple-choice test, but I do think that educational methods must be justified by arguments about their ultimate consequences for what students do and how they think in the future, even if we cannot easily measure these consequences.
Arguably, discussion in a class of several students (as opposed to a tutorial, or an academic colloquium) is filled with talk by people who don't have much to say that others can learn from. Many good students find discussions boring and time-wasting for this reason. Many of the discussions that Brown (1991) records have these qualities. They seem pointless. It can be argued that the students have not yet learned how to carry on a good discussion, but good discussions are rare, even in graduate school. Can we rely on them as a major tool of reform?
Similarly, student projects surely create interest in schoolwork, but they can degenerate into (relatively) time-wasting activities such as coloring in the cover in detail or looking through magazines for pictures to cut out. Projects are often used because they seem more `real.' The trouble is that most of them are not real. They have as little resemblance to the worlds of work, personal relationships, or politics as do most of the other activities that children do in school. And even such resemblance would not ensure real relevance, which often comes only at the expense of abstraction. Nor do projects necessarily involve much thought. The support for projects may come from a desire to give every child a chance to do something successfully. But, if such success is important, it can be arranged in more intellectual areas.
Much the same could be said for the whole-language approach. When this approach is used as an excuse to neglect phonics instruction, it could even work to the long-run detriment of children's reading (see Adams, 1990). It may feel good to the teachers, who see children having fun and doing things that are officially declared `authentic' because they are close to home. But it may also miss opportunities to expand children's worlds, opportunities that will come most easily after children have mastered the skill of decoding print. And does the whole-language approach involve more thinking than figuring out how to pronounce and understand a new word in context?
Most seriously, why should we assume that simply doing more thinking in school helps children think better out of school? Is it the case that children simply do not think, so that they do not learn how to think unless they think in school? Does thinking improve with practice? The theory I have sketched suggests that students get lots of practice thinking all the time and that, therefore, additional practice alone will not help unless it is coupled with explicit discussion of the kind of thinking that should be done, and why. Without such redirection, students will simply practice their mistakes.
Another type of abuse is, fortunately, becoming less common. This is the idea that thinking is a skill that can be improved through practice at its component subskills. Students who are the victims of this approach fill out worksheets in which they supposedly practice the basic elements of thinking, such as finding similarities and differences or deciding whether an object belongs in a category. The source of the analysis of thinking that generates these exercises is difficult to find. It comes most directly from educational writers such as Bloom (1956), but the ultimate source is probably Aristotle as worked over by the scholastic philosophers. Those who apply this approach seem unaware that psychology and philosophy have had some other ideas in the last 500 years. And even if the analysis were correct, the evidence is firmly against the idea that anything general can be learned from practice at component subskills (Baron, 1985b).
The Growth of Knowledge
In the remainder of this essay I shall try to provide a different kind of rationale for the teaching of thinking than I (or to my knowledge others) have provided so far. I hope that this will provide the kind of clarity of connection between means and ends that is lacking in the conventional wisdom and to some extent in my own earlier proposals.
In the Middle Ages, one individual might have been able to learn everything of the world of scholarship that was worth learning. This is not to say that anyone knew what that was. Then, as now, the scholarly world was filled with false starts, fads, and nonsense that is difficult to recognize without benefit of hindsight. But if someone could sift what was valuable from the rest, a student might have learned it all. Indeed, the ideal of the `Renaissance man' was someone who did just that.
Today, the world of scholarship has grown like the human population. No individual can hope to master more than a small fraction of the useful scholarship that has accumulated over the years and across fields. In the future, this trend will continue.
Most of education is about scholarship, that is, about the work of writers who have tried to add to knowledge self-consciously, however digested and packaged their work might be, and however long ago they wrote. The mathematics curriculum, for example, is an accumulation of at least three thousand years of mathematical insights (Hogben, 1951). History and social studies represent the work of historians and social scientists. Even the language curriculum, beyond the first few years of teaching basic skills of native and foreign languages, is based heavily on scholars who wrote about literature itself.
This much is recognized by the widespread consensus that students must learn how to find things out, how to use a library, how to use computer databases, and so on. The assumption here is that students will be able to get the information that they need to make decisions on their own. One problem, though, is that using the library takes time. In some cases, a graduate degree in economics or chemistry is required to understand some public issue. A person who `knows how to use the library' could in principle acquire the knowledge-equivalent of such a degree, but that is beside the point. Ultimately, most people have to rely on experts. As knowledge increases, people will have to rely on experts more and more.
The conventional approach to thinking instruction tries to deal with this problem by teaching people to evaluate judgments critically. Often this amounts to a kind of cynicism that looks only at whether the expert in question stands to gain from whatever she is saying. This is relevant, of course, but so is much else. What students - and the adults they become - often miss is a positive understanding of the basis of expert opinion. As a result, even experts of different kinds have trouble understanding each other (Roberts, 1992).
The Basis of Expertise
The modern cognitive psychology of expertise does not help much here. The literature on expertise is full of comparisons of experts and novices, and much has been learned. Experts have richer representations of problem domains (Voss, Tyler, & Yengo, 1983), and they carry out certain operations more automatically and more quickly (Bryan & Harter, 1899). They are able to classify textbook problems according to the type of solution required (Chi, Feltovich, & Glaser, 1981). When solving problems, they work forward instead of backward (Sweller, Mawer, & Ward, 1983).
Another type of research attempts to distinguish successful and less successful experts in terms of cognitive processes (e.g., Ceci & Liker, 1986; Charness, 1981) or personality traits (e.g., Klemp & McClelland, 1986). In some cases, the differences found are most easily ascribed either to specific knowledge or to biologically determined capacities (such as mental speed - see Baron, 1985a, ch. 5). In other cases, the results provide evidence for the role of actively open-minded thinking. Klemp and McClelland, for example, derived empirically a taxonomy for distinguishing successful and less successful managers. A number of the traits they found seem to represent thorough and open-minded search for possibilities, evidence and goals, for example: `makes strategies, plans steps to reach a goal'; `seeks information from multiple sources to clarify a situation'; `sees implications, consequences, alternatives, or if-then relationships'; and `identifies the most important issues in a complex situation' (Klemp & McClelland, 1986, Table 3, p. 41).
Although these results are useful, they do not tell us what makes knowledge qualify as expert knowledge, or, in other words, what gives experts their legitimate authority. They do not distinguish true expertise and false expertise. Undoubtedly a study of expert and novice astrologers would yield the same sorts of results as studies of expert and novice physicists.
To understand the justification of expertise in the relevant way, we must turn to philosophy. Karl Popper (e.g., 1962) has had the most to say about the difference between true and false theoretical understanding, although he is concerned mostly with science. He holds that true scientific theories are falsifiable. The most useful theories make the boldest predictions, those that are most likely to be falsified but which are then not falsified. More generally, science works because it is, as an institution, self-critical. The scientific theories that experts learn acquire their legitimacy from the fact that they have withstood attempts to prove them wrong.
Popper himself has been criticized in many ways. Lakatos (1978), for example, argued that scientific theories are essentially never falsifiable in the way that Popper suggests, since they can always be modified to deal with discrepant results by changing some nonessential assumption and thereby protecting their core. This argument in itself, however, does not dispute the central insight that science is successful because it is self-critical. (Lakatos himself does not emphasize this aspect, though.) Other views of the nature of scientific inquiry can explain how science advances even in the absence of decisive falsification, e.g., by acquiring probabilistic knowledge (Horwich, 1982; Baron, 1985a, ch. 4).
Another problem with Popper is that his arguments are limited to science. He does not explain how his own work is an advance over previous accounts or how someone else might improve on his account. Putnam (1981) argues that other disciplines work in much the same way as science, although the particular form of self-criticism might be different. Thus, philosophy can and does advance.
Notice that Putnam's claim is essentially that disciplines advance through what I have called actively open-minded thinking. Just as in thinking done by an individual, the way to avoid error within a discipline is to consider criticisms of current views and to allow those views to change in response to criticism. Reflection on methods of inference is helpful as well, just as it is for individuals in considering their own heuristic methods. So an understanding of good thinking, imparted through the schools, has a second function. In addition to teaching students how to think themselves, about their own concerns, it teaches them to understand the nature of the expert knowledge on which they must increasingly rely.
Some disciplines, such as astrology, do not advance through good thinking and reflection on their methods. Such fields may change over time, but the changes do not result from self-criticism and reflection. The problem with such disciplines is not in the abstractness of their theories or in the complexity of their applications but, rather, in the role of self-criticism in their development (Horton, 1967; Popper, 1962). Self-criticism, of course, requires rules of inference by which criticisms can be weighed.
Understanding Expertise
Although all legitimate disciplinary knowledge must be subject to a process of actively open-minded criticism, the understanding of methods of thinking and inference must be somewhat specific to the disciplines. Students need to learn what counts as evidence for a mathematician, a historian, an environmental scientist, a medical researcher, etc., and they must learn some of the methods of inference specific to each of these fields. For example, in applied fields such as medical research and economics, and in retrospective fields such as history and archaeology, the level of certainty required for conclusions to be taken as warranted is lower than in experimental physics or cognitive psychology. In computer science and education, arguments often concern practicability rather than truth. In psychology and philosophy, but not in most branches of chemistry, much of the argumentation concerns the proper statement of questions, e.g., `How can we state Skinner's law of effect so that it is testable and not tautological?' In linguistics and philosophy, but not in physics, agreement with intuition is sometimes an important argument. Some fields rely heavily on statistical inference, and others do not.
In each field, the structure of thinking involves goals (questions), possibilities (hypotheses, conjectures), evidence (arguments), and inference from the evidence about the possibilities in the light of the goals (Baron, 1988). But the fields differ in their goals, the types of possibilities that are considered, the kinds of evidence that are brought to bear, and the forms of inference that are used. Thus, what counts as a good argument may vary from field to field, in part because of different goals. So the standards of good thinking are the same in that a search for alternative possibilities and counterevidence is always required, but the standards by which inferences are made from what is found - and hence what is searched for - may vary considerably. We may think of actively open-minded thinking as a general schema that requires filling in for a given case.
Although students cannot acquire all the knowledge of all disciplines, they can be expected to understand the rules of inference of the major disciplines. In this way they will know where to look for what they need to know, and they will be able to understand the strengths of expert knowledge as well as the weaknesses of fallible human experts.
Some Instructional Illustrations
Let me illustrate the sort of education this argument implies with a few examples. The first comes from a high-school lesson in American history by Kevin O'Reilly (Swartz, 1987). The students were asked to read a passage from their textbook describing the 1775 battle of Lexington and Concord, the beginning of the Revolutionary War against England. The passage asserted that the battle began when `[t]he English fired a volley of shots that killed eight patriots.' After calling attention to some of the loaded language in the passage (patriots, etc.), O'Reilly got the students to focus on the question of who, in fact, fired the first shot by presenting them with a passage from Churchill (1956-1958), which said, `The colonial committees were very anxious not to fire the first shot, and there were strict orders not to provoke open conflict with the British regulars. But in the confusion someone fired.'
The class was then asked how they might resolve the discrepancy. The class quickly arrived at the idea of using eyewitness testimony. O'Reilly had anticipated this by photocopying all known eyewitness accounts for the class, complete with background information about the origin of each account. The accounts indeed conflict, and it turns out to be impossible to know with certainty what happened, although probabilities can be assigned. In this case, the political bias of each witness is a relevant consideration, but so are other factors such as the extent to which the total account agrees with other accounts and the amount of time that elapsed until the account was given. Although a superficial handling of this lesson could teach that `everyone is biased and there is no real truth,' a more adept handling could give students some insight into the nature of historical inquiry itself. The students are doing a bit of the work that historians do - without the dust from poking around in old libraries, of course - and they will thereby come to understand how historical knowledge grows out of actively open-minded thinking about evidence. The teacher must at some point make sure that students appreciate that this is what they are doing, if necessary by saying so explicitly.
Many other teachers do this sort of thing. Constance Kamii (as described in a conference presentation some years ago) has taught elementary arithmetic by letting students invent their own algorithms through class discussion. Sometimes students invent unconventional but successful algorithms, such as using negative numbers in subtraction: 23 - 16 would thus yield -3 for the ones column (6 - 3, with the sign reversed) and 1 for the tens column, so the result would be 10 - 3, or 7. Although this method works, it was eventually abandoned in favor of the usual `regrouping' method because the latter is more efficient, requiring fewer symbols to be written down. The students thus learned that the methods of mathematics are not arbitrary but are, rather, designed to serve purposes (Perkins, 1986). Some of these purposes are direct requirements of the task, such as making sure that numbers are conserved or that results are unique, but others are matters of computational efficiency. Although almost everyone is an `expert' at this sort of arithmetic, students even at this level can begin to understand where knowledge comes from.
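A short sketch (my reconstruction of the idea, not Kamii's or the students' notation) makes the logic of the invented algorithm explicit:

    # Column-wise subtraction that tolerates negative column digits:
    # subtract each column independently, then combine by place value.
    # Assumes a >= b, as in the classroom example.

    def subtract_with_negatives(a, b):
        da = [int(d) for d in str(a)]
        db = [int(d) for d in str(b).rjust(len(str(a)), "0")]
        columns = [x - y for x, y in zip(da, db)]   # 23 - 16 -> [1, -3]
        result = 0
        for col in columns:                          # combine: 10 + (-3) = 7
            result = result * 10 + col
        return result

    print(subtract_with_negatives(23, 16))   # 7, with no borrowing step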
At a higher level, students can learn where more advanced mathematics comes from through the same kind of inquiry. Lampert (1990) describes her work on teaching what amounts to actively open-minded thinking about mathematics to a fifth-grade class. The instruction involved the conscious creation of a `participation structure,' in which students learned, mostly implicitly, a set of social rules for stating conjectures, alternatives, arguments, and revisions, e.g., `I want to question so-and-so's hypothesis.'
In one series of classes, for example, Lampert asked the students to tabulate the squares of the integers from 1 to 100 (with calculators). The students then spent 45 minutes trying to find patterns in the table they had made. The students became interested in the last digits. These digits alternated odd and even. Squares of multiples of 10 always ended in 0, and squares of numbers ending in 5 always ended in 5. In between the zeros and fives, the squares always ended in 1, 4, 9, 6 or 6, 9, 4, 1. Students then asked themselves whether these patterns would hold for all integers, and they managed to find arguments for these hypotheses. Later, the students were encouraged to examine higher powers. Lampert suggested that the students discuss what the last digits of 7^4 and 7^5 would be, without calculating. Although the students initially disagreed about the answers to these questions, they were able to come to agreement about them on the basis of general principles.
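The patterns the class found can be checked by brute force in a few lines; the code is mine, not part of Lampert's lesson.

    # Last digits of the squares of 1..100: the digits run 1, 4, 9, 6 up to
    # each 5 and 6, 9, 4, 1 (the reverse) down to each 0.
    print([n * n % 10 for n in range(1, 21)])

    # Last digits of powers of 7 cycle with period 4 (7, 9, 3, 1), so 7^4
    # ends in 1 and 7^5 ends in 7 -- derivable from the general principle
    # that the last digit of a product depends only on the last digits of
    # its factors.
    print([7 ** k % 10 for k in range(1, 6)])   # [7, 9, 3, 1, 7]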
Although Lampert does not suggest it, open-ended questions such as `tell me about the last digits of the squares' could be used as examination questions, even if the students had not previously encountered the particular problem. (A well-taught student at a high level should spontaneously think of generalizing to higher powers.) These kinds of explorations form the basis of mathematical knowledge. In mathematics, the self-criticism comes when mathematicians ask `how do we know?' Students can come to understand this through activities like these. They do not need to reinvent all of mathematics. They need to reinvent just enough of it so that they can understand the nature of mathematical invention and discovery.
In the sciences, students already spend time doing projects and laboratory exercises. Students and teachers alike are typically uncertain about the purpose of these exercises, although students usually find them to be more fun than doing worksheets or reading textbooks. Some of these exercises could be used to repeat the course of scientific inquiry as a self-critical enterprise.
For example, Newton's experiment showing that white light can make a spectrum if passed through a prism is often cited as showing that white light is a mixture. But Newton was not the first to show this, and this was not his problem. Rather, he was concerned with an alternative explanation of the basic result. Instead of separating the white light into its components, the prism could modify the light, thus creating the colors. To cast doubt on this account, Newton showed that the white light could be reconstituted with another prism. (This still does not tie down the theory, but it renders certain alternatives less plausible.) The prism experiments could therefore be used to show how experiments arise out of a process of criticism: finding alternative explanations and then thinking of experiments to test these alternatives.
A combined exercise in high-school mathematics, European history, and physics could lead students through the discovery of Newton's law of gravity and its application to planetary motion. Such a course could begin with data about the apparent retrograde motion of Mars. Ptolemy's theory of epicycles could be worked out in detail for this one case. Then Copernicus's alternative could be introduced. Students would come to understand that this was mathematically equivalent to Ptolemy's theory and had only the advantage of simplicity (Margolis, 1987). Hence, Copernicus was unsure that he was correct, and those who resisted his theory did not necessarily do so out of superstition and obedience alone. Galileo's observation of the phases of Venus helped to make the Copernican account more plausible, but even this was not conclusive (as Tycho Brahe's alternative system showed). It fell to Kepler and Newton to provide both a firmer mathematical foundation for the Copernican theory and a physical account in terms of gravitation. By this account, of course, the earth did not really `revolve around the sun'; the earth and sun both revolved around a common center of gravity. By the time that students understood this, of course, they would be using calculus, as it was first used, thereby understanding why its invention was needed. They would also understand how scientific theories grow from criticism of earlier theories, arguments based on plausibility, refutation of those arguments, and so on.
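The final point about the common center of gravity is easy to check numerically; this back-of-envelope calculation is standard two-body physics, not part of the proposed curriculum.

    # Earth-sun barycenter, with approximate masses and mean distance.
    M_SUN, M_EARTH = 1.989e30, 5.972e24    # kg
    D = 1.496e11                           # meters

    r_sun = D * M_EARTH / (M_EARTH + M_SUN)  # sun's distance from the barycenter
    print(round(r_sun / 1000), "km")         # ~449 km, deep inside the sun itself,
                                             # which is why "the earth goes around
                                             # the sun" is such a good approximation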
The teaching of social studies is particularly problematical because it derives ultimately from the social sciences, which are relatively new and still controversial. The tendency to `water down' the social studies curriculum is therefore great. In the 1960s, a group of scholars in the U.S., funded by the National Science Foundation, developed `Man: A course of study' (MACOS), which, among other things, drew heavily on the social sciences as they were taught at the university level. It was too controversial to be widely implemented. Among other problems, it presented a view of culture as variable, with many possible options. Conservatives in the U.S. did not want their own culture presented as merely one option among many.
Understanding of the nature of knowledge in the social sciences (except history, perhaps) is thus largely limited to those who have been to college. (At least this is true in the U.S.) This is particularly unfortunate because the social sciences, especially economics, form much of the expertise relevant to government. To most citizens, the economists consulted routinely by government leaders might as well be astrologers and fortunetellers who use the stars or tarot cards rather than computers to predict the future. Students need to go through miniature exercises in economics and the other social sciences in order to understand the origin of this kind of expertise.
What the Educated Person Should Know
The argument I have made provides another justification for the understanding of actively open-minded thinking and reflection on methods of inference. Indeed, it reinforces the argument made on the basis of correlations between beliefs about thinking and the conduct of thinking. Those results suggested that the teaching of thinking involved not just teaching a particular set of skills or habits or methods but, rather, or in addition, imparting to students an understanding of the value of these methods. My argument suggests that the same sort of understanding is required for students to grasp the nature of true expertise, for them to be able to distinguish true and false experts, and for them to know the limits of true expertise.
One way to learn these things is to experience them firsthand, in carefully prepared lessons. This is `active learning,' but it is not just any sort of activity associated with learning. It is active learning with a specific purpose. It can replace much of the active learning now going on in the form of laboratory exercises and projects without interfering with the learning of substance.
In addition, students need to learn the geography of expertise. Even those who make it through graduate school are often ignorant of the fact that they are making statements about issues on which someone else is an expert. Psychologists, for example, frequently step into philosophy as if the discipline didn't exist, and economists do the same with psychology. As it is, the secondary curriculum in most countries is a watered-down version of the university curriculum of decades (or centuries) earlier, with no other particular justification. One way to remedy this problem is for universities to work harder, with the help of outside funds, to inform high-school teachers and students of the full range of their activities.
Universities themselves could do more to teach students to understand expertise in fields outside of their own. In the U.S., college students are required to take courses in several different kinds of disciplines. Ideally, such `distribution requirements' should allow students to learn not only about the methods of inference in each discipline they study but also about how to learn the methods of yet other disciplines and how to ask good questions of experts in each discipline. Learning about these things will facilitate teamwork among members of different disciplines. If I am correct, what is central in all of these types of learning are the kinds of evidence used to establish claims in a discipline and the kinds of inferences and criticisms that are made. If students focus on these, they will quickly learn what a discipline is about, even if they know little of its substance (although they must know some of the substance if only to understand the methods of inquiry). Many current courses may actually do well at imparting this sort of knowledge.
I do not mean to imply that an understanding of the different sources of expert knowledge is all that students need. They do need to learn how to do some things on their own, and they need to learn enough so that they do not need to take the time to look up fundamental facts or acquire basic skills after they leave school. But my point here has been to emphasize the rest of what they need, which is an understanding of the way in which knowledge is acquired in the disciplines that are necessary for the operation of the modern world. Conceivably, efforts to teach for such understanding will even increase the acquisition of basic facts, because students will have a framework for integrating them.
In sum, I have tried to provide a rationale for teaching the standards of actively open-minded thinking in terms of learning about the disciplines themselves. This rationale does not depend on the assumption that such thinking is not done enough in daily life. Rather, it depends on the idea that an understanding of thinking is essential to an understanding of scholarship itself, which is what most education is (and should be) about.
References
Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.
Anderson, C. A. (1982). Inoculation and counterexplanation: Debiasing techniques in the perseverance of social theories. Social Cognition, 1, 126-139.
Arkes, H. R., Faust, D., Guilmette, T. J., & Hart, K. (1988). Eliminating the hindsight bias. Journal of Applied Psychology, 73, 305-307.
Arnauld, A. (1964). The art of thinking (Port Royal Logic) (J. Dickoff & P. James, Trans.). Indianapolis: Bobbs-Merrill. (Original work published 1662)
Baron, J. (1985a). Rationality and intelligence. Cambridge University Press.
Baron, J. (1985b). What kinds of intelligence components are fundamental? In S. F. Chipman, J. W. Segal, & R. Glaser (Eds.), Thinking and learning skills. Vol. 2: Research and open questions (pp. 365-390). Hillsdale, NJ: Erlbaum.
Baron, J. (1988). Thinking and deciding. New York: Cambridge University Press.
Baron, J. (1989). Actively open-minded thinking versus myside bias: Causes and effects. Paper presented at the Fourth International Conference on Thinking, San Juan, Puerto Rico.
Baron, J. (1990a). Thinking about consequences. Journal of Moral Education, 19, 77-87.
Baron, J. (1990b). Harmful heuristics and the improvement of thinking. In D. Kuhn (Ed.), Developmental perspectives on teaching and learning thinking skills (pp. 28-47). Basel: Karger.
Baron, J. (1991). Beliefs about thinking. In J. F. Voss, D. N. Perkins, & J. W. Segal (Eds.), Informal reasoning and education (pp. 169-186). Hillsdale, NJ: Erlbaum.
Baron J. (1992). The effect of normative beliefs on anticipated emotions. Manuscript, Department of Psychology, University of Pennsylvania.
Baron, J. & Ritov, I. (1992). Intuitions about penalties and compensation in the context of tort law. Manuscript, Department of Psychology, University of Pennsylvania.
Baron, J. (in press a). Morality and rational choice. Dordrecht: Kluwer.
Baron, J. (in press b). Heuristics and biases in equity judgments: A utilitarian approach. In B. A. Mellers & J. Baron (Eds.), Psychological perspectives on justice: Theory and applications. New York: Cambridge University Press.
Batson, C. D. (1975). Rational processing or rationalization? The effect of disconfirming evidence on a stated religious belief. Journal of Personality and Social Psychology, 32, 176-184.
Bloom, B. S. (Ed.) (1956). Taxonomy of educational objectives, Handbook I: Cognitive domain. New York: David McKay.
Brown, R. G. (1991). Schools of thought: How the politics of literacy shape thinking in the classroom. San Francisco: Jossey Bass.
Bryan, W. L., & Harter, N. (1899). Studies on the telegraphic language. Psychological Review, 6, 345-375.
Ceci, S. J., & Liker, J. K. (1986). A day at the races: A study of IQ, expertise, and cognitive complexity. Journal of Experimental Psychology: General, 115, 255-266.
Charness, N. (1981). Search in chess: Age and skill differences. Journal of Experimental Psychology: Human Perception and Performance, 7, 467-476.
Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics knowledge by experts and novices. Cognitive Science, 5, 121-152.
Churchill, W. S. (1956-1958). A history of the English-speaking peoples. New York: Dodd, Mead.
de Bono, E. (1976). Teaching thinking. London: Penguin.
Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. Boston: Heath.
Frey, D. (1986). Recent research on selective exposure to information. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 19, pp. 41-80). New York: Academic Press.
Ganzach, Y., & Krantz, D. H. (1991). The psychology of moderate prediction. II. Leniency and uncertainty. Organizational Behavior and Human Decision Processes, 48, 169-192.
Herrnstein, R. J., Nickerson, R. S., de Sánchez, M., & Swets, J. A. (1986). Teaching thinking skills. American Psychologist, 41, 1279-1289.
Hoch, S. J. (1985). Counterfactual reasoning and accuracy in predicting personal events. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 719-731.
Hogben, L. T. (1951). Mathematics for the million (3rd ed.). New York: Norton.
Horton, R. (1967). African traditional thought and Western science (pts. 1-2). Africa, 37, 50-71, 155-187.
Horwich, P. (1982). Probability and evidence. Cambridge University Press.
Janis, I. L. (1982). Groupthink: Psychological studies of policy decisions and fiascoes (Rev. ed. of Victims of groupthink: A psychological study of foreign-policy decisions and fiascoes, 1972). Boston: Houghton Mifflin.
Janis, I. L. (1989). Crucial decisions: Leadership in policymaking and crisis management. New York: Free Press.
Jervis, R. (1976). Perception and misperception in international politics. Princeton: Princeton University Press.
Klemp, G. O., Jr., & McClelland, D. C. (1986). What characterizes intelligent functioning among senior managers? In R. J. Sternberg & R. K. Wagner (Eds.), Practical intelligence: Nature and origins of competence in the everyday world (pp. 31-50). New York: Cambridge University Press.
Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107-118.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480-498.
Lakatos, I. (1978). Falsification and the methodology of scientific research programmes. In I. Lakatos, The methodology of scientific research programmes (J. Worrall & G. Currie, Eds.) (pp. 8-101). Cambridge University Press.
Langer, E. J. (1989). Minding matters: The consequences of mindlessness-mindfulness. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (Vol. 22, pp. 137-173). New York: Academic Press.
Larrick, R. P., Morgan, J. N., & Nisbett, R. E. (1990). Teaching the use of cost-benefit reasoning in everyday life. Psychological Science, 1, 362-370.
Lampert, M. (1990). When the problem is not the question and the solution is not the answer: Mathematical knowing and teaching. American Educational Research Journal, 27, 29-63.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098-2109.
Luchins, A. (1942). Mechanization in problem solving: The effect of Einstellung. Psychological Monographs, 54 (6, Whole No. 248).
Machado, L. A. (1980). The right to be intelligent (M. C. Wheeler, Trans.). Oxford: Pergamon.
Margolis, H. (1987). Patterns, thinking, and cognition: A theory of judgment. Chicago: University of Chicago Press.
Nickerson, R. S. (1989). On improving thinking through instruction. Review of Research in Education, 15, 3-57.
Nickerson, R. S., Perkins, D. N., & Smith, E. E. (1985). The teaching of thinking. Hillsdale, NJ: Erlbaum.
Nisbett, R. E., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. Science, 238, 625-631.
Paul, R. W. (1984). Critical thinking: Fundamental to education for a free society. Educational Leadership, 42(1), 4-14.
Perkins, D. N. (1981). The mind's best work. Cambridge, MA: Harvard University Press.
Perkins, D. N. (1986). Knowledge as design: Critical and creative thinking for teachers and learners. Hillsdale, NJ: Erlbaum.
Perkins, D. N. (in press). Reasoning as it is and could be: An empirical perspective. In D. Topping, D. Crowell, & V. Kobayashi (Eds.), Thinking: The third international conference. Hillsdale, NJ: Erlbaum.
Perkins, D. N., Lochhead, J., & Bishop, J. (1987). Thinking: The second international conference. Hillsdale, NJ: Erlbaum.
Popper, K. R. (1962). Conjectures and refutations: The growth of scientific knowledge. New York: Basic Books.
Presseissen, B. Z. (1986). Critical thinking and thinking skills: State of the art definitions and practice in public schools. Paper presented at the annual meeting of the American Educational Research Association, San Francisco. Philadelphia: Research for Better Schools.
Putnam, H. (1981). Reason, truth and history. Cambridge University Press.
Reason, J. T. (1990). Human error. New York: Cambridge University Press.
Roberts, L. (1992). Science in court: A culture clash. Science, 257, 732-736.
Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82, 498-504.
Schrag, F. (1988). Thinking in school and society. New York: Routledge.
Slovic, P., & Fischhoff, B. (1977). On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance, 3, 544-551.
Spranca, M., Minsk, E., & Baron, J. (1991). Omission and commission in judgment and choice. Journal of Experimental Social Psychology, 27, 76-105.
Suedfeld, P., & Tetlock, P. E. (1977). Integrative complexity of communications in international crises. Journal of Conflict Resolution, 21, 169-184.
Swartz, R. J. (1987). Teaching for thinking: A developmental model for the infusion of thinking skills into mainstream instruction. In J. B. Baron & R. J. Sternberg (Eds.), Teaching thinking skills: Theory and practice (pp. 106-126). New York: W. H. Freeman.
Sweller, J., Mawer, R. F., & Ward, M. R. (1983). Development of expertise in mathematical problem solving. Journal of Experimental Psychology: General, 112, 639-661.
Tetlock, P. E. (1983). Cognitive style and political ideology. Journal of Personality and Social Psychology, 45, 118-126.
Tetlock, P. E. (1984). Cognitive style and political belief systems in the British House of Commons. Journal of Personality and Social Psychology, 46, 365-375.
Tetlock, P. E. (1985). A value pluralism model of ideological reasoning. Journal of Personality and Social Psychology, 50, 819-827.
Thorndike, E. L., & Woodworth, R. S. (1901). The influence of improvement in one mental function upon the efficiency of other functions. Psychological Review, 8, 247-261.
Toulmin, S. E. (1958). The uses of argument. Cambridge University Press.
Voss, J. F., Tyler, S. W., & Yengo, L. A. (1983). Individual differences in the solving of social science problems. In R. F. Dillon & R. R. Schmeck (Eds.), Individual differences in cognition (Vol. 1, pp. 205-232). New York: Academic Press.
Wertheimer, M. (1959). Productive thinking (rev. ed.). New York: Harper & Row. (Original work published 1945)
Zapf, D., Brodbeck, F. C., Frese, M., Peters, H., & Prümper, J. (1992). Errors in working with office computers: A first validation of a taxonomy for observed errors in a field setting. Manuscript, Fachbereich Psychologie, Justus-Liebig-Universität Giessen.