The symmetry principle has been a central tenet of the history of science since at least the 1970s, and in my view it is a sound and valuable principle. However it is often confused with principles that are neither sound nor valuable, some of which are positively harmful for the study of past science. For example, the symmetry principle is sometimes expressed as the view that “truth” cannot explain the beliefs of past scientists. My main aim in this series so far has been to show that this view is hopelessly vague, and that on many readings it is false. In this post I will say the same about another aspect of the symmetry principle, namely the claim that historians should explain true and false beliefs “in the same way.” I’ll run through five readings of this claim, only the first of which deserves to be called the symmetry principle.
Truth/falsity is not good evidence for good/bad reasons (Symmetry Principle)
To use an example from my first post in this series, Galileo had a true theory of the moon and a false theory of comets. It is tempting to infer that he reached the moon theory through observation, experiment, and careful reasoning, whereas he reached the comet theory through arrogance, spite, and self-interest. That is, it is tempting to infer from the truth of the moon theory that Galileo held it for good reasons, and to infer from the falsity of the comet theory that he held it for bad reasons.
Most historians of science, including me, think that both of these inferences are flawed. In general, the truth-value of a belief is not a good guide to the motivations of the believer. The inference from the former to the latter is what I call The Fallacy. Let’s not worry about why it is a fallacy, a question that I intend to answer in a later post. The important point is that it is a fallacy, and that the symmetry principle properly understood is simply the assertion that this is so. Hence I will call this assertion the Symmetry Principle (note the capital letters).
Some might object to the Symmetry Principle on the grounds that there is no principled way of distinguishing good reasons from bad reasons. In practice, however, historians of science seem perfectly capable of making this distinction. The evidence for this is that they often criticise other historians for violating the symmetry principle, and to do this they must be able to make the distinction in question. This is true no matter which version of the symmetry principle the critics have in mind, since all of the versions that I know of rely on some variant of the distinction between good and bad reasons.
The Symmetry Principle must be distinguished from the following four claims. All of these claims have masqueraded as symmetry principles at one time or another. All of them look like the Symmetry Principle on the surface, but none of them follow from the Symmetry Principle and none of them should be called symmetry principles, either because they are false or because they have more to do with asymmetry than symmetry.
Truth/falsity is never aligned with good/bad reasons (Exclusion Principle)
The Symmetry Principle is quite a modest, minimalist principle. All it does is ban a certain inference, namely from truth-values to kind-of-reason. Banning an inference is not the same as saying that the conclusions of that inference are always false. Historians, just like scientists, can be right for the wrong reasons. Fallacious arguments can have true conclusions. For example: dead people cannot walk; Socrates cannot walk; therefore Socrates is dead. Or: my nicotine patch company will make more money if smoking causes lung cancer; therefore smoking causes lung cancer.
Yet historians often interpret the symmetry principle to mean that truth-value is never aligned with type-of-reason. On this view, it is always an error to claim, of some true theory T, that a scientist believed T for good reasons and that his critics were misled by their political interests or religious preferences. On this view, Galileo could not possibly have had good reasons for believing his moon theory but bad reasons for believing his comet theory. That option is ruled out a priori (hence my name for this principle, the Exclusion Principle). Galileo must have either had a mixture of reasons for believing both theories, or only bad reasons for believing both, or good reasons for the comet theory and bad reasons for the moon theory.
Unlike the Symmetry Principle, the Exclusion Principle is hard to defend. Granted, there is a sense in which bad reasons, or something like them, are a precondition of all scientific activity. Social relations, for example, are sometimes associated with “bad reasons” for belief, yet it is impossible to do science worthy of the name without forming social relations of some kind.
Here it is important to distinguish preconditions of belief from factors that shape belief, a distinction I borrow from Steven Shapin (although he might not agree with the application I am now making of the distinction). It is a precondition of scientific activity that someone pays for it (for example). But it does not follow that funding source always shapes belief, i.e. that a scientist’s funding source is always a good explanation of why they believe theory X rather than a rival theory.
It is also important to distinguish between absolute claims and relative claims. Probably there are no scientific beliefs that are pure in the sense that no bad reasons enter into their causal history. However it is very plausible that good reasons sometimes dominate over bad reasons. And there is no law of history that says that the good reasons never fall on the side of the true theories.
Historians should never omit the good/bad reasons for false/true theories (Completeness Principle)
The Symmetry Principle and the Exclusion Principle are claims about how historians reason about past science. By contrast, the third principle on my list is about what they should include in their books and articles. The Completeness Principle—the name is my own—states that whenever a historian discusses a true belief he must discuss the bad reasons the scientist had for holding that belief, and not just the good reasons. Conversely, discussions of false theories are lamentably incomplete if they omit good reasons in favour of bad ones. In other words, it is not enough for a historian to believe that truth/falsity is rarely aligned with good/bad reasons. He must also structure his narratives around this belief.
It seems to me that the Completeness Principle is a bad idea, and it would be a bad idea even if the Exclusion Principle were true. The worry that lies behind the Completeness Principle is that books and articles might give misleadingly one-sided accounts of past science. There is something in this worry—after all, what’s the point of believing the Symmetry Principle if that belief is never reflected in our written work? Nevertheless, this worry should not lead to a blanket ban on narratives that are one-sided in the way I have just described.
Such a ban would be like a ban on narratives that consider French science but exclude German science, or a ban on those that discuss botany but not mineralogy. When we read books that only cover French cases, or that only cover botany, the reasonable response is usually to assume that the author has focused on these cases out of convenience, or because that is what they know. We do not usually assume that the author denies the importance of German science, or of mineralogy.
Likewise, if an author writes a book about Lavoisier’s biochemistry (for example) and says nothing about the French Revolution, we should not assume that the author denies the relevance of the French Revolution to Lavoisier’s biochemistry. Perhaps the French Revolution simply has little bearing on the argument the author is trying to make about Lavoisier’s biochemistry. Or perhaps the author omitted social and political factors in order to zoom in on Lavoisier’s attitude towards measurement.
Historians should systematically omit good reasons in order to focus on the others (Methodological Relativism)
Whereas the Completeness Principle says that historians should report good and bad reasons equally, this principle licenses them to omit good reasons. According to this principle, it is sometimes valuable for historians of science to ignore the experiments that scientists performed, the data they collected, or the chains of reasoning they articulated in order to defend one theory against another. These omissions are valuable, I take it, because they allow the historian to home in on the social and political reasons for the beliefs that past scientists endorsed.
I have no objection to Methodological Relativism as an occasional heuristic. Just as some authors may want to ignore the French Revolution in favour of Lavoisier’s experimental practice, others may want to do the reverse. However I agree with Will Thomas that Methodological Relativism is a recipe for disaster if it is read as a general rule of historical method. And this is how it tends to be read, in my experience, especially when it is conflated with the Symmetry Principle, which is indeed a general rule of historical method.
Even if Methodological Relativism were a viable general principle, it would be misleading to call it a symmetry principle. It may be symmetric with respect to true and false theories, treating both in the same way, but it is asymmetric with respect to good and bad reasons, since it focuses on the latter at the expense of the former.
Bad reasons are the dominant causes of both true and false beliefs (Social Constructivism)
This claim, like the previous one, is asymmetric with respect to good and bad reasons, since it gives priority to bad reasons. Unlike the previous claim, it does not just say that our narratives of past science should be written asymmetrically. Instead it says that past science really was asymmetric. This claim accepts that good reasons were at work in past science, but insists that they were less powerful or less decisive or less fundamental than the bad reasons.
I hesitate to include this claim on my list, since people do not usually call it a symmetry principle and because the phrase “social constructivism” is such a fraught one. I include the claim anyway because it looks a bit like some of the other claims on this list, and because readers might wonder where it fits into this survey. I’ve plumped for “social constructivism” as a label for this doctrine because, for all the ambiguity of the name, it is usually applied to the claim that one kind of cause (whether we call these “social factors” or “bad reasons” or something similar) makes a larger contribution to scientific beliefs than another kind of cause (“epistemic factors” or “good reasons” or something similar).
Another reason to mention Social Constructivism here is that it is often the end of a chain of reasoning that begins with the Symmetry Principle. For example, a common gambit is to observe that there are good reasons on both sides of many scientific debates, and to infer from this that good reasons cannot explain why people take the sides they do. After all, the argument goes, how can a commonality explain a difference? Needless to say, I do not endorse this argument or any of the others that lead from a laudable symmetry to a paralysing asymmetry.
To describe these arguments would require another post, and this series is already too long for that. I’ll simply conclude that there are several different claims that have been called symmetry principles in the last few decades, that several of these claims are not worthy of that name, and that we should not confuse those imposters with what I have called the Symmetry Principle. In case you missed it, the Symmetry Principle is made up of the twin claims that the truth of a belief is not good evidence that the believer had good reasons for holding it, and that the falsity of a belief is not good evidence that the believer had bad reasons for holding it.
The burden of my next post is to say why the Symmetry Principle is a good principle. This task is harder than it might seem, as we shall see.
Unfortunately, I no longer translate every post I write on this blog. If you would like to read the posts that have been translated, please click here.
Welcome back, Michael - I think this is a very useful delineation of historiographical strategies. I have a couple of thoughts:
1) I think your average relativist would reject, as a methodological tenet, the possibility of delineating "good" reasons, since goodness would be taken to be a contested category. Thus the methodological relativist would not omit good reasons, so much as focus in on those places where goodness was explicitly contested. This could occur at the level of the quality of instrument or technical argument, or the integrity of the experimenter, or the entire research program she or he represents. Although such relativism is intended to flesh out various hidden preconditions for agreement, you are right that it does tend to exclude reasons that are not contested, either because they occur at a deep level of argument that is excluded because criticism is leveled at a more fundamental/programmatic level; or because there is broad agreement about the validity of the reason.
Otto Sibum's work on Joule's experimental technique is an excellent example of work that detects reasons that would not show up in a relativist account because they were not explicitly contested.
2) I think it is important to recognize that the legitimacy of asymmetric approaches parading under the banner of symmetry (and the preponderance of "bad" reasons in such approaches) is often premised on the idea that they are intended to be read against a "received" or "scientists'" view, or an "official history," all supposedly dominated by "good" reasons (probably defined in a presentist frame). See my posts on "Kuhn's Demon" and Malcolm Ashmore's radically corrective account of the N-rays episode.
Hi Will, thanks for replying with your usual alacrity.
1) I hesitated a lot before deciding to use the terms "good" and "bad" rather than the more common "social" and "cognitive." This is not just a terminological matter, since there can be bad cognitive reasons and good social reasons for believing something. Examples of the former can be found in the first paragraph on the Exclusion Principle in the post; an example of the latter might be to believe a theory in climate science because most people with PhDs in climate science endorse the theory.
The reason I chose the good/bad distinction is because I believe it is the distinction that people have in mind when they commit The Fallacy. Why are we tempted to believe that Galileo's moon theory was caused by observations and experiments rather than by self-interest? Is it because we consider self-interest to be a social matter rather than a cognitive matter? Or is it because we consider self-interest to be a bad reason for holding a belief, i.e. a reason that is going to lead to lots of false beliefs in the long run? Speaking for myself, I think the latter is the deeper answer. Self-interest is indeed a social matter, but it is not because it is social that we are tempted to commit The Fallacy, but rather because it is a bad way to form beliefs.
As you point out, the problem with using the terms "good" and "bad" is that they wear their normativity on their sleeve. However I don't think this should prevent the relativist from endorsing the Symmetry Principle as I have stated it. The relativist could just read "good" as "good relative to the 21st century," or "good relative to the historian in question."
You are right that the methodological relativist would not entirely exclude good reasons from their account. However the usual move is to identify reasons that are good (relative to us) on both sides of a past debate, and then use this observation to justify a social explanation of the outcome of the debate. So good reasons are present, but they are "dominated" by the bad reasons in the explanations that the methodological relativist gives.
I should add that I have been using the phrases "good reasons" and "bad reasons" as short-hand for the phrases "good epistemic reasons" and "bad epistemic reasons." If you want to hold true beliefs, then self-interest is a bad reason to hold a belief. But if you want to make money or gain power, then self-interest may be a good reason to hold certain beliefs. My terms are not meant to imply that the pursuit of truth is morally superior to the pursuit of self-interest.
2) Agreed: one route to asymmetry is the desire to correct the opposite asymmetry and thereby restore symmetry overall. This is a good route to follow, I think, as long as it is clear how much of the old account is being rejected in the revision. The problem is that often this is far from clear, as you discussed here: http://etherwave.wordpress.com/2011/03/20/shapiro-vs-schaffer-on-newtons-prism-experiments-pt-1/
But wait: in the above post I implied that charitable reading, not clear writing, was key. Were we too hard on Schaffer's account of the reception of Newton's prism experiments? Should we be more charitable, and assume that he was trying to complement the received account of those experiments rather than replace it? More generally, when should the author specify the scope of their arguments and when can they reasonably leave this task to their readers?
I don't have a complete answer to these questions, but I think two factors are important: the writer's purpose and the reader's interests.
If the main point of your book/article is to draw attention to factor X, then it would be natural to say something about the relative importance of factors X, Y and Z (if only to say that their relative importance is hard to gauge).
Suppose that highlighting factor X is not the main point of the work, but that the work happens to discuss factor X. Suppose also that the question of the relative importance of X, Y and Z is a controversial issue that is high on the discipline's agenda at the time of writing, such that most readers will be interested in what the author has to say on the topic. In such a case it would also be natural for the author to say something about it (if only to say that he considers the topic an over-hyped one about which he has nothing new to say).
I'd have to read Schaffer's paper again to work out whether he falls foul of one of these requirements. But it's more important to state these requirements (supposing that they are reasonable requirements) than to work out whether this or that author violates them in this or that paper.
Thanks for your replies, Michael. Your response to my (2) reminds me again of my feeling that historiographical problems often cast as turning on deep epistemological problems often can be recast as meat-and-potatoes problems in the craft of history-writing: how do you engage with an existing literature, for example. While it is tempting to blame confusion concerning an author's intent on bad writing, I tend to think that maintaining plausible deniability plays an unfortunately large role in legitimizing radical-sounding claims and a historiography whose chief accomplishments are usually framed only in the vaguest terms. Which is too bad, because as we agree, the historiography has accomplished numerous useful things.
DeleteWe've been over these points before, I believe, but it's always nice to rehash how they call connect.
A loose note: the feeling you describe in your second sentence is one that many historians are having about their subject matter, ie. we feel that "scientific debates often cast as turning on deep epistemological problems often can be recast as meat-and-potatoes problems in the craft of science." A cause for optimism, perhaps? After all, those who appreciate the point about past science should be able to appreciate your point about recent historiography.
Spotting the typo in the last paragraph in my last post is an exercise that I will leave to the reader.
In keeping with the multipart nature of the series, this is the first of my four comments.
Since it's been a while, I just reread the whole series (comments included). It all started with an interview that Simon Schaffer gave CBC. In this comment, I just want to transcribe a bit of what he said.
Knowledge is an institution, and it should be analyzed as such. .... That meant, for example, that it was extremely unpromising, to put it mildly, to suppose that social principles are only acting when folks get things wrong. So for example, it didn’t look remotely plausible to say that Isaac Newton thought that there was an inverse square law of gravity acting instantly at a distance through empty space between the centers of distant bodies because there is an inverse square law acting instantly from the center of one body to another through empty space, and Leibniz disagreed because he was German.
That's to explain the truth one way, that's to say to explain what we think is so one way, and to explain what we don't think is so a completely different way. As though there are these things called social forces which wreck our ability to see how things are. What we learned was that there are social institutions at work to produce what we know. And indeed to produce what anybody claims to know, at any particular period. ... It seemed to us, and it still seems to me, that people in social groups build their knowledge like they build other institutions, and you should analyze how they do that the way you analyze the institutions people build. ... That meant therefore that it would be a great idea, a really really good idea, to look at controversies both in the present and in the past. ...To use a phrase from I think one of the most important sociologists of scientific knowledge, Harry Collins, you could see how the ship gets in the bottle.
Note the seamless transition from a version of the symmetry principle to social constructivism. Also note Schaffer's example: he accuses an unnamed (or generic) historian of claiming that Newton believed his law of gravity because it was true.
Finally, our frequent participant Jim Grozier has an interesting post up on Harry Collins: Harry Collins and Tacit Knowledge.
Complete Schaffer interview: How To Think About Science, Part 1.
The issue of Relativism vs. Truth reared its head earlier in this series, and you proposed a way to avoid its seductive but dangerous entanglements: talk about the reasons people believed things: evidence vs. social and psychological factors.
I paraphrase, and I wonder if some standard terminology holds sway in the field. Cognitive or epistemic reasons? Internal vs. external? In this post, at least, you adopt the dichotomy Good vs. Bad. Obviously confusion could run rampant, and careful distinctions should be drawn (and quartered), much like you did for "fact".
To kick-start the dissection, let me ask a few questions.
Does evidence (or good reasons) include everything regarded as legitimate by the historical figures? I have in mind, specifically, scriptural and theological arguments. Galileo and his opponents traded them, as did Newton and Leibniz, and I believe (but am not sure) that they midwifed D'Alembert's principle of least action.
In contrast, we have such factors as currying favor, personal dislike, egotism, wishful thinking.
This suggests a touchstone for the distinction: "evidence" is anything explicitly mentioned by the participants. The records of the Galileo affair, for example, nowhere include a notation, "Hey, the Counter Reformation is in full swing, we don't have time for his crap, squelch him."
I suspect this "touchstone" won't hold up under severe analysis, but watching it disintegrate could prove instructive.
A post by Thony (Gopnik, Galileo and Ed Yong: Galileo not admitting to being wrong) adds another wrinkle. Thony uses the words "bamboozled" and "deceptive" about Galileo's tidal theory, raising the question: did Galileo even believe it himself? Perhaps he just needed proof of heliocentricity, and thought his opponents were easy marks. Should we count as "evidence" all arguments the protagonists made, even if we suspect they were arguing in bad faith?
Thus, my preliminary classifications of reasons:
1. Arguments made explicitly at the time, regarded as legitimate today (e.g., Newtonian predictions of celestial observations). Subcategories: good faith, bad faith.
2. Arguments made explicitly at the time, no longer regarded as legitimate in the scientific community (e.g., scriptural evidence). Subcategories: good faith, bad faith.
3. Social factors, not explicitly mentioned, that may have consciously or unconsciously influenced a protagonist (e.g., desire for a job).
4. Psychological factors, not explicitly mentioned, that may have consciously or unconsciously influenced a protagonist (e.g., personal dislike).
Weird thing with the commenting system today, all my posts seem to be doubled...
Arggh! At the end of your post, you wrote: "In case you missed it...", and whaddya know, I missed it! And wrote a whole comment based on that. Still, I think some of what I wrote might still contribute to the discussion, so herewith the corrected comment.
Somewhere about the midpoint of this series, I asked for a definition of the Symmetry Principle. We've seen a few hints of one, along the way:
1. Schaffer's interview offers a precis of The Fallacy: "That's to explain the truth one way, that's to say to explain what we think is so one way, and to explain what we don't think is so a completely different way", and a prescription for its cure: "it still seems to me, that people in social groups build their knowledge like they build other institutions, and you should analyze how they do that the way you analyze the institutions people build."
2. I offered this formulation (called symm-2): "In examining any historical debate, we should ignore present-day views on which side was right, and apply equal standards to the evidence and motivations we ascribe to each side."
3. In "How (not) to bring STS to the masses", you suggested this version: "In general, the truth-value of a past scientist's belief is a poor guide to the reasons they had for holding that belief."
4. We have a subheading in this post: "Truth/falsity is not good evidence for good/bad reasons (Symmetry Principle)".
5. Under the subheading (4), we have an echo of (3): "the truth-value of a belief is not a good guide to the motivations of the believer." Removing the word "not" then produces The Fallacy, and the recognition that The Fallacy is fallacious constitutes the Symmetry Principle.
6. And finally, the Official Definition: "the twin claims that the truth of a belief is not good evidence that the believer had bad reasons for holding it, and that the falsity of a belief is not good evidence that the believer had good reasons for holding it."
I have to wonder if you really meant to write (6) as it stands. Perhaps "good" and "bad" should be interchanged somewhere? (3)-(5) all urge us to guard against ascribing only good reasons to "correct" theories and only bad reasons to "incorrect" theories. (6) seems to do the opposite.
Hi Michael,
Thanks for these probing replies. And thanks for curating your madly replicating posts (I have no idea what went wrong, but experience shows that this sort of glitch tends to vanish as abruptly as it appears).
1) The Schaffer quote makes a good endnote to this post. I don't want to put too much weight on the interview, since I am squeamish about holding anyone to what they say in an off-the-cuff answer to a journalist. Still, as per my first post in this series (http://bit.ly/1uN9QDL), I do not think any historians of science have committed exactly the error that Schaffer seems to attribute to them in the first paragraph you quote.
The second paragraph is another matter. Is Schaffer saying that social institutions are more important than cognitive factors in explaining how controversies pan out? Or is he just saying that these two kinds of explanation are on a par? And is he talking about social institutions as preconditions of scientists holding any theories at all, or rather as forces that tip the balance in favour of theory X rather than its rival (as per the distinction I mentioned under the Exclusion Principle in the above post)? Without an answer to those questions, I don't know whether the paragraph is an example of Social Constructivism or not.
2) Yes, the "good/bad reasons" distinction opens up a can of worms. My defence is that anyone who endorses the symmetry principle--in whatever form--has to deal with the same writhing mess of complications. And I don't know any historians of science who completely reject the symmetry principle. So we're all in the same boat (or can).
Having said that, your classification of reasons is a big help. No, there is not really a standard terminology in the field; but the good/bad distinction is truly idiosyncratic, and I would not use it if I did not think it were tied to the error that the Symmetry Principle protects us against. By that criterion, only type 1 would qualify as "good" reasons; and of those, only the arguments made in good faith. Only these reasons are such that we are tempted to invoke them to explain true beliefs. Why? Because we feel that they are the kind of reasons that will generate many true beliefs in the long run.
3) You're right, I shot myself in the foot with the Official Definition. I've changed the text in the post so that it says what I mean, rather than the opposite of what I mean.
You may have sensed my indecision over the words "guide" and "evidence." I began with the former, but now I prefer the latter. The reason is that I want to leave open the possibility that truth-value is, in fact, a good guide to motivation. That is, it may be that there is a strong historical association between true beliefs and good reasons, and between false beliefs and bad reasons. Even if this is so, however, we do not *know* that it is so. And as long as we do not know, we cannot use truth as evidence for good reasons, or falsity as evidence for bad reasons. So the advantage of using "evidence" rather than "guide" is that it is strong enough to make the key point but no stronger.
Thanks for your reply.
Good/bad reasons: I confess to a touch of uneasiness at drawing the line between (1) and (2), because of the whiff of presentism. Perhaps you can elaborate in your next post.
What Schaffer meant: My knowledge of Schaffer's work comes mostly from Ether Wave Propaganda, so I'll wait for Will to weigh in.
On good and bad reasons: yes, it would be presentist to express judgements like (1) in a historical book or article. But the point of the Symmetry Principle is precisely that such judgements *shouldn't* inform our books and articles. Or at least, they should not guide our decisions about what kind of explanation we apply to past beliefs. (I have argued in earlier posts in this series that those judgements can legitimately enter our historical explanations in other ways).