Disagreement

Interview by Richard Marshall

'Many disagreements—certainly among philosophers—involve people who seem equally well-informed, equally intelligent, equally hard-working and serious, and so on. In other words, we disagree with people who seem just as well-placed as we are to form accurate beliefs on the relevant topic. Assuming we’re disagreeing about a matter that has right and wrong answers, one of us has gotten it wrong. So the question then arises: is it rational for me to believe that it’s the other person who’s wrong?'

'The fundamental motivation behind Conciliationism is not about casting doubt on my rationality. I think it’s about casting doubt on my reliability.'

'I think that one of the most interesting things about disagreement is that it's an example of what has come to be called "higher-order evidence". The term has been understood in several incompatible ways, but I'd now characterize it as evidence that bears on my attitude toward some claim via bearing on the reliability of my thinking about that claim.'

'What is the rational doxastic reaction, say, to getting excellent evidence that I’ve ingested a drug that’s been shown to make people have wildly inaccurate beliefs about everything, including beliefs about the best way to avoid drug-caused inaccurate beliefs? It beats me.'

'I can honestly say that I am significantly less confident in Conciliationism than I would be if all the excellent epistemologists endorsed it. But—just between you and me—I suspect that I’m a bit more confident in Conciliationism than Conciliationism says I should be. Don’t tell anyone!'

David Christensen's main research interests are in epistemology (formal and informal). Here he discusses the epistemology of disagreement, Steadfasters vs Conciliationists, whether disagreement is evidence, conciliationism and uniqueness, epistemic peers and credentials, formulating the independence principle, whether independence is possible, epistemic modesty, blindspots, self-undermining, akrasia, Deductive Purism, real world disputes and JS Mill, and Bayesian agents.

3:16:  What made you become a philosopher?

David Christensen: I majored in philosophy at Hampshire College just because I found some of the philosophy courses so much more exciting than anything else I studied. Near the end, I went to my advisor and said, “Well, I guess I should start figuring out which law schools to apply to.” I think he must have been one of those people who hate lawyers, because he told me “You don’t want to be a lawyer! You should either go to graduate school in philosophy, or go to business school.”

For reasons I still don’t understand, I just believed him. So I went to the college library, and checked out catalogues for business schools and philosophy grad schools. The business school courses looked deadly to me, so philosophy grad school it was…

3:16: You're interested in the epistemology of disagreement. To begin with, could you sketch for us the issues involved in this and, when talking about belief-revision, what Conciliationists and Steadfasters say?

DC: The questions start with a simple observation: that most of us place high confidence in opinions that are denied by other people. Of course, if I disagree with a kindergartener, or even an adult who seems clearly less well-informed than I am, that’s not a big deal. But many disagreements—certainly among philosophers—involve people who seem equally well-informed, equally intelligent, equally hard-working and serious, and so on. In other words, we disagree with people who seem just as well-placed as we are to form accurate beliefs on the relevant topic. Assuming we’re disagreeing about a matter that has right and wrong answers, one of us has gotten it wrong. So the question then arises: is it rational for me to believe that it’s the other person who’s wrong? Conciliationists tend to say, in many such cases, that the answer is “no”, so I’m rationally required to revise my belief (adopting a lower credence, or perhaps suspending belief). Steadfasters tend to say, in many such cases, that I can maintain my confidence.

Of course, this is vastly simplified. There are lots of complications, and there’s really a spectrum of views ranging from very Conciliatory to very Steadfast. But I think that the key thing that separates different positions on the spectrum is the attitude they take to one fundamental question: Is it rational for me to dismiss disagreement on the basis not of general facts about the other person (e.g., they’re a kindergartener), but on the basis of the reasoning behind my original opinion on the disputed matter? Conciliationists tend to say “no”. Steadfasters tend to say (at least in the case where my original reasoning was good) “yes”.

And just to lay my cards on the table, I’m a Conciliationist.

3:16:  Does disagreement count as evidence? And what's this got to do with 'higher-order evidence'?

DC: Well, I think it does count as evidence. But I have to admit that in some cases, it’s a weird kind of evidence. So, think of a case where I begin with some empirical premises and reason my way deductively to a conclusion C. Then my colleague Josh, who’s much better at logic than I am, tells me that while he agrees with my premises, he rejects C. I think I should lose confidence in C. So I want to say that Josh’s opinion is evidence about C, in the sense that it’s information that bears on how confident I may rationally be in C.

But one might protest: how can some fact about this guy's psychology have any bearing on whether my premises entail C? If C really does follow from my premises, isn't Josh's opinion just irrelevant? And there's surely something right about this protest. So I think that one of the most interesting things about disagreement is that it's an example of what has come to be called "higher-order evidence". The term has been understood in several incompatible ways, but I'd now characterize it as evidence that bears on my attitude toward some claim via bearing on the reliability of my thinking about that claim. It's a weird, indirect sort of evidence.

3:16:  How close is the relationship between conciliationism and uniqueness principles?

DC: This is something I’ve completely changed my mind on. I used to think they were intimately related, but I now think they’re pretty much separate issues.

Here's why some people (including my former self) have thought they are connected: Uniqueness principles say that there's only one maximally rational response to a given batch of total evidence; permissivism denies uniqueness. If uniqueness is true, then the disagreement of someone who shares my evidence means that one of us has not evaluated the evidence rationally. So if they seem just as likely to evaluate the evidence rationally as I am, their disagreement gives me reason to doubt that my original reaction to the evidence was rational. On the other hand, if permissivism is true, then their disagreement doesn't give me as strong a reason to doubt that my original reaction was rational—maybe both of our reactions were rational! Now, if you see Conciliationism as essentially motivated by the thought that disagreement casts doubt on the rationality of my original opinion, denying uniqueness would be a way of at least weakening the pressure to conciliate.

However: I now think that the fundamental motivation behind Conciliationism is not about casting doubt on my rationality. I think it’s about casting doubt on my reliability. We can see how these might come apart if we suppose that rationality is very permissive. Suppose my friend and I are meteorologists, equally educated, and with access to all the same data. I become confident that it’ll rain tomorrow, then I find out that she’s confident that it won’t. I’m wondering how to react, when the Epistemology Oracle appears unto me. (This may seem fanciful, but I learned of the Oracle’s existence from Roger White.) I ask her who’s interpreting the data right. She replies, “Well, you know I’m not the Weather Oracle, so I can’t tell you whether it’ll rain. But I can tell you this: as usual, both of your initial credences were perfectly rational responses to the evidence. As I tried to explain to you last time, rationality is quite permissive!” Now suppose that while my friend and I usually agree in our forecasts, on the past occasions where we’ve disagreed, she has just as good a track-record of weather prediction accuracy as I do—or even suppose that her record is much better. It seems clear that her disagreement gives me reason to be less confident that it’ll rain, despite wide permissivism being true, and despite the fact that I’m not worried at all about the rationality of my original forecast.

3:16:  So judging the epistemic credentials of the person or people you’re disagreeing with is going to be important here. How is one supposed to do this – how do you decide whether someone is your epistemic peer? Do you think there’s a need for an independent assessment of the other person’s credentials?

DC: This is a key issue. It needn’t center on peerhood, since there’s a whole spectrum of possible people one might disagree with, from kindergarteners to world-class experts. But assessing where someone is on that spectrum is going to be key. Clearly things like track-record, intelligence, intellectual honesty, familiarity with relevant evidence and arguments, and so on are relevant. In deciding how seriously to take disagreement on a particular occasion, situational factors are also relevant: Is the other person drunk? Am I? Is one of us particularly likely to be biased? All these things seem to bear on the likelihood of getting the right answer on the disputed topic.

But there’s one sort of situational factor that’s particularly vexed. Suppose my friend and I share the ordinary P-relevant evidence, but I believe P and my friend disagrees. Can I use the fact that the evidence actually supports P to support thinking that my friend is the one who made a mistake this time, even if the other factors (track record, intelligence, alcohol consumption, and so on) would not suggest that I’m more likely to be right?

This is where Conciliationists tend to insist on some form of Independence principle—roughly, a principle saying that my assessment of the relative likelihoods of my friend and me getting P right must be independent of my reasoning from my ordinary evidence to P. Steadfasters tend to reject that idea.

3:16:  Is it possible to formulate an independence principle?

DC: Honestly, I hope so, but I’m not sure—at least if we insist on a reasonably precise, general formulation. The one I just gave is terribly vague, and has other problems, too, not the least of which is that it only applies to disagreement cases. I think that the right independence principle has to be central to understanding the rational response to higher-order evidence in general.

Here's an example: Suppose I'm flying an unpressurized plane, and have just reasoned, from the information on my instruments, to the conclusion that I have enough fuel to reach a certain airport a bit farther on than my original destination. Then I notice that my altimeter indicates that I'm at an altitude where there's serious risk of hypoxia (oxygen-deprivation). The insidious thing about hypoxia is that it degrades complex thinking—like the thinking I did about having enough fuel—while leaving its victims feeling totally clear-headed. So let's ask: would it be rational for me to rely on my reasoning about the fuel to put to rest my fear that hypoxia messed up that very thinking? I don't think so. It would seem irrational for me to say to myself, "Well, given these instrument readings, I do have enough fuel. And that's the conclusion I originally reached. So I guess hypoxia hasn't interfered with my accuracy in thinking about this matter—there's no reason to worry about my fuel levels!" But what's wrong with that reasoning? After all, we can suppose that the instrument readings actually do support my original conclusion! I think that some sort of independence principle is needed here as well, to disallow a certain kind of question-begging reliance on a train of reasoning to dismiss evidence that I may be unreliable in that very reasoning.

That said, it turns out to be very difficult to make this idea precise. I’ve tried pretty hard, and I think I’ve made some progress, but I do not have a formulation that I am happy with. So I think that we do need an independence principle to really understand rational responses to self-doubt. But I guess that’s not much of an argument that a decent formulation is possible!

3:16:  If we really shouldn't judge our own cases – hypoxia may be affecting me, say – then what use is an independence principle no matter how well formulated? Somewhere along the reasoning line I'm going to have to trust myself – for example, that I believe I'm getting independent assessments bearing on my judgment – and so independence is impossible?

DC: You’re putting your finger on one of the hardest challenges for working out an independence-based theory of higher-order evidence. But I wouldn’t put it by saying we shouldn’t judge our own cases. I think we’re forced to act as judge in our own cases: rationality requires me to take into account evidence that my own thinking has been compromised. And unfortunately, I have to do that by using my own thinking—which is what makes the whole thing so interesting, and difficult. Independence principles come out of this predicament: they’re intended to describe a way of judging one’s own case without simply begging the question in one’s own favor.

But the really hard part of your question is this: what if I get evidence that hypoxia has not only completely messed up my detailed thinking about how the instrument-readings bear on whether I have enough fuel, but that it would also mess up any attempt I might make to cope with my own impairment (such as concluding, “Well, I might be hypoxic, so I’d better not trust those calculations I just did; I’ll stick to my original flight plan and not head for the further airport.”)? As you point out, I can’t assess my reliability in a way that’s independent of all of my thinking!

So, yes, I am going to have to trust my own reasoning. I think we are rational, by default, to trust our own reasoning. We can do this when we have no special reason to doubt our reliability. And we can trust some of our reasoning in order to assess the reliability of other reasoning, in response to evidence throwing “local” doubt on the reliability of that other reasoning. (So suppose we take hypoxia to threaten the reliability of complex fuel-level calculations, but not the reliability of the much simpler sort of thought described above leading to the conclusion that I ought to stick with my original flight plan. Or consider that my ability to understand the implications of my friend’s good meteorological track-record is not put in doubt by her disagreeing with me about whether it’ll rain tomorrow.) This is how independence principles get purchase.

But what about cases where higher-order evidence threatens all of my reasoning? Well, independence principles aren’t going to help us out there! I would point out that it’s not at all clear what result we’d even want our theory of rationality to deliver in this sort of case. What is the rational doxastic reaction, say, to getting excellent evidence that I’ve ingested a drug that’s been shown to make people have wildly inaccurate beliefs about everything, including beliefs about the best way to avoid drug-caused inaccurate beliefs? It beats me.

I take some comfort in that thought: maybe independence principles shouldn't be expected to deliver solutions to paradoxical cases. However, I should also point out that "local" and "global" reasons for doubt are obviously on a spectrum. And dealing with this fact poses one of the hardest obstacles to formulating a decently general and precise independence principle.

3:16: If I know something, why shouldn’t I hold on to believing it no matter what my peers think? Wouldn’t epistemic modesty erode knowledge, and why would that count as a rational epistemic strategy?

DC: Disagreement aside, it seems that in general, people should give up a belief when they get sufficiently strong new evidence against it. Sometimes, this will lead someone to give up a true belief. (The new evidence in such a case would be misleading. But misleading evidence is precisely evidence which leads rational believers away from the truth.) I think this applies just as much if the original belief constituted knowledge. So in general, misleading evidence can destroy knowledge; but this is not a reason to think that respecting one's evidence is not a rational strategy. I'd say the same when the evidence happens to be disagreement evidence.

Now it’s also true that following a policy of epistemic modesty—say, following Conciliationism—would erode belief in many cases. (I would not say that following Conciliationism would generally erode knowledge, since I’d say that even the Steadfaster, who maintains belief after learning of disagreement, no longer has knowledge.) But I think you’re right in seeing a real worry that practicing epistemic modesty leads to being less opinionated. When the lost opinions were accurate, this is unfortunate. Of course, when they were inaccurate, losing them is a good thing. (If I’m on a plane, I certainly want my pilot to exercise epistemic modesty in dealing with evidence of hypoxia. And I want my doctor to react modestly to disagreement of equally-experienced—or more-experienced—colleagues about what treatment will be good for me.)

But it might also be true that there's something good—even from the point of view of discovering truths—in having a lot of opinionated enquirers. (Maybe they will be more motivated to do research, for example.) Insofar as that's true, widespread adoption of Steadfast policies could have some epistemic advantages over adoption of modest policies. But I don't think there's anything incoherent in the suggestion that certain policies that involve believing irrationally could be expected to lead, in an indirect way, to acquiring more true beliefs. (Suppose a psychologist tells me that if I believe I'm God's Gift to Philosophy, I'll work much harder, write more papers, and make fewer errors in my work. It might be that if I could manage to get myself to believe this, I'd reap considerable epistemic benefits. But my belief about myself would be epistemically irrational for all that.)

3:16:  In your discussion about these issues with Roy Sorensen he raises the issue that there are epistemic blindspots where not everyone in a dispute can have equal access to the truth. In these cases there'll be disagreement but it seems completely rational not to change one's belief even with a peer in such a situation, doesn't it? For example, you can know that I'm modest but I can't (if knowing that about myself is immodest!). Doesn't that disagreement between us show that rational disagreement can be tolerated and doesn't automatically lead to downgrading beliefs?

DC: I do think that rational disagreement can sometimes be rationally tolerated without losing confidence. But if we think about peerhood in terms of expected accuracy, I don’t think that the blindspot example throws special doubt on Conciliationism.

Suppose we begin by supposing that I know you pretty well, and think you’re modest. You disagree. And let’s suppose that I rationally believe that people cannot tell when they themselves are modest, so that I have a kind of access to the truth about your modesty that you don’t have. In that case, I would not consider you and me to be peers on the question of whether you are modest. In general, where I have good reason to think that my friend has less good access to the truth, I need not conciliate (at least fully)—even if my friend’s poorer access to the truth does not impugn their rationality at all. So I don’t think I’d need to conciliate with you, even given Conciliationism.

The next question is this: Suppose you, too, rationally believe that I’m in a better position than you are to assess whether you’re modest. Would you need to conciliate with me—or even adopt my belief, since you acknowledge that I’m better-positioned to pronounce on the issue? Well, that’s not so easy, if modesty is a blindspot in the sense that it’s irrational for anyone to believe of themselves that they’re modest. If you don’t conciliate, you’re ignoring the beliefs of someone you acknowledge to be better positioned to get at the truth. If you adopt my belief on the basis of my expertise, then (assuming the blindspot view), you’re believing irrationally anyway. You’re bound to violate some rational ideal.

So why don’t I think that’s a special problem? Because I think that any view which takes higher-order evidence seriously will run into this kind of situation all the time. We’ve already seen a case where my conciliating with Josh required me to be less confident in the conclusion of a valid argument than I am in the conjunction of its premises. I think that, too, puts me in violation of a rational ideal. And failing to conciliate would put me in violation of a different ideal.

3:16: Presumably you believe in your solution to epistemic disagreement issues – but many of your peers working in the same domain disagree with you. On your own ground, shouldn’t you stop believing?

DC: Well, um, yes: given the excellent philosophers who reject Conciliationism, I think it’s clear that Conciliationism entails that I should not be highly confident in Conciliationism. This is a particular kind of “self-undermining”—roughly, Conciliationism forbids belief in itself in certain circumstances. And it does seem embarrassing for defenders of Conciliationism, at least on the surface.

I can honestly say that I am significantly less confident in Conciliationism than I would be if all the excellent epistemologists endorsed it. But—just between you and me—I suspect that I’m a bit more confident in Conciliationism than Conciliationism says I should be. Don’t tell anyone!

3:16:  Why do you argue that the sort of self-undermining that characterizes conciliatory views affects plausible epistemic principles and so isn’t a defect?

DC: I think that potential self-undermining comes with the territory of taking higher-order evidence seriously. If you think up any sensible principle that requires loss of confidence in situations involving higher-order evidence, there will be some possible situation where one gets higher-order evidence that, according to the principle, requires loss of confidence in the principle itself. So I think it’s implausible that the mere fact that a principle would self-undermine (in this particular sense) in some possible situations means the principle is false. For an example, consider a principle we might call Minimal Humility:

MH: If I've thought casually about P for a few minutes while high, and come to believe it, and I then find out that 100 people—people who are smarter and more familiar with the relevant evidence, and who don't seem impaired in any way—have thought long and hard about P, and have unanimously and independently come to reject it, then I am not rational to keep believing P.

I think that MH is highly plausible. But it, too, will obviously self-undermine in certain circumstances. So the fact that a principle will self-undermine in some circumstances doesn’t entail that it is false.

Of course, that does not mean that the current disagreement about Conciliationism is not reason to doubt it—it absolutely is reason to doubt it. The point is just that the potentially self-undermining aspect of Conciliationism doesn't render it false. And I'd also say that every philosophical position is such that expert disagreement about it is reason to doubt it. Being controversial is not a defect in a view, even if controversy does furnish a reason to doubt the view.

3:16:  Epistemic akrasia might be glossed as believing something while thinking that it’s not the epistemically rational thing for one to believe in one’s situation. Is that rational?

DC: I think that in most cases, akratic beliefs are irrational. That’s because in most cases, one is rational to expect irrational beliefs to be inaccurate—so evidence that my belief is irrational is typically also evidence that it’s inaccurate. But there are some cases where this connection between rationality and accuracy breaks down. In those cases, I think that akratic beliefs can be rational—in fact, they can be rationally required.

There are some detailed, precise examples worked out in the literature, but they’re pretty complicated. So here’s a straightforward example, closely based on one in a forthcoming paper by Zach Barnett; it involves someone who has powerful evidence for a false theory of rationality:

Jocko has taken courses at his college from epistemologists who, smitten with (their understanding of) Hume, espouse the following theory of rationality:

Deductive Purism: Inductive reasoning is not a rational way of supporting beliefs. So, for example, if one wonders whether the sun will rise tomorrow or not, or whether the next bread one eats will be nourishing or poisonous, it’s not rational to think either alternative more likely because of what’s happened in the past. Inductively-supported beliefs are certainly accurate by and large; but rationality is not just about accuracy—it’s about support by the right sorts of reasons. Only deductive reasoning can render rational support.

I take it that Jocko may well be rational in giving high credence to Deductive Purism, on the basis of his erudite, persuasive, and well-credentialed teachers’ lessons. He realizes that, according to Deductive Purism, it’s not rational for him to believe that the sandwich he’s brought for lunch will nourish him, rather than poisoning him. But he also understands that, even according to his teachers, inductively supported beliefs do tend to be true. So it seems to me that the most rational thing for him to believe is that his sandwich will nourish him, but that this belief of his is not rational.

I should emphasize, though, that this case is atypical. Suppose I believe that I have enough fuel to reach a more distant airport, or that my child is the most intelligent kid in her class, or that Miguel’s resume is just a bit better than Maria’s; and I then learn that it’s very likely that I have not reacted rationally to my ordinary evidence, due to hypoxia, or emotional ties, or sexism. And suppose I accept this, but maintain my original belief with undiminished confidence. In those cases, my akratic beliefs will be irrational. They’ll be irrational because when rationality is compromised by hypoxia, or emotional ties, or sexism, we should expect that to diminish the accuracy of the affected beliefs. That is what explains the absurdity of thinking, “Yeah, my belief that I have enough fuel is probably quite irrational, but, sure, I have enough fuel!”

3:16: Much of the current debate in philosophy tends to use simple toy examples of disagreement between two people. What difference does it make to apply a conciliatory approach to group controversies – and why don’t you agree with JS Mill that these disagreements flush out our epistemic fallibility?

DC: One obvious complication in dealing with real-world, group disagreements is that the assessment of epistemic credentials becomes much messier. So the toy examples are intended to allow us to get at basic principles, but seeing how those principles apply in real situations will sometimes be very difficult. There’s also one important factor that allows us to take certain one-on-one disagreements less seriously: what Jennifer Lackey has called “personal information.” So for example, I may be quite sure that I haven’t taken acid recently, or been subject to more natural cognitive malfunctions, but I may be much less sure that the same is true of my friend. But in group disagreements, the informational asymmetry between what I know about folks on my side, and what I know about folks on the other side, gets vanishingly small. So dismissing disagreement based on personal information will not be possible.

I do think that Mill was right in citing an important epistemic benefit of disagreement. Because I’m fallible, I sometimes miss problems with my views that other people can see. By engaging seriously with those who disagree, I can sometimes find those problems. So, as Mill pointed out, if I have sincerely engaged with opposing arguments, and have not found problems, this gives me some reassurance that I haven’t screwed up—so it gives me one way of compensating for my epistemic fallibility.

But what I think Mill missed was an implication of our fallibility in cases of persistent disagreement, when people on both sides have taken his advice, and engaged seriously with the other side’s arguments. In such cases, another aspect of our fallibility shows itself—we make mistakes that we can’t correct by sincere engagement with opposing viewpoints. I think this happens a lot in philosophy. And in those situations, my having failed to persuade those on the other side counts as heavily as their having failed to persuade me. So while engaging with disagreement can provide me with some sorts of assurances, the mere fact of persistent disagreement can also undermine my rational confidence.

3:16: I presume a Bayesian measurement of credences is important in discussing belief strength and the like. Can you say something about what Bayesian calculations do in this context – and would it matter if it turned out that humans weren't actually Bayesian? I guess I'm wondering just why being rational as a Bayesian is so overridingly important. Why not downgrade its importance a little (or even a lot) so that different ways of holding on to beliefs are allowed to flourish?

DC: I think that certain elements of the Bayesian picture (that belief comes in degrees, and that these degrees should obey the laws of probability) give us the best idealized picture of how logic constrains rational degrees of belief.

Of course, humans aren’t ideal Bayesian agents. And we almost never consciously go through probability calculations to arrive at our degrees of belief. So I don’t see the Bayesian rules as some sort of generally useful epistemic self-help manual. (I think the same goes for theories of deductive logic.) But if you’re more confident in P than in (P or Q), that’s a certain kind of epistemic problem with your beliefs. And probabilistic coherence gives a systematic account of that kind of problem. Is it important? I think it’s important in understanding rationality, insofar as we have degrees of confidence in claims. We may also have categorical beliefs, and nothing in Bayesianism says we shouldn’t.

Still, I actually have changed my mind about the importance of the Bayesian rules. I no longer believe that even ideally rational agents would have probabilistically kosher credences. That’s because even an ideally rational agent can get (misleading) evidence suggesting that she has screwed up cognitively. And responding in the most rational way to this sort of higher-order evidence may require her, for example, to have less credence in the conclusion of a valid argument than she has in the conjunction of its premises, which violates the rules of probability. So while I still think that probabilistic coherence correctly describes the rational pressure logic puts on degrees of belief, I no longer think that it correctly characterizes the most rational doxastic response to every possible situation.
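To spell out the two coherence constraints being appealed to in this answer, here is a minimal sketch in standard probability notation. The formalization is only illustrative and is not drawn from Christensen's own papers; it uses nothing beyond the ordinary probability axioms.

```latex
% Two consequences of the standard probability axioms for a coherent
% credence function Pr (illustrative notation, not Christensen's own).
\begin{align}
  % (1) P entails (P \lor Q), so coherence forbids being more confident
  %     in P than in the disjunction:
  \Pr(P) &\le \Pr(P \lor Q) \\
  % (2) If premises P_1, \ldots, P_n jointly entail a conclusion C, their
  %     conjunction entails C, so coherence requires at least as much
  %     credence in C as in that conjunction:
  \Pr(C) &\ge \Pr(P_1 \land \cdots \land P_n)
\end{align}
```

Constraint (2) is the one at issue when conciliating with a better logician about C leaves an agent with high credence in the premises but lowered credence in C.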

3:16: And finally, are there five books you can recommend for the curious readers here at 3:16 that will take us further into your philosophical world?

DC: When I happily agreed to this interview, I neglected to consider how embarrassing this question would be! I mostly read articles rather than books, and when I think about what arguments and ideas I’ve been helped most by, or struggled hardest against, they are almost all ideas from articles. But if I think way back, there are books which did a lot to teach me how to do philosophy.

I borrowed a copy of Richard Taylor’s Metaphysics from my high school library, and was pretty blown away by seeing how one could think about questions such as the existence of god, or free will, in such a clear, rigorous way.

In college, I had a love/hate relationship with Quine’s Word and Object. On the one hand, I loved the writing, and what seemed then to me like precision and rejection of bullshit. On the other, the stuff on indeterminacy of translation really ticked me off. But I was thrilled that it was possible to push back against Quine in the same spirit of precision and bullshit-avoidance.

I was also very turned on by Brian Skyrms’s Choice and Chance. It’s really a textbook, but its treatments of inductive skepticism and Goodman’s grue puzzle thrilled me.

In overall outlook, I think I'm still a product of Saul Kripke's Naming and Necessity and Hilary Putnam's earlier work. Is it OK if I count Putnam's Mind, Language and Reality as a book, even though it's really just collected, um, articles?

ABOUT THE INTERVIEWER

Richard Marshall is biding his time.
