Epistemology, Probable Belief and Carnap

Interview by Richard Marshall


'Probability is very useful for thinking about all our beliefs. For example, it is natural to say that we are more certain of some things than others.'

'Bayesianism is a name for the combination of two very plausible theses. The first is that beliefs can be represented by probabilities... The second is a particular rule for updating when you get new information.'

'The Everettian interpretation predicts that this was certain to be observed. As the prediction was correct, the Everettian interpretation gets confirmed. The same goes for every possible observation, so the Everettian interpretation is always confirmed. So we get the crazy result that any observation confirms the Everettian interpretation. The problem is making sense of confirmation in a branching world.'

'The key feature of self-locating beliefs is that they can change truth-value. It was once true that it’s Monday, but is now false. It was once true that I’m in Australia, but is now false. This simple phenomenon causes problems for Bayesianism.'

'Suppose you are on a jury trying to decide whether conciliationism is rational. You think it is, but other jurors think it isn’t. Conciliationism says you should move towards their view, thus giving up conciliationism. So conciliationism tells you to reject conciliationism! It is self-defeating, which suggests it is not a principle of rationality, so should be rejected.'

'Suppose you are at the beginning of inquiry, with no evidence. Are there any constraints on what you should believe, or does anything go? I think there must be some constraints.'

'Reductionists hold that the best explanations are always in terms of lower level science e.g. physics. Non-reductionists hold that the best explanations are sometimes in terms of higher level sciences e.g. biology, psychology etc. But even if higher level explanations are better, it isn’t clear why.'

'Natural selection operates through the survival of the fittest. Some object that ‘survival of the fittest’ is an a priori tautology, so can’t be a law of nature. My position is that it is an a priori tautology and also a law of nature.'

'The bootstrapping problem is a circularity problem for epistemology. Is it possible to know by looking that a table is red without prior knowledge that your vision is reliable?'

'I think psychologists have conclusively shown that we don’t reason probabilistically. But what is often overlooked is the remarkable fact that when these mistakes are pointed out, everyone sees that they are mistakes.'


Darren Bradley is interested in epistemology, philosophy of science and metaphysics. Here he discusses probability, Bayesian models, the Sleeping Beauty problem and the Everettian interpretation of quantum mechanics, self-location and Bayesianism, epistemic rules, priors and defeasibility, Carnap and metaphysics, good explanations in science and philosophy, squaring Humeanism with empiricism, bootstrapping and self-knowledge, and finally why it doesn't matter that we don't actually reason probabilistically.

3:16:  What made you become a philosopher?

Darren Bradley: It was so much fun. It was more a question of would anything stop me from being a philosopher. I picked up Smullyan’s ‘What is the name of this book?’ at about 13, my Dad bought me ‘Sophie’s World’ a year or two later, and I think I found ‘Labyrinths of Reason’ in the school library. I definitely wanted to do philosophy at university and thought it would be useful to combine it with economics at LSE. Taking philosophy classes was even better than I expected. Craig Callender taught the introductory philosophy course – focused on paradoxes like time travel and personal identity – and I remember watching him giggle his way through the lectures, unable to understand how thinking about these things was considered work. It was clear I was going to carry on doing philosophy for as long as possible, and I find it amazing that I’m still doing it more than 20 years later.

3:16: You’re interested in formal epistemology. One area of this is the application of probability to beliefs. But what is a probability? I mean, we can flip a fair coin a million times and it ought to come out 50/50 heads and tails, but we’d be surprised if it actually did exactly, wouldn’t we? It seems to rely on an actual completed infinity which our finite minds can’t grasp. So isn’t there something fictional – or metaphysical – about the whole notion of probability, in that it can’t in itself be cashed out? And shouldn’t this be of concern?

DB: We need to distinguish different meanings of ‘probability’. The main distinction is between subjective probability (call it credence) and objective probability (call it chance). There’s no problem here for credence, which is what we get when we use probability theory to model the beliefs of agents.

The problem you raise is a problem for chance. But it is only a problem for a specific theory of chance (the long-run frequency theory) which says: ‘this coin has a 50% chance of landing heads’ means ‘this coin would land heads 50% of the time over an infinite number of flips’. I agree that’s problematic, and a reason to reject the long-run frequency theory of chance in favour of a theory which reduces chance to actual events.

3:16:  So why might someone say that I believe something with a certain probability? What’s probability got to do with my believing something, why not just say I believe this and don’t believe that and have done?

DB: Because a lot would be left out. Do you believe a fair coin will land heads? No. Do you believe it will not land heads? No. Do you have no beliefs about it at all? No. What we need is partial belief. You kinda believe it. And probability is a way of talking about this partial, kinda belief.

Once we have it, probability is very useful for thinking about all our beliefs. For example, it is natural to say that we are more certain of some things than others. I believe that it will rain in England at some time in the next month. I also believe it will rain in England at some point in the next year. But I am much more certain of the second. We can model this using probability by saying that my probability for the second belief is higher.

And we often change our minds when we get new information. Once we have the tools of probability theory, this is also easy to model. So probability theory gives us a more sophisticated theory which can do everything that the simpler theory of belief/non-belief can do, but can do a lot more besides.

3:16: Bayesian models are the way we try and regiment this idea of probable belief, aren’t they? For the uninitiated, how do these work, and in particular, what is Bayesian confirmation for, given that some have argued that it has no use at all?

DB: Bayesianism is a name for the combination of two very plausible theses. The first is that beliefs can be represented by probabilities, as described above. The second is a particular rule for updating when you get new information. Imagine a Venn diagram with all the possibilities laid out. Your job is to discover which possibility is true i.e. where you are on the Venn diagram. Suppose you learn a new piece of evidence, E. How should you respond? Bayesians say you should zoom in on the area of the Venn diagram where E is true, eliminating areas where E is false. That’s basically it. You get a new set of probabilities for your beliefs once you zoom in, and these should be your probabilities if you learn E. This is called conditionalization.
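
To make the zooming-in picture concrete, here is a minimal sketch of conditionalization (an illustration added for this piece, not Bradley's own example; the 'worlds' and numbers are invented): keep the regions of the diagram where the evidence is true, throw away the rest, and rescale.

```python
# A minimal sketch of Bayesian conditionalization over a toy space of four
# "worlds" (regions of the Venn diagram) with made-up prior probabilities.
priors = {
    "rain & cold":    0.2,
    "rain & warm":    0.3,
    "no rain & cold": 0.1,
    "no rain & warm": 0.4,
}

def conditionalize(probs, evidence_holds):
    """Zoom in on the worlds where the evidence is true and renormalize."""
    kept = {w: p for w, p in probs.items() if evidence_holds(w)}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

# Learn E = "it is raining": eliminate the no-rain worlds and rescale the rest.
posterior = conditionalize(priors, lambda w: w.startswith("rain"))
print(posterior)  # {'rain & cold': 0.4, 'rain & warm': 0.6}
```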

A further thesis concerns the concept of confirmation. In ordinary language, we say that evidence confirms a hypothesis when it makes the hypothesis very likely. Bayesians have found a slightly different definition more useful, which says that evidence confirms a hypothesis when it increases the probability of that hypothesis.
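
As a toy numerical illustration of that definition (the numbers are invented, not from the interview): E confirms H just when learning E pushes the probability of H up.

```python
# A sketch of the Bayesian definition of confirmation with made-up numbers:
# evidence E confirms hypothesis H iff P(H | E) > P(H).
p_h = 0.3              # prior probability of the hypothesis
p_e_given_h = 0.9      # how likely the evidence is if H is true
p_e_given_not_h = 0.2  # how likely the evidence is if H is false

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of E
p_h_given_e = p_e_given_h * p_h / p_e                  # Bayes' theorem

confirms = p_h_given_e > p_h
print(round(p_h_given_e, 3), confirms)  # 0.659 True: E raises P(H), so E confirms H
```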

I think we shouldn’t get hung up on which definition is correct. There are various slightly different concepts in the ballpark, and the question is not which concept is correct, but which concept is more useful. And on this point Peter Brössel and Franz Huber have argued that the Bayesian conception of confirmation has no use. Their main point is that in order to know whether some evidence E confirms hypothesis H, we have to know which person we are talking about and what their probabilities are concerning E and H. And if we know that, then we already know everything relevant, and there is no work to be done by the Bayesian conception of confirmation.

This is an interesting challenge but I think it can be answered. The general point is that it is often useful to remove details. For example, suppose a grumpy orchestra conductor is annoyed because someone coughed. This is a good explanation of why he is annoyed. Now suppose we add the detail that the person who coughed is wearing glasses. Does this make the explanation better or worse? Worse. The glasses are an irrelevant detail which can distract us. The same goes for confirmation. Consider the claim that the 1919 photos of a solar eclipse confirmed the theory of relativity. To spell out the details of this claim, we would need to know which person we are talking about and what their probabilities were concerning the photos and relativity. Brössel and Huber are right that given these details, the claim about confirmation would tell us nothing new. But we might not have these details. And even if we did, we often don’t care about the details. We’re interested in a less specific claim, something like ‘for most reasonable people in 1919, the photos increased their probability of the theory of relativity’. The Bayesian concept of confirmation is useful for expressing this claim.

3:16:  You’ve tackled the Sleeping Beauty and Everettian interpretation of quantum mechanics. Can you first set up the problem for us?

DB: According to the Copenhagen interpretation of quantum mechanics, when a quantum system is measured the wave function collapses to a single value. According to the Everettian interpretation of quantum mechanics, when a quantum system is measured the universe branches and there is a branch for each possible value, so every possible outcome happens somewhere.

Copenhagen: Pre-measurement → one outcome: cat alive or cat dead

Everett: Pre-measurement → two branches: cat alive on one branch, cat dead on the other

But the prediction that everything happens creates a problem regarding confirmation. Suppose you perform a measurement and observe that, say, Schrödinger’s cat is alive. The Everettian interpretation predicts that this was certain to be observed. As the prediction was correct, the Everettian interpretation gets confirmed. The same goes for every possible observation, so the Everettian interpretation is always confirmed. So we get the crazy result that any observation confirms the Everettian interpretation. The problem is making sense of confirmation in a branching world.

The Sleeping Beauty problem has a similar structure. Suppose Sleeping Beauty will be put under anesthetic and woken either just on Monday (One Waking), or on both Monday and Tuesday (Two Wakings), depending on the flip of a fair coin. By Tuesday the memory of a waking on Monday is erased, so Beauty cannot tell them apart.

Woken once: Sunday → Monday

Woken twice: Sunday → Monday and Tuesday

Beauty’s situation on being woken is analogous to the situation after a branching has occurred (but before you look at the measurement). The Copenhagen interpretation is analogous to One Waking, the Everettian interpretation is analogous to Two Wakings. When Sleeping Beauty is woken does she acquire evidence about how many wakings there are? Some people (‘thirders’) think that when woken, Beauty should increase her credence in Two Wakings. Think of this as confirmation of the ‘more observations’ possibility. This is analogous to always increasing your credence in the Everett interpretation after a branching has occurred, and I think this is problematic.
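
For readers who want to see why the two camps disagree, here is a minimal simulation of the setup (a sketch added for this piece; Bradley defends the halfer answer below). It counts the two frequencies that get appealed to: the fraction of experimental runs that are Two Wakings, and the fraction of awakenings that occur inside Two Wakings runs.

```python
# A minimal simulation of the Sleeping Beauty setup (illustrative only).
# A fair coin decides between One Waking (Monday only) and Two Wakings
# (Monday and Tuesday). Thirders appeal to the frequency of awakenings,
# halfers to the frequency of runs.
import random

random.seed(0)
runs = 100_000
two_waking_runs = 0
total_awakenings = 0
awakenings_in_two_waking_runs = 0

for _ in range(runs):
    two_wakings = random.random() < 0.5   # fair coin flip
    wakings = 2 if two_wakings else 1
    total_awakenings += wakings
    if two_wakings:
        two_waking_runs += 1
        awakenings_in_two_waking_runs += wakings

print(two_waking_runs / runs)                             # ~0.5: fraction of runs (the halfer's frequency)
print(awakenings_in_two_waking_runs / total_awakenings)   # ~0.667: fraction of awakenings (the thirder's frequency)
```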

3:16: And how do you deal with it?

DB: Going back to Everett, if the total evidence were ‘someone somewhere observes a live cat’ then this evidence would indeed confirm the Everett interpretation. But I think the total evidence is not just ‘someone somewhere observes a live cat’ but ‘the observer on this branch observes a live cat’ i.e. ‘I observe a live cat’, where ‘I’ refers only to the observer on a particular branch, not to the person in the other branch where the cat is dead. We have to relativize the evidence to the branch. After all, we do not have a God-like perspective where we see what happens on every branch, we only see what happens on our branch. The Everettian interpretation does not predict that I, in this branch, observe that the cat is alive, so there is no automatic confirmation of the Everettian interpretation.

The same point applies to Sleeping Beauty. Some arguments that Sleeping Beauty should increase credence in Two Wakings would also be arguments that observing an alive cat should increase credence in the Everettian interpretation. As this is implausible for quantum mechanics, it also undermines the arguments in Sleeping Beauty. And I think the arguments go wrong for analogous reasons. Any evidence that Sleeping Beauty gets has to be relativized to the day. Epistemically speaking, Beauty on Monday and Beauty on Tuesday are on their own. Overall I think that Beauty does not get evidence for Two Wakings (the ‘halfer position’).

3:16:  Why do self-locating beliefs cause problems for Bayesianism, and in particular, why doesn’t Miriam Schoenfield’s proposal, that on learning x agents should update on the fact that they learned x, fix the problem?

DB: Self-locating beliefs are exactly what they sound like – they locate the agent. Beliefs about which branch you are on in an Everettian world are self-locating, but there are much more familiar examples e.g. the blue dot on Google maps tells me where I am located in the world. The key feature of self-locating beliefs is that they can change truth-value. It was once true that it’s Monday, but is now false. It was once true that I’m in Australia, but is now false. This simple phenomenon causes problems for Bayesianism.

Let’s go back to the Venn diagram. You occupy one point in that space and want to find out which one it is. Usually new evidence comes in and you eliminate possibilities until eventually you are left with one possibility, the actual/true possibility. But that assumes your position on the Venn diagram is fixed, which it isn’t if we include self-locating beliefs. We can imagine the areas of the Venn diagram are areas of geographic space. Moving from one country to another moves you from one region on the Venn diagram to another. You’re not a stationary point, you’re more like an ant walking around. So now zooming in on one point of the Venn diagram does not get you to the truth, as you might have moved. So the standard Bayesian picture where you eliminate possibilities and zoom in on the truth doesn’t work. I think we need a totally new kind of belief update for self-locating beliefs, and tweaking the standard story (conditionalization) won’t work.

As you say, Miriam Schoenfield proposed that on learning x, agents should update on the fact that they learned x. For example, suppose you learn that it is Monday. The standard Bayesian approach would be that you should update on ‘it is Monday’, which runs into the problems above – by the time you’ve zoomed in it might not be Monday any more. Schoenfield proposed that you should also update on ‘I learn that it is Monday’.

But I don’t think that addresses the central problem that self-locating beliefs change in truth-value. The agent is being told to update on something extra (I learn x), but that doesn’t solve the problem that x might change from being true to being false.

3:16:  Many of us talk about having beliefs we ought to have even if it means ignoring perceptions, the testimony of our peers, or apparent epistemic rules we all hold. Are all epistemic rules defeasible, and if they are, does that mean there are no simple rules connecting descriptive and normative facts?

DB: Yes and yes. This grew out of the debate on disagreement. Suppose we are on a jury and I think the defendant is guilty while you think the defendant is not guilty. We establish that we have all the same evidence and are just as good at assessing it. Should we conciliate and move towards the other’s position or stick to our guns? Common sense says, and I agree, that we should conciliate, at least to some extent. But what if we’re arguing about disagreement itself? Suppose you are on a jury trying to decide whether conciliationism is rational. You think it is, but other jurors think it isn’t. Conciliationism says you should move towards their view, thus giving up conciliationism. So conciliationism tells you to reject conciliationism! It is self-defeating, which suggests it is not a principle of rationality, so should be rejected.

Adam Elga defended conciliationism by arguing that it does not apply to itself. That is, we should conciliate about everything other than conciliation. When it comes to conciliation, we should believe it, and continue to believe it even if other people say we shouldn’t. So even if a team of experts say that conciliationism is false, the rational thing to do is ignore them and continue to believe conciliationism. This is a pretty strange position.

I have a different way to defend conciliationism. We know from work on objective chance that you should usually match your beliefs to the objective chances, but there are exceptions when you have special evidence. For example, your probability that a fair coin lands Heads should be 50%, except when you have direct evidence about how the coin will land, e.g. if you have a reliable crystal ball showing it land Heads, your probability should be much higher. So the relevant principle is hedged e.g. match your probabilities to the objective chances unless you have evidence that’s more informative than the objective chances.

The same goes for conciliationism. You can get evidence telling you not to worry what your peers say – and such evidence can come from your peers! The full conciliationist principle is a hedged principle, something like ‘give the views of peers equal weight, unless you have evidence that you shouldn’t’. That principle does not get undermined by new evidence. But simple unhedged principles like ‘always match your probabilities to the objective chances’ or ‘always give the views of peers equal weight’ are false.

This point extends to other rational principles, and also to ethical principles. Consider ‘if it looks red then believe it is red’. This is not true in general, as you might know that the lighting makes everything look red. The full principle is something like ‘if it looks red then believe it is red, unless you have evidence that red appearances are misleading’. For an ethical principle, consider ‘don’t lie’. This is not true in general, as lying might be the only way to prevent disaster. The full principle is something like ‘don’t lie unless lying is the best way to prevent disaster’.

The result is a vast tapestry of interlocking hedged rules. It’s messy, but I think it’s the only tenable path. And it would also explain the fact that philosophers have not made much progress in working out the true principles of rationality and ethics.

3:16:  Bayesianism has the notion of priors. Is it important to constrain them, and if it is, how should we do so? And, following from the last question, wouldn’t any constraining rule be defeasible anyway?

DB: Suppose you are at the beginning of inquiry, with no evidence. Are there any constraints on what you should believe, or does anything go? I think there must be some constraints. If not, then there can be no objection to being confident a priori that the entire universe contains no more and no less than one pink elephant. And there could be no objection to taking the appearance of a red table to confirm that the entire universe contains no more and no less than one pink elephant.

Weirdly, rational constraints were out of favour for much of the 20th century. The reason seems to be partly the later Carnap’s attempt to work out a detailed, clear, general theory of what the rational constraints must be. The plausibility of a thesis is inversely proportional to the clarity with which it is expressed,[1] so people immediately started picking holes in Carnap’s system and the whole approach fell into disrepute. But this was throwing out the baby with the bathwater. People seemed to think that there are either simple principles of rationality, or no principles at all. As I say above, I think the principles of rationality should be thought of as a complicated interlocking tapestry.

Wouldn’t any constraining rule be defeasible anyway? Again, the rational constraints are only constraints on priors i.e. what you should believe before you get any evidence. They won’t say that you should believe something whatever evidence you get. So the simple rules like ‘if it looks red then believe it is red’ would be reasonable principles to be encoded in the priors. We shouldn’t take this to mean ‘in all situations, if it looks red then believe it is red’. We should allow possible evidence E such that ‘given E, if it looks red then believe it is blue’.

3:16:  You defend Carnap’s rejection of metaphysics, not on the grounds of any verificationism but as a matter of epistemology. Can you sketch for us what you take to be Carnap’s position and why you say metaphysics is always going to be in trouble because of the epistemic challenge of its lack of justification even if it can counter the challenge from semantics?

DB: Actually I don’t think that metaphysics is always going to be in trouble. But let’s back up.

The early Carnap rejected metaphysical claims as nonsense. This is partly because of his verificationism, but the underlying point is that he thought no evidence could support a metaphysical claim. This is one way of understanding empiricism – that all justification comes from the senses. If the senses cannot provide justification for metaphysical claims, then metaphysical claims cannot be justified. (He spells this out in Pseudo-Problems in Philosophy (1928), perhaps the most important overlooked work of the 20th century.)

But I think empiricism in this strong sense is untenable. What justifies logic? How do we know that A&B entails B? We might try to appeal to the senses, but we would have to make some inferences to move from beliefs about our senses to beliefs about logic, and it is such inferences that are in doubt. Without a priori principles the only justifiable claims would be claims about our sense data.

So I think we need a priori logical principles. Once we have them, it is a small step to a priori probabilistic principles e.g. if you’ve seen a million black ravens, then the next raven is probably black. These were the principles Carnap tried to develop in his later life, so I think he recognized the limitations of his early view. But how might we justify one principle over another? e.g.

if you’ve seen a million black ravens, the next raven is probably black

vs.

if you’ve seen a million black ravens, the next raven is probably white.
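
One classical way to make the first principle precise, in the spirit of Carnap's later inductive logic, is Laplace's rule of succession. The following is a sketch added for this piece, not a formula from the interview: after observing n ravens, all of them black, assign probability (n+1)/(n+2) to the next raven being black.

```python
# A sketch of Laplace's rule of succession, one classical precisification of
# "after many black ravens, the next raven is probably black" (illustration only).
def prob_next_black(black_seen, total_seen):
    """Probability the next raven is black, given black_seen black ravens out of total_seen."""
    return (black_seen + 1) / (total_seen + 2)

print(prob_next_black(0, 0))                  # 0.5        -- no evidence yet
print(prob_next_black(1_000_000, 1_000_000))  # ~0.999999  -- a million black ravens seen
```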

Clearly the first is justified and the second is not. Why? (This is Goodman’s ‘grue’ challenge.) We can’t appeal to evidence here, as these are a priori principles. We need to appeal to theoretical virtues, and I think these ultimately come down to simplicity. Simplicity is out of fashion, and has suffered in the same way as Carnap’s system has – the detailed attempts to spell out a theory have had holes poked in them, so people have rejected the entire approach, with more babies thrown out into the streets. But some concept of simplicity is needed. I think David Lewis’s natural properties are simple in the relevant sense, which connects us to the contemporary interest in grounding and fundamentality.

And this brings us to why I don’t think metaphysics is always going to be in trouble. We need to use a concept of simplicity in order to make inferences from our senses to scientific claims. And if so, we can use this concept of simplicity to make inferences to metaphysical claims. In many metaphysical debates there is an appeal to simplicity. It is often implicit, but it is usually there, and offers hope for resolving the debate. Of course this requires answering lots of difficult questions about simplicity, but I think these questions are answerable.

3:16:  You’ve looked at what makes a good explanation in both science and philosophy. One suggestion is that an explanation is good because it omits details and so is simpler. Do you agree, and is a philosophical theory to be held to the same standards as a scientific one?

DB: Suppose someone takes an umbrella. What’s the best explanation for this event? Compare a psychological explanation with a physical explanation:

Psychological explanation: They wanted to stay dry.

Physical explanation: Their body consisted of molecule m1 in position p1 with velocity v1, molecule m2 in position p2 with velocity v2… and so on for every molecule of their body.

There seems to be something better about the psychological explanation. But what makes the psychological explanation better? Usually, the more detailed an explanation, the better. The psychological explanation has fewer details, so it’s puzzling that it seems better.

My suggestion is that the psychological explanation is better to the extent that it contains more information. There is a sense in which the explanation in terms of molecules is very restricted. It won’t apply in a scenario where even a single molecule is in a different position. It tells you only what happens in the exact scenario where all the molecules are exactly as described. In contrast, the psychological explanation is not restricted. All sorts of physical possibilities are compatible with the desire to stay dry, and the psychological explanation explains why the umbrella will be taken in all of them. So in my view there is nothing necessarily good about higher level explanations, it’s just that they are sometimes more informative. But really the best explanation would provide information at the level of physics about what would happen in every possibility.

We can use the example above of the annoyed conductor. We might explain his annoyance with ‘if someone coughs then the conductor gets annoyed’ (plus someone coughing). Suppose the person who coughed had glasses, and we offer the following explanation ‘if someone with glasses coughs then the conductor gets annoyed’. This is also true, but is a worse explanation. And I think it is a worse explanation because it is less informative. It only tells us what happens if someone with glasses coughs; it doesn’t tell us what happens if someone without glasses coughs. Maybe the conductor gets annoyed, maybe he doesn’t. The original explanation ‘if someone coughs then the conductor gets annoyed’ provides the extra information. Even better would be ‘if someone makes a noise then the conductor gets annoyed’. By removing details from the antecedent, the explanation becomes more informative.
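
Here is a toy sketch of that point (added for this piece, not Bradley's formal account): a generalization with a less detailed antecedent applies to more possible scenarios, so it settles what happens in more of them.

```python
# A toy sketch of why removing details from the antecedent makes a
# generalization more informative: it tells us what happens in more scenarios.
# (The scenarios and predicates are made up for illustration.)
from itertools import product

# Each scenario fixes three facts: does someone cough, does that person wear
# glasses, and is there some other noise (a phone ringing, say).
scenarios = [dict(zip(("coughs", "glasses", "other_noise"), vals))
             for vals in product([True, False], repeat=3)]

antecedents = {
    "someone with glasses coughs": lambda s: s["coughs"] and s["glasses"],
    "someone coughs":              lambda s: s["coughs"],
    "someone makes a noise":       lambda s: s["coughs"] or s["other_noise"],
}

for name, holds in antecedents.items():
    covered = sum(holds(s) for s in scenarios)
    print(f"'if {name}, the conductor gets annoyed' covers {covered} of {len(scenarios)} scenarios")
# Prints 2, 4 and 6 of 8: the less detailed the antecedent, the more scenarios
# the generalization speaks to, so the more informative it is.
```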

The same applies to higher level sciences in general. Suppose we explain the cracking of some glass in terms of the high temperature. We might offer a more detailed lower level explanation ‘molecule m hit point p at 1000 metres per second’. This is also true, but is a worse explanation because in some ways it is less informative. It doesn’t tell us what would have happened if some other molecule had hit point p at 1000 metres per second.

This is relevant to the debate in philosophy of science about the relation between higher level and lower level explanations. Reductionists hold that the best explanations are always in terms of lower level science e.g. physics. Non-reductionists hold that the best explanations are sometimes in terms of higher level sciences e.g. biology, psychology etc. But even if higher level explanations are better, it isn’t clear why. I think all we need is the assumption that logically stronger, more informative explanations are better. Higher level explanations are often more informative, and that’s why they’re sometimes better than lower level explanations.

3:16:  Some philosophers take a Humean metaphysics to make claims of a priori causal laws in biology impossible. But you take issue with this and attempt to show that Humean metaphysics and empiricist epistemology can be squared, don’t you? Can you take us through your thinking here – and does this leave us with a priori causal laws in loads of sciences?

DB: Yes I think that a debate in recent philosophy of biology was solved by Carnap. The point can be made using the slogan that natural selection operates through the survival of the fittest. Some object that ‘survival of the fittest’ is an a priori tautology, so can’t be a law of nature. My position is that it is an a priori tautology and also a law of nature.

This happens all the time with functional words, which are words defined in terms of what the referent does. This is the dominant view in the philosophy of mind, where functionalists say it doesn’t matter what brains are made of, what matters is what beliefs do. Very roughly, if it causes you to take the umbrella, then it is a belief that it’s raining.

For a simpler example, think about carburetors. What it is to be a carburetor is to be a device for mixing air and fuel. Consider ‘the carburetor mixes the air and fuel’. That’s an a priori tautology, as it follows from the definition of ‘carburetor’. But it can still be a law, and it can still be explanatory. If I don’t know that there is a device that mixes air and fuel, or don’t know that air is needed for the engine to work, it could be explanatory to be told ‘the carburetor mixes the air and fuel’. Similarly, if I don’t know whether someone took the umbrella while sleepwalking, or while being controlled by an evil wizard, then it can be explanatory to be told that they took the umbrella because they believed it was raining. Returning to biology, if I don’t know whether species were created by a wizard, or have always been here, it can be explanatory to be told that they are here due to the survival of the fittest.

Some object that these functional laws are incompatible with Humean metaphysics, which denies that there are necessary connections between cause and effect. But the only necessary connections we need are between words. Something only deserves the name ‘carburetor’ if it mixes air and fuel. It does not follow that carburetors have some inner nature that necessitates that they mix air and fuel.

And some object that these functional laws are incompatible with empiricism – don’t we need some mysterious rational insight to detect necessary connections? Not if we only posit necessary connections between words i.e. analytic connections. We can get these by simply defining words that have these analytic connections.

3:16:  What’s the problem raised by ‘bootstrapping’ and ‘self-knowledge’, and what do ‘relevant alternatives’ solutions claim to do to solve them? Do they, and in doing so do they answer skepticism about the external world?

DB: Let’s start with relevant alternatives theories of knowledge. The idea is that you don’t simply know that p; you know that p rather than q. For example, you don’t know that grass is green; you know that grass is green rather than white, black, red etc. This theory is useful for answering skepticism about the external world. The skeptic points out that you might be a brain-in-a-vat, and infers that you don’t know you have hands. The relevant alternatives theorist can acknowledge a sense in which the skeptic is right – you don’t know you have hands rather than being a brain-in-a-vat. However, you know you have hands rather than having hooks. We get shifting intuitions because the relevant alternatives can shift in the course of a conversation. I argued that as well as answering skepticism about the external world, relevant alternatives theories of knowledge can also solve the ‘bootstrapping’ and ‘self-knowledge’ problems.

The bootstrapping problem is a circularity problem for epistemology. Is it possible to know by looking that a table is red without prior knowledge that your vision is reliable? If not, skepticism threatens, as we don’t seem to have prior knowledge that our vision is reliable. So at some point it looks like a source must be able to provide knowledge without prior knowledge that the source is reliable.

But then we seem to be able to bootstrap our way to knowledge that a source is reliable just by using it. For example, suppose S knows by looking that a table is red without prior knowledge that their vision is reliable. So, S knows the table is red. S also obviously knows that the table looks red. So S knows that the table looks red and that it is red. And if the table looks the way it is then it follows that S’s vision is reliable. As S can work all this out, S can come to know that their vision is reliable. But that can’t be right, as all they’ve done is look at a table! This knowledge that their vision is accurate has arrived suspiciously easily. (There is ‘easy confirmation’, just as with Sleeping Beauty and the Everett interpretation, but for different reasons.)

I think the problem can be solved if we introduce relevant alternatives. Instead of knowing that the table is red, S knows that the table is red rather than, say, white. But the possibility of a white table has been eliminated only by assuming that S’s vision is reliable. In some contexts it’s acceptable to make this assumption and then it follows that S knows that the table is red. But we cannot infer that S knows their vision is reliable. S ‘knows’ that their vision is reliable only in the degenerate sense that S’s vision is assumed to be reliable. This is not genuine knowledge.

Moving on, the problem of self-knowledge here is based on Putnam/Kripke style semantic externalism, where the concept of water is individuated by reference to H2O. Making various assumptions, if we know what we’re thinking about a priori then we can know a priori that H2O exists. That’s crazy.

I suggest we block the argument by distinguishing different senses of knowing what one is thinking about. We can know a priori that we are thinking about water rather than wine, but we cannot know a priori that we are thinking about water rather than twin-water (which looks like H2O but has a different chemical composition).

3:16:  What’s the relationship between Bayesian and probabilistic thinking and our actual thinking processes? Is it just a model about how to think about our beliefs, or is it a bigger claim that we are actually Bayesian creatures? Would it matter if biologists found that we didn’t reason probabilistically?

DB: I think psychologists have conclusively shown that we don’t reason probabilistically. But what is often overlooked is the remarkable fact that when these mistakes are pointed out, everyone sees that they are mistakes. This means that we all have an implicit model of how we ought to reason. Bayesian models are a suggestion for how we ought to reason, a suggestion that has been incredibly productive, both in scientific disciplines which model humans and design artificial intelligence, and in philosophy, where we model ideal agents.

But I think even Bayesian agents are not ideal in some ways. A problem for Bayesianism is that probabilistic agents must satisfy the laws of probability, which requires being certain of maths, logic and the relation between evidence and hypothesis. Would rationally ideal agents be like this? I say no. Rational agents need not be certain that they are rationally ideal. They should allow for the possibility that they have made a mistake in their reasoning. One of my current projects is working out how much ideal agents depart from Bayesianism, and what the consequences are.

3:16:  And finally, for the curious readers here at 3:16, are there five books you can recommend to take us further into your philosophical world?

DB: I’ll start with lesser known works which might help people get into Carnap. 

The place to start is with his replies in The Library of Living Philosophers Volume XI: The Philosophy of Rudolf Carnap, edited by Schilpp.

I also mentioned above Pseudo-Problems in Philosophy which contains Carnap’s empiricist arguments against metaphysics. 

For a broader view, Coffa’s ‘The Semantic Tradition’ helped me understand what Carnap was responding to.

For a contemporary development of similar ideas, David Chalmers’ ‘Constructing the World’ is the place to go. 

For a discussion of how we could decide between metaphysical theories I’d recommend Jiri Benovsky’s ‘Meta-metaphysics: Metaphysical equivalence, primitiveness, and theory choice’.


ABOUT THE INTERVIEWER

Richard Marshall is biding his time.

Buy his second book here or his first book here to keep him biding!




[1]A line I’ve stolen from Alberto Coffa’s ‘The Semantic Tradition’ I think.