In a recent article, Adam Elga (2004) outlines a strategy for “Defeating Dr Evil with Self-Locating Belief”. The strategy relies on an indifference principle that is not up to the task. In general, there are two things to dislike about indifference principles: adopting one normally means confusing risk for uncertainty, and they tend to lead to incoherent views in some ‘paradoxical’ situations. Each kind of objection can be levelled against Elga’s theory, but because Elga is more careful than anyone has ever been in choosing the circumstances under which his indifference principle applies, we have to be similarly careful in focussing the objections. Even with this care the objections I put forward here will be less compelling than, say, the objections Keynes (1921 Ch. 4) put forward in his criticisms of earlier indifference principles. But there still may be enough to make us reject Elga’s principle. The structure of this note is as follows. In sections 1 and 2 I set out Elga’s theory, in sections 3 and 4 I discuss some initial objections that I don’t think are particularly telling, in section 5 I discuss some paradoxes to which Elga’s theory seems to lead (this is reprised in section 9, where I discuss a somewhat different paradoxical case) and in sections 7 and 8 I argue that even Elga’s careful indifference principle involves a risk/uncertainty confusion.
1 From Basel to Princeton
In (1979) David Lewis argued that the contents of contentful mental states were not propositions, but properties. When I think that I’m a rock star, I don’t attribute truth to the proposition Brian is a rock star, but rather attribute the property of rock stardom to myself. Lewis was led to this position by considering cases where a believer is mistaken about his own identity. For example, if I believe that I’m a rock star without believing that I’m Brian, and in fact while thinking that Brian is an infamous philosopher, it is odd to attribute to me belief in the proposition Brian is a rock star. But it is perfectly natural to say I self-attribute rock stardom, and that’s just what Lewis says.
If we accept Lewis’s position, there are two paths we can take. First, we can try simply replacing all talk of propositional attitudes with talk of proprietal attitudes, and trusting and hoping that this won’t make a difference to our subsequent theorising. Alternatively, we can see if changing the type of entity that is the content of a contentful state has distinctive consequences, and in particular see if it gives us the conceptual resources to make progress on some old problems. That’s the approach Adam Elga has taken in a couple of papers, and whatever one thinks of his conclusions, the early returns certainly suggest that this Lewisian outlook will prove remarkably fruitful.
On the Lewisian approach, credences are defined over properties, and properties are sets of possibilia, i.e. sets of centred worlds. Some properties are maximally precise: they are satisfied by exactly one possible object. Elga sometimes calls these maximally specific properties predicaments because they specify exactly what is happening to the agent that instantiates one. Say predicaments F1 and F2 are similar iff the F1 and the F2 are worldmates and their experiences are indistinguishable. Elga’s principle INDIFFERENCE says that if predicaments F1 and F2 are similar then any rational agent should assign equal credence to F1 and F2. This becomes most interesting when there are distinct similar predicaments F1 and F2. So, for instance, consider poor O’Leary.
- O’LEARY
- O’Leary is locked in the trunk of his car overnight. He knows that he’ll wake up briefly twice during the night (at 1:00 and again at 2:00) and that the awakenings will be subjectively indistinguishable (because by 2:00 he’ll have forgotten the 1:00 awakening). At 1:00 he wakes up.
Elga says that when O’Leary wakes up, he should assign equal credence to it being 1:00 as to it being 2:00. So, provided O’Leary knows that one of these two hypotheses is true, INDIFFERENCE says that he should assign credence 1/2 to it being 1:00 at the wake up.
Elga has an argument for INDIFFERENCE, which we shall get to in section 8, but for a while I will look at some immediate consequences of the position. I’ll start with two reasons to think that INDIFFERENCE needs to be strengthened to play the role he wants it to play.
2 Add it Up
One difficulty with INDIFFERENCE as stated so far is that it applies only to very narrow properties, predicaments, and it is not clear how to generalise to properties in which we are more interested.
- BERNOULLIUM
- Despite months of research, Leslie still doesn’t know what the half-life of Bernoullium, her newly discovered element is. It’s between one and two nanoseconds, but she can’t manufacture enough of the stuff to get a better measurement than that. She does, however, know that she’s locked in the trunk of her car, and that like O’Leary she will have two indistinguishable nocturnal awakenings. She’s having one now in fact, but naturally she can’t tell whether it is the first or the second.
INDIFFERENCE says that Leslie should assign credence 1/2 to it being the first wake-up, right? Not yet. All that INDIFFERENCE says is that any two predicaments should receive equal credence. A predicament is maximally specific, so it specifies, inter alia, the half-life of Bernoullium. But for any x, Leslie assigns credence 0 to x being the half-life of Bernoullium, because there are uncountably many candidates for being the half-life, and none of them look better than any of the others. So she assigns credence 0 to every predicament, and so she satisfies INDIFFERENCE no matter what she thinks about what the time is. Even if, for no reason at all, she is certain it is her second awakening, she still satisfies INDIFFERENCE as it is written, because she assigns credence 0 to every predicament, and hence equal credence to similar predicaments.
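One way to make this vivid (a gloss of my own, not in the text): suppose Leslie’s credence about the half-life is spread according to some density f over the candidate interval of one to two nanoseconds. Then for each particular value x,

\[
\Pr(\text{half-life} = x) \;=\; \lim_{\varepsilon \to 0} \int_{x-\varepsilon}^{x+\varepsilon} f(t)\,dt \;=\; 0,
\]

so every predicament, being specific about the half-life among uncountably many other matters, receives credence 0, whatever f is.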
Fortunately, we can strengthen INDIFFERENCE to cover this case. To start, note that the motivations for INDIFFERENCE suggest that if two predicaments are similar then they should receive equal credence not just in the agent’s actual state, but even when the agent gets more evidence. Leslie should keep assigning equal credence to it being her first or second wake up if she somehow learns what the half-life of Bernoullium is, for example. This suggests the following principle:1
1 INDIFFERENCE entails C-INDIFFERENCE given the following extra assumptions. First, if INDIFFERENCE is true it is indefeasible, so it must remain true whatever one’s evidence is. Secondly, rational agents should update by conditionalisation. Thirdly, it is always possible for an agent to get evidence that tells her she is in F1 or F2 and no more. The third premise is at best an idealisation, but it is hard to see how or why that should tell against C-INDIFFERENCE.
- C-INDIFFERENCE
- If F1 and F2 are similar, and an agent does not know that she is in neither, then her conditional credence on being F1, conditional on being either F1 or F2, should be 1/2.
But even this doesn’t quite resolve our problem. Simplifying Leslie’s situation somewhat, the live predicaments are all of the following form: this is the first/second awakening, and the half-life of Bernoullium is x. C-INDIFFERENCE requires that for any c, conditional on the half-life of Bernoullium being c, Leslie assign credence 1/2 to it being her first awakening. From this and the fact that Leslie’s credence function is a probability function it doesn’t follow that her credence in this being her first awakening is 1/2. So to get INDIFFERENCE to do the work it is meant to do in Leslie’s case (and presumably O’Leary’s case, since in practice there will be some other propositions about which O’Leary is deeply uncertain) I think we need to strengthen it to the following.
- P-INDIFFERENCE
- If G1 and G2 are properties such that:
- For all worlds w, there is at most one G1 in w and at most one G2 in w;
- For all worlds w, there is a G1 in w iff there is a G2 in w; and
- For all worlds w where there is a G1 in w, the G1 and the G2 have indistinguishable experiences; then
G1 and G2 deserve equal credence.
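To see why the gap noted just before this definition arises (again a gloss of my own), write ‘first’ for the proposition that this is Leslie’s first awakening. If the candidate half-lives formed a countable partition of cells each with positive credence, the law of total probability would settle her unconditional credence, since

\[
\Pr(\text{first}) \;=\; \sum_{c} \Pr(\text{first} \mid \text{half-life} = c)\,\Pr(\text{half-life} = c) \;=\; \tfrac{1}{2}\sum_{c}\Pr(\text{half-life} = c) \;=\; \tfrac{1}{2}.
\]

But the candidate half-lives are uncountably many and each receives credence 0, so this route is blocked: conditional credences of 1/2 on every cell do not, by themselves, pin down the unconditional credence.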
Elga does not endorse either C-INDIFFERENCE or P-INDIFFERENCE, but I suspect he should given his starting assumptions. It is hard to believe if O’Leary is certain about everything save what time it is, then rationality imposes very strong constraints on his beliefs about time, while rationality imposes no such constraints should he (or Leslie) be uncertain about the half-life of Bernoullium. Put another way, it is hard to believe that in her current state Leslie could rationally assign credence 0.9 to this being her first awakening, but if she decided the half-life of Bernoullium is 1.415 nanoseconds, then she would be required to change that credence to 0.5. If we have INDIFFERENCE without P-INDIFFERENCE, that is possible. So I will assume in what follows that if C-INDIFFERENCE and P-INDIFFERENCE are false then INDIFFERENCE is heavily undermined.2
2 Note also that if P-INDIFFERENCE is false, then Dr Evil has an easy way out of the ‘brain race’ that comes up at the end of Elga’s paper. He need only be told about some new element without being told its half-life, and magically he is free to assign credence 1 to his being on the spaceship rather than on Earth. This would reduce the interest of the puzzle somewhat, I fear.
3 Out of sight, out of mind
Elga’s discussion presupposes two kinds of internalism. First, he assumes that some internalist theory of experience is true. Second, he assumes that some internalist theory of justification is true. If the first assumption is false it threatens the applicability of the theory. If the second assumption is false it threatens the truth of the theory.
An externalist theory of experience says that what kind of experience S is having is determined, inter alia, by what S is experiencing. While setting out such a view, John Campbell (2002, 124–26) says that two people sitting in duplicate prison cells looking at duplicate coffee cups will have different experiences, because one will have an experience of the coffee cup in her hand, and the other will not have an experience of that cup. This does not threaten INDIFFERENCE, but it does seem to render it trivial. On Campbell’s view, if two agents are able to make demonstrative reference to different objects, and there is no reason to think Elga’s agents in allegedly similar but not numerically identical predicaments cannot, they are having different experiences. Hence the situations are not really similar after all. Strictly speaking, this is good news for INDIFFERENCE, since it is hard given this view of experience to find counterexamples to it. But I doubt that Elga will be happy with this defence.
The second kind of internalist assumption is more threatening. Many externalists about justification think whether a particular experience justifies a belief for an agent depends not just on intrinsic features of that experience, but on the relationship between experiences of that kind and the world around the agent. In some versions of this, especially the version defended by Timothy Williamson (1998), whether an experience either constitutes or produces evidence depends on whether it constitutes or produces knowledge. Since it is not clear that any two similar agents know the same things (indeed it is clear that they do not have the same true beliefs), on Williamson’s theory it seems that the agents will not have the same evidence. In particular, it is possible that part of one agent’s evidence is inconsistent with her being the other agent. If part of her evidence is that she has hands, then she is not a brain-in-a-vat having experiences like hers, and she should not assign high credence to the claim that she is one, no matter what INDIFFERENCE says. So Elga needs to reject this kind of externalism about evidence. This is not a devastating objection. I am sure that Elga does reject Campbell’s and Williamson’s theories, so just raising them against him without argument would be question-begging. But this does mean that the target audience for INDIFFERENCE is smaller than for some philosophical claims, since adherents of Campbell’s or Williamson’s views will be antecedently disposed to think INDIFFERENCE is useless or false.
4 It’s Evidently Intransitive
Dakota is sitting in a bright green room. She is trying to reconstruct how she got there when Dr Evil informs her just what happened. An epistemology student, not coincidentally called Dakota, was snatched out of her study and duplicated 999 times over. The duplicates were then numbered (though we’ve lost which number was given to the original) and each was put in a coloured cell. The thousand coloured cells rotated slowly through the colour sphere, starting with cell 0 (the new home of Dakota number 0) being green, going blueish until cell 250 (for Dakota number 250) is just blue, then reddish until cell 500 is just red, swinging through the yellows with pure yellow reached at 750, and then back to the greens, with cell 999 being practically identical in colour to cell 0. For any n, cells number n and n+1 are indistinguishable. That means that Dakota number n is similar, in Elga’s sense, to Dakota number n+1, for their (apparent) experiences before being in the rooms are identical, and their experiences in the rooms are indistinguishable. Hence our Dakota, sitting in the bright green room, should assign equal credence to being Dakota number n and Dakota number n+1 for any n. But this is absurd. Since she can see that her walls are green, she should assign high credence to being Dakota number 0, and credence 0 to being Dakota number 500.
The problem here is that Elga wants to define an equivalence relation on predicaments, the relation deserving the same credence as, out of an intransitive relation, being indistinguishable from. There are two possible responses, each of them perfectly defensible.
First, Elga could deny the premise that the adjacent cells are indistinguishable. Although there is some prima facie plausibility to the claim that some different colours are indistinguishable, Delia Graff Fara (2001) has argued that this is false. It would mean committing to yet another controversial philosophical position, but if Elga endorsed Graff’s claims, he could easily deal with Dakota.
Secondly, he could tinker with the definition of similarity. Instead of saying that possibilia represent similar predicaments iff they are indistinguishable worldmates, he could say that they represent similar predicaments iff they are worldmates that are indistinguishable from the same predicaments. (This kind of strategy for generating an equivalence relation from an intransitive relation is borrowed from Goodman (1951).) Even if adjacent cells are indistinguishable from each other, they will not be indistinguishable from the same cells. This delivers the plausible result that the duplicate Dakotas stuck in the cells do not instantiate similar predicaments. Some might object that this move is ad hoc, but once we realise the need to make similar an equivalence relation, it seems clear enough that this is the most natural way to do that.
5 Morgan and Morgan and Morgan and Morgan
I think I outdid myself this time, said Dr Evil. I was just going along duplicating you, or at least someone like you, and the duplication process was taking less and less time. So I thought, I wonder what is the lower bound here? How quick can we make the duplication process? So I tried a few things to cut down the time it took, and I got a little better with practice, and, well, it turns out that the time taken can be made arbitrarily small. Before I knew it, there were infinitely many of you. Oops.
Morgan was a little shocked. She could cope with having a duplicate or two around, but having infinitely many duplicates was a little hard to take. On the other hand, and this was hard to think about, perhaps she should be grateful. Maybe she was one of the later ones created, and she wouldn’t have existed if not for Evil’s irrational exuberance. She started to ponder how likely that was, but she was worried that it required knowing more about Evil than any mortal could possibly know.
Well, continued Dr Evil, I did one thing right. As each duplicate was created I gave it a serial number, 0 for the original Morgan, 1 for the first duplicate and so on, so the bookkeeping will be easier. Don’t go looking for it, it’s written on your left leg in ectoplasmic ink, and you won’t be able to see it.
Now that makes things easier, thought Morgan. By INDIFFERENCE the probability that my serial number is x is 1/n, where n is the number of duplicates created. So dividing 1 by infinity, that’s zero. So the probability that my serial number is at most x is the probability that it’s zero plus the probability that it’s one plus … plus the probability that it’s x, and that’s still zero. So for any x, if he had stopped after x duplications, then with probability one I would not exist. I’m liking Evil more and more, though something bothers me about that calculation.
Morgan was right to worry. She’s just talked herself, with Elga’s help, into a violation of the principle of countable additivity. The additivity axiom in standard probability theory says that for any two disjoint propositions, the probability of their disjunction is the sum of their probabilities. The countable additivity axiom says that for any countable set of disjoint propositions, the probability that at least one of them is true is the sum of each of their probabilities. (It follows from the axioms of probability theory that this sum is always defined.) Here we have to alter these axioms slightly so they apply to properties rather than propositions, but still the principle of countable additivity seems plausible. But Morgan has to violate it. The probability she assigns to having some serial number or other is not zero, in fact it is one as long as she takes Evil at his word. But for each x, the probability that her serial number is x is zero. In symbols, we have
- Pr(\({\exists}\)x (Serial number = x)) = 1
- \({\Sigma_x}\) Pr(Serial number = x) = 0
But countable additivity says that these values should be equal.
Orthodoxy endorses countable additivity, but there are notable dissenters that are particularly relevant here. Bruno de Finetti (1974) argued that countable additivity should be rejected because it rules out the possibility of an even distribution across the natural numbers. De Finetti thought, as Morgan does, that we could rationally be in a position where we know of a particular random variable only that its value is a non-negative integer, and for every x we assign the same probability to the hypothesis that its value is x. Since that is inconsistent with countable additivity, all the worse for countable additivity. This is a decent argument, though as de Finetti himself noted, it has some counterintuitive consequences.
I decided, Dr Evil continued, to do something fairly spectacular with all these people. By some small tinkering with your physiology I found a way to make you immortal. Unfortunately, a quick scan of your psychology revealed that you weren’t capable of handling eternity. So every fifty years I will wipe all your memories and return you to the state you were in when duplicated. I will write, or perhaps I did write, on your right leg the number of times that your memories have been thus wiped. Don’t look, it’s also in ectoplasmic ink. Just to make things fun, I made enough duplicates of myself so that every fifty years I can tell you what happened. Each fifty-year segment of each physical duplicate will be an epistemic duplicate of every other such segment. How cool is that?3
3 Evil’s plan resembles in many respects a situation described by Jamie Dreier (2001) in his “Boundless Good”. The back story is a little different, but the situation is closely (and intentionally) modelled on his sphere of pain/sphere of pleasure example.
Morgan was not particularly convinced that it was cool, but an odd thought crossed her mind once or twice. She had one number L written on her left leg, and another number R written on her right leg. She had no idea what those numbers were, but she thought she might be in a position to figure out the odds that L \({\geq}\) R. So she started reasoning as follows, making repeated appeals to C-INDIFFERENCE. (She must also appeal to P-INDIFFERENCE at every stage if there are other propositions about which she is uncertain. Assume that appeal made.)
Let’s say the number on my left leg is 57. Then L \({\geq}\) R iff R < 58. But since there are 58 ways for R < 58 to be true, and infinitely many ways for R < 58 to be false, and by C-INDIFFERENCE each of these ways deserves the same credence conditional on L = 57, we get Pr(L \({\geq}\) R | L = 57) = 0. But 57 was arbitrary in this little argument, so I can conclude \({\forall}\)l: Pr(L \({\geq}\) R | L = l) = 0. This seems to imply that Pr(L \({\geq}\) R) = 0, especially since I know L takes some value or other, but let’s not be too hasty.
Let’s say the number on my right leg is 68. Then L \({\geq}\) R iff L \({\geq}\) 68. And since there are 68 ways for L \({\geq}\) 68 to be false, and infinitely many ways for it to be true, and by C-INDIFFERENCE each of these ways deserves the same credence conditional on R = 68, we get Pr(L \({\geq}\) R | R = 68) = 1. But 68 was arbitrary in this little argument, so I can conclude \({\forall}\)r: Pr(L \({\geq}\) R | R = r) = 1. This seems to imply that Pr(L \({\geq}\) R) = 1, especially since I know R takes some value or other, but now I’m just confused.
Morgan is right to be confused. She has not quite been led into inconsistency, because as she notes the last step, from \({\forall}\)l: Pr(L \({\geq}\) R | L = l) = 0 to Pr(L \({\geq}\) R) = 0, is not forced. In fact, the claim that this is always a valid inferential step is equivalent to the principle of countable additivity, which we have already seen a proponent of INDIFFERENCE in all its variations must reject. But it would be a mistake to conclude from this that we just have a standoff. What Morgan’s case reveals is that accepting the indifference principles that Elga offers requires giving up on an intuitively plausible principle of inference. That principle says that if the probability of p conditional on any member of a partition is x, then the probability of p is x. If we think that principle of inference is prima facie more plausible than Elga’s principle of indifference, as I think we should, that is pretty good prima facie evidence that Elga’s principle is wrong.
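For a countably infinite partition {B1, B2, …}, the standard route to that principle of inference runs through countable additivity (the derivation below is my gloss, not something in Elga or the text above):

\[
\Pr(p) \;=\; \sum_{i} \Pr(p \mid B_i)\,\Pr(B_i) \;=\; x \sum_{i} \Pr(B_i) \;=\; x.
\]

Both the first step (the countable form of the law of total probability) and the last (\(\sum_i \Pr(B_i) = 1\)) lean on countable additivity, which is exactly the axiom we have just seen a proponent of INDIFFERENCE must give up. That is why Morgan’s final step is not forced.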
The next three sections will be devoted to determining whether we can convert this persuasive argument into a knockdown argument (we cannot) and whether Elga’s arguments in favour of INDIFFERENCE do enough to overcome this prima facie argument that INDIFFERENCE is flawed (they do not). A concluding section notes how to redo this argument so it appeals only to potential rather than actual infinities.
6 Intermission
CHARYBDIS: I know how to make that argument stronger. Just get Evil to offer Morgan a bet on whether L \({\geq}\) R. Ask how much she’ll pay for a bet that pays €1 if L \({\geq}\) R and nothing otherwise. If she pays anything for it, tell her the value of L, whatever it is, and ask her if she’d like to sell that bet back for half what she paid for it. Since she now assigns probability zero to L \({\geq}\) R she’ll happily do that, and then she’ll have lost money. If she won’t pay anything for the bet to start with, offer her the reverse bet. She should pay €1 for that, and now apply the same tactics except tell her the value of R rather than L. Either way the stupid person will lose money.
SCYLLA: Very practical Charybdis, but we’re not sure it gets to the heart of the matter. Not sure. Well, let us say why rather than leaving it like that. For one thing, Morgan might not like playing dice with Evil, even if Evil is the source of her life. So she might have a maximum price of 0 for either bet.
CHARYBDIS: But then surely she’ll be turning down a sure win. I mean between the bets she has a sure gain of at least €1.
SCYLLA: And if she is offered both bets at once we’re sure she would take that gain, but as we heard your story she wasn’t.4
4 Compare the objection to Dutch Book arguments in Schick (1986).
CHARYBDIS: So does this mean her degree of belief in both R \({\geq}\) L and L \({\geq}\) R is 0?
SCYLLA: It might mean that, and of course some smart people have argued that that is coherent, much to the chagrin of your Bayesian friends we’re sure.5 But more likely it means that she just isn’t following the patterns of practical reasoning that you endorse.6 Also, we’re not so sure about the overall structure of the argument. We think your reasoning is as follows. Morgan ends up doing something silly, giving up money. (Well, we’re not sure that’s always silly, but let’s say it is here.) So something went wrong. So she has silly beliefs. That last step goes by fairly fast we think. From her making some mistake or other, we can only conclude that, well, she made some mistake or other, not that she made some particular mistake in the composition of her credences.7
5 For example, Shafer (1976).
6 Compare the state-dependent approach to decision-making discussed in Chambers and Quiggin (2000).
7 This point closely resembles an objection to Dutch Book reasoning made in Hájek (2005), though Scylla is much more sceptical about how much we can learn from these pragmatic arguments than Hájek is.
CHARYBDIS: What other mistake might she have made?
SCYLLA: There are many hidden premises in your chains of reasoning to conclusions about how Morgan should behave. For instance, she only values a €1 bet on L \({\geq}\) R at Pr(L \({\geq}\) R) if she knows she can’t buy that bet more cheaply elsewhere, or sell it for a larger price elsewhere. Even if those assumptions are true, Morgan may unreasonably believe they are false, and that might be her mistake.8 But even that isn’t our main concern. Our main concern is that you understate how bad Morgan’s position is.
8 Scylla’s reasoning here is based on Milne (1991), though of course Milne’s argument is much less condensed than that.
CHARYBDIS: What’s worse for a mortal than assured loss of money?
SCYLLA: Morgan is not a mortal any more, you know. And immortals we’re afraid are almost bound to lose money to clever enough tricksters. Indeed, a so-called Dutch Book can be made against any agent that (a) has an unbounded utility function and (b) is not overly opinionated, so there are still infinitely many ways the world could be consistent with their knowledge.9 That includes us, and you dear Charybdis. And yet we are not as irrational as that Morgan. I don’t think analogising her position to ours really strengthens the case that she is irrational.
9 This is proven in McGee (1999).
CHARYBDIS: Next you might say that making money off her, this undeserving immortal, is immoral.
SCYLLA: Perish the thoughts.
7 Risky Business?
There are two kinds of reasons to dislike indifference principles, both of them developed most extensively in Keynes (1921). The first, which we have been exploring a bit so far, is that such principles tend to lead to incoherence. The second is that such principles promote confusion between risk and uncertainty.
Often we do not know exactly what the world is like. But not all kinds of ignorance are alike. Sometimes, our ignorance is like that of a roulette player facing a fair wheel about to be spun. She knows not what will happen, but she can provide good reasons for assigning equal credence to each of the 37 possible outcomes of the spin. Loosely following Frank Knight (1921), we will say that a proposition like The ball lands in slot number 18 is risky. The distinguishing feature of such propositions is that we do not know whether they are true or false, but we have good reason to assign a particular probability to their truth. Other propositions, like say the proposition that there will be a nuclear attack on an American city this century, are quite unlike this. We do not know whether they are true, and we aren’t really in a position to assign anything like a precise numerical probability to their truth. Again following Knight, we will say such propositions are uncertain. In (1937) Keynes described a number of other examples that nicely capture the distinction being drawn here.
By ‘uncertain’ knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed. (Keynes 1937, 114–15)
Note that the distinction between risky and uncertain propositions is not the distinction between propositions whose objective chance we know and those whose objective chance we do not know. This identification would fail twice over. First, as Keynes notes, whether a proposition is risky or uncertain is a matter of degree, but whether we know something is, I presume, not a matter of degree.10 Second, there are risky propositions with an unknown chance. Assume that our roulette player turns away from the table at a crucial moment, and misses the ball landing in a particular slot. Now the chance that it lands in slot 18 is 1 (if it did so land) or 0 (otherwise), and she does not know which. Yet typically, the proposition The ball lands in slot 18 is still risky for her, for she has no reason to change her attitude towards the proposition that it did land in slot 18.
10 Though see Hetherington (2001) for an argument to the contrary.
My primary theoretical objection to INDIFFERENCE is that the propositions it purports to provide guidance on are really uncertain, but it treats them as risky. Once we acknowledge the risk/uncertainty distinction, it is natural to think that our default state is uncertainty. Getting to a position where we can legitimately treat a proposition as risky is a cognitive achievement. Traditional indifference principles fail because they trivialise this achievement. An extreme version of such a principle says we can justify assigning a particular numerical probability, 0.5, to propositions merely on the basis of ignorance of any evidence telling for or against it. This might not be an issue to those who think that “probability is a measure of your ignorance.” (Poole, Mackworth, and Goebel 1998, 348) But to those of us who think probability is the very guide to life, such a position is unacceptable. It seems to violate the platitude ‘garbage in, garbage out’ since it takes ignorance as input, and produces a guide to life as output. INDIFFERENCE is more subtle than these traditional indifference principles, but this theoretical objection remains. The evidence that O’Leary or Morgan or Leslie has does not warrant treating propositions about their location or identity as risky rather than uncertain. When they must make decisions that turn on their identity or location, this ignorance provides little or no guidance, not a well-sharpened guide to action.
In this section I argue that treating these propositions as uncertain lets us avoid the traps that Morgan falls into. In the next section I argue that the case Elga takes to support INDIFFERENCE says nothing to the theorist who thinks that the INDIFFERENCE principle conflates risk and uncertainty. In fact, some features of that case seem to support the claim that the propositions covered by INDIFFERENCE are uncertain, not risky.
In (1921), Keynes put forward a theory of probability that was designed to respect the distinction between risky propositions and uncertain propositions. He allowed that some propositions, the risky ones and the ones known to be true or false, had a numerical probability (relative to a body of evidence) while other propositions have non-numerical probabilities. Sometimes numerical and non-numerical probabilities can be compared, sometimes they cannot. Arithmetic operations are all assumed to be defined over both numerical and non-numerical probabilities. As Ramsey (1926) pointed out, in Keynes’s system it is hard to know what \({\alpha}\) + \({\beta}\) is supposed to mean when \({\alpha}\) and \({\beta}\) are non-numerical probabilities, and it is not even clear that ‘+’ still means addition in the sense we are used to.
One popular modern view of probability can help Keynes out here. Following Ramsey, many people came to the view that the credal states of a rational agent could be represented by a probability function, that function being intuitively the function from propositions into the agent’s degree of belief in that proposition. In the last thirty years, there has been a lot of research on the theory that says we should represent rational credal states not by a single probability function, but by a set of such probability functions. Within philosophy, the most important works on this theory are by Henry Kyburg (1974), Isaac Levi (1974, 1980), Richard Jeffrey (1983) and Bas van Fraassen (1990). What is important here about this theory is that many distinctive features of Keynes’s theory are reflected in it.
Let S be the set of probability functions representing the credal states of a rational agent. Then for each proposition p we can define a set S(p) = {Pr(p): Pr \({\in}\) S}. That is, S(p) is the set of values that Pr(p) takes as Pr ranges over the probability functions in S. We will assume here that S(p) is an interval. (See the earlier works cited for the arguments in favour of this assumption.) When p is risky, S(p) will be a singleton, the singleton of the number we have compelling reason to say is the probability of p. When p is a little uncertain, S(p) will be a fairly narrow interval. When it is very uncertain, S(p) will be a wide interval, perhaps as wide as [0, 1]. We say that p is more probable than q iff for all Pr in S, Pr(p) > Pr(q), and as probable as q iff for all Pr in S, Pr(p) = Pr(q). This leaves open the possibility that Keynes explicitly left open, that for some uncertain proposition p and some risky proposition q, it might be the case that they are not equally probable, but neither is one more probable than the other. Finally, we assume that when an agent whose credal states are represented by S updates by learning evidence e, her new credal states are updated by conditionalising each of the probability functions in S on e. So we can sensibly talk about S(p | e), the set {Pr(p | e): Pr \({\in}\) S}, and this represents her credal states on learning e.
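To make the machinery concrete, here is a minimal sketch (my own illustration; the worlds, numbers and function names are invented for the example) of a credal state represented as a finite set of probability functions over a finite space, with S(p) and update by conditionalisation defined as in the text.

```python
from fractions import Fraction

# A probability function over a finite set of worlds is a dict from worlds
# to credences summing to 1; a proposition is a set of worlds.

def pr(prob, p):
    """Probability of proposition p under the function prob."""
    return sum(prob[w] for w in p)

def conditionalise(prob, e):
    """prob updated on evidence e (assumed to get positive probability)."""
    pe = pr(prob, e)
    return {w: (prob[w] / pe if w in e else Fraction(0)) for w in prob}

def S_of(S, p, e=None):
    """S(p), or S(p | e) if evidence e is supplied: the set of values Pr(p)
    takes as Pr ranges over the representor S."""
    if e is not None:
        S = [conditionalise(prob, e) for prob in S]
    return {pr(prob, p) for prob in S}

# Toy space: O'Leary's two candidate predicaments.
p_first = {'first'}

# A sharp (risky) credal state: S is a singleton, so S(p) is a point value.
sharp = [{'first': Fraction(1, 2), 'second': Fraction(1, 2)}]
print(S_of(sharp, p_first))          # {Fraction(1, 2)}

# An uncertain credal state: many functions, so S(p) spreads out.
vague = [{'first': Fraction(k, 10), 'second': Fraction(10 - k, 10)}
         for k in range(2, 9)]
print(sorted(S_of(vague, p_first)))  # values from 1/5 up to 4/5
```

A finite representor only approximates the interval-valued picture assumed in the text, but updating works the same way: conditionalise every member of S and collect the resulting values.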
(It is an interesting historical question just how much the theory sketched here agrees with the philosophical motivations of Keynes’s theory. One may think that the agreement is very close. If we take Keynes’s entire book to be a contextual definition of his non-numerical probabilities, a reading encouraged by Lewis (1970), then we should conclude he was talking about sets like this, with numerical probabilities being singleton sets.)
This gives us the resources to provide good advice to Morgan. Pick a monotone increasing function f from integers to [0, 1] such that as n \({\rightarrow}\) \({\infty}\), f(n) \({\rightarrow}\) 1. It won’t really matter which function you pick, though different choices of f might make the following story more plausible. Say that S(L \({\geq}\) R | L = l) = [0, f(l)]. The rough idea is that if L is small, then it is quite improbable that L \({\geq}\) R, although this is a little uncertain. As l gets larger, L \({\geq}\) R gets more and more uncertain. The overall effect is that we simply do not know what S(L \({\geq}\) R) will look like after conditionalising on the value of L, so we cannot apply the kind of reasoning Morgan used to draw conclusions about the probability of L \({\geq}\) R.
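For concreteness (the particular choice is mine; the text deliberately leaves f unspecified), take f(l) = l/(l+1). Then

\[
S(L \geq R \mid L = 1) = [0, \tfrac{1}{2}], \qquad S(L \geq R \mid L = 100) = [0, \tfrac{100}{101}],
\]

so conditional on a small value of L the proposition is quite improbable though a little uncertain, and conditional on a large value it is almost maximally uncertain.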
If we view the situations described by INDIFFERENCE as involving uncertainty rather than risk, this is exactly what we should expect. And note that in so doing, we need not undermine the symmetry intuition that lies behind INDIFFERENCE. Assume that F and G are similar predicaments, and I know that I am either F or G. INDIFFERENCE says I should assign equal probability to each, so S(I am F) = S(I am G) = {0.5}. But once we’ve seen how attractive non-numerical probabilities can be, we should conclude that all symmetry gives us is that S(I am F) = S(I am G), which can be satisfied if each is [0.4, 0.6], or [0.2, 0.8] or even [0, 1]. (I think that for O’Leary, for example, S(It is 1 o’clock) should be a set somewhat like this.) Since I would not be assigning equal credence to I am F and I am G if I satisfied symmetry using non-numerical probabilities, I would violate INDIFFERENCE without treating the propositions asymmetrically. Such a symmetric violation of INDIFFERENCE has much to recommend it. It avoids the incoherence that INDIFFERENCE leads to in Morgan’s case. And it avoids saying that ignorance about our identity can be a sharp guide to life.11
11 Bradley Monton (2002) discusses using sets of probability functions to solve another problem proposed by Elga, the Sleeping Beauty problem (Elga 2000). Monton notes that if Beauty’s credence in The coin landed heads is [0, 0.5] when she wakes up on Monday, then she doesn’t violate van Fraassen’s General Reflection Principle (van Fraassen 1995). (I assume here familiarity with the Sleeping Beauty problem.) Monton has some criticisms of this move, in particular the consequences it has for updating, that don’t seem to carry across to the proposal sketched here. But his discussion is noteworthy as a use of this approach to uncertainty as a way to solve problems to do with similar predicaments.
A referee noted that the intuitive characterisation here doesn’t quite capture the idea that we should treat similar predicaments alike. The requirement that if F and G are similar then S(I am F) = S(I am G) does not imply that there will be a symmetric treatment of F and G within S if there are more than two similar predicaments. What we need is the following condition. Let T be any set of similar predicaments, g any isomorphism from T onto itself, and Pr any probability function in S. Then there exists a Pr′ in S such that for all A in T, Pr(A) = Pr′(g(A)). When there are only two similar predicaments A and B this is equivalent to the requirement that S(A) = S(B), but in the general case it is a much stricter requirement. Still, it is a much weaker constraint than INDIFFERENCE, and not vulnerable to the criticisms of INDIFFERENCE set out here.
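The stricter condition can be checked mechanically. The sketch below (my own; the finite representor and predicament names are invented) tests whether a set S of probability functions treats a set T of similar predicaments symmetrically in the sense just stated.

```python
from itertools import permutations
from fractions import Fraction

def symmetric_over(S, T):
    """True iff for every permutation g of T and every Pr in S there is a
    Pr2 in S with Pr(A) = Pr2(g(A)) for all A in T.
    Each member of S is a dict from the predicaments in T to credences."""
    T = list(T)
    for g in permutations(T):
        relabel = dict(zip(T, g))
        for Pr in S:
            if not any(all(Pr[A] == Pr2[relabel[A]] for A in T) for Pr2 in S):
                return False
    return True

# Three similar predicaments treated symmetrically, yet no member of S
# assigns them all equal credence, so INDIFFERENCE itself is not satisfied.
T = ['F1', 'F2', 'F3']
S = [{'F1': Fraction(1, 2), 'F2': Fraction(1, 4), 'F3': Fraction(1, 4)},
     {'F1': Fraction(1, 4), 'F2': Fraction(1, 2), 'F3': Fraction(1, 4)},
     {'F1': Fraction(1, 4), 'F2': Fraction(1, 4), 'F3': Fraction(1, 2)}]
print(symmetric_over(S, T))   # True
```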
8 Boyfriend in a Coma
Elga argues for INDIFFERENCE by arguing it holds in a special case, and then arguing that the special case is effectively arbitrary, so if it holds there it holds everywhere. The second step is correct, so we must look seriously at the first step. Elga’s conclusions about the special case, DUPLICATION, eventually rest on treating an uncertain proposition as risky.
- DUPLICATION
- After Al goes to sleep researchers create a duplicate of him in a duplicate environment. The next morning, Al and the duplicate awaken in subjectively indistinguishable states.
Assume (in all these cases) that before Al goes to sleep he knows the relevant facts of the case. In that case INDIFFERENCE12 dictates that when Al wakes up his credence in I am Al should be 0.5. Elga argues this dictate is appropriate by considering a pair of related cases.
12 As with earlier cases, strictly speaking we need C-INDIFFERENCE and P-INDIFFERENCE to draw the conclusions suggested unless Al is somehow certain about all other propositions. I will ignore that complication here, and in section 9.
- TOSS-and-DUPLICATION
- After Al goes to sleep, researchers toss a coin that has a 10% chance of landing heads. Then (regardless of the toss outcome) they duplicate Al. The next morning, Al and the duplicate awaken in subjectively indistinguishable states.
Elga notes, correctly, that the same epistemic norms apply to Al on waking in DUPLICATION as in TOSS-and-DUPLICATION. So if we can show that when Al wakes in TOSS-and-DUPLICATION his credence in I am Al should be 0.5, that too will suffice to prove INDIFFERENCE correct in this case. The argument for that claim has three premises. (I’ve slightly relabeled the premises for ease of expression.)
- (1)
- Pr(H) = 0.1
- (2)
- Pr(H | (H \({\wedge}\) A) \({\vee}\) (T \({\wedge}\) A)) = 0.1
- (3)
- Pr(H | (H \({\wedge}\) A) \({\vee}\) (T \({\wedge}\) D)) = 0.1
Here Pr is the function from de se propositions to Al’s degree of belief in them, H = The coin lands heads, T = The coin lands tails, A = I am Al and D = I am Al’s duplicate. From (1), (2) and (3) and the assumption that Pr is a probability function it follows that Pr(A) = 0.5, as required. This inference goes through even in the Keynesian theory that distinguishes risk from uncertainty. Premise (1) is uncontroversial, but both (2) and (3) look dubious. Since the argument for (3) would, if successful, support (2), I’ll focus, as Elga does, on (3). The argument for it turns on another case.
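Before turning to that case, it is worth spelling out the entailment, which is left implicit (the reconstruction is mine). Write a = Pr(H \({\wedge}\) A), b = Pr(T \({\wedge}\) A), c = Pr(H \({\wedge}\) D) and d = Pr(T \({\wedge}\) D). Then:

\[
\begin{aligned}
&\text{(2)}\quad \frac{a}{a+b} = 0.1 \;\Rightarrow\; b = 9a, \qquad\qquad \text{(3)}\quad \frac{a}{a+d} = 0.1 \;\Rightarrow\; d = 9a,\\
&\text{(1)}\quad \Pr(T) = b + d = 0.9 \;\Rightarrow\; a = 0.05,\; b = d = 0.45, \quad\text{so}\quad \Pr(A) = a + b = 0.5.
\end{aligned}
\]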
- COMA
- As in TOSS-and-DUPLICATION, the experimenters toss a coin and duplicate Al. But the following morning, the experimenters ensure that only one person wakes up: If the coin lands heads, they allow Al to wake up (and put the duplicate into a coma); if the coin lands tails, they allow the duplicate to wake up (and put Al into a coma).
(It’s important that no one comes out of this coma, so assume that the victim gets strangled.)
Elga then argues for the following two claims. If in COMA Al gets lucky and pulls through, his credence in H should be 0.1, as it was before he entered the dream world. Al’s credence in H in COMA should be the same as his conditional credence in H given (H \({\wedge}\) A) \({\vee}\) (T \({\wedge}\) D) in TOSS-and-DUPLICATION. The second claim looks right, so the interest is on what happens in COMA. Elga argues as follows (notation slightly changed):
Before Al was put to sleep, he was sure that the chance of the coin landing heads was 10%, and his credence in H should have accorded with this chance: it too should have been 10%. When he wakes up, his epistemic situation with respect to the coin is just the same as it was before he went to sleep. He has neither gained nor lost information relevant to the toss outcome. So his degree of belief in H should continue to accord with the chance of H at the time of the toss. In other words, his degree of belief in H should continue to be 10%.
And this, I think, is entirely mistaken. Al has no evidence that his evidence is relevant to H, but absence of evidence is not evidence of absence. Four considerations support this conclusion.
First, Al gets some evidence of some kind or other on waking. Certain colours are seen, certain pains and sensations are sensed, certain fleeting thoughts fleet across his mind. Before he sleeps Al doesn’t know what these shall be. Maybe he thinks of the money supply, maybe of his girlfriend, maybe of his heroine, maybe of kidneys. He doesn’t know that the occurrence of these thoughts is probabilistically independent of his being Al rather than Dup, so he does not know they are probabilistically independent of H. So perhaps he need not retain the credence in H he had before he was drugged. Even if this evidence looks like junk, we can’t rule out that it has some force.
Secondly, the kind of internalism about evidence needed to support Elga’s position is remarkably strong. (This is where the concerns raised in Section 3 become most pressing.) Elga notes that he sets himself against both an extreme externalist position that says that Al’s memories and/or perceptions entail that he is Al and against an “intermediate view, according to which Al’s beliefs about the setup only partially undermine his memories of being Al. According to such a view, when Al wakes up his credence in H ought to be slightly higher than 10%.” But matters are worse than that. Elga must also reject an even weaker view that says that Al might not know whether externalism about evidence is true, so he does not know whether his credence in H should change. My view is more sympathetic to that position. When Al wakes, he does not know which direction his credences should move, or indeed whether there is such a direction, so his credence in H should be a spread of values including 0.1.
Thirdly, Al’s position looks like cases where new evidence makes risky propositions uncertain. Mack’s betting strategy for the Gold Cup, a horse race with six entrants, is fairly simple. He rolls a fair die, and bets on whatever number comes up. Jane knows this is Mack’s strategy, but does not know how the die landed this time. Nor does she know anything about horses, so the propositions Horse n wins the Gold Cup are uncertain for Jane for each n. Call these propositions hn, and the proposition that Mack’s die landed n dn. Right now, d2 is risky, but h2 is uncertain. Jane hears a party starting next door. Mack’s won. Jane has learned, inter alia, d2 \(\leftrightarrow\) h2. Now it seems that d2, Mack’s die landed 2, inherits the uncertainty of h2, Horse number 2 won the Gold Cup. The formal theory of uncertainty I sketched allows for this possibility. It is possible that there be p, e such that S(p) is a singleton, while S(p | e) is a wide interval, in theory as wide as [0, 1]. This is what happens in Jane’s case, and it looks like it happens in Al’s case too. H used to be risky, but when he wakes he comes to learn H \({\leftrightarrow}\) A, just as Jane learned d2 \(\leftrightarrow\) h2. In each case, the left-hand clause of the biconditional inherits the uncertainty of the right-hand clause.
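The arithmetic behind Jane’s case is easy to check. In the sketch below (my own construction, with an explicit independence assumption the text does not need to make) every probability function in S assigns 1/6 to d2 but a different prior value q to h2; conditionalising each on the biconditional spreads the values for d2 across nearly all of [0, 1].

```python
from fractions import Fraction

def d2_given_biconditional(q):
    """Pr(d2 | d2 <-> h2), assuming d2 and h2 are independent,
    Pr(d2) = 1/6 and Pr(h2) = q."""
    d2 = Fraction(1, 6)
    both = d2 * q                     # Pr(d2 & h2)
    neither = (1 - d2) * (1 - q)      # Pr(not-d2 & not-h2)
    return both / (both + neither)

# Before the news, every member of S agrees that Pr(d2) = 1/6 (d2 is risky).
# After conditionalising on d2 <-> h2, the values fan out with q (d2 is uncertain).
for q in [Fraction(1, 100), Fraction(1, 6), Fraction(1, 2), Fraction(99, 100)]:
    print(q, d2_given_biconditional(q))   # 1/496, 1/26, 1/6, 99/104
```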
Finally, H being uncertain for Al when he wakes in COMA is consistent with the intuition that Al has no reason to change his credences in H in one direction or another when he says goodbye to his duplicate. (Or, for all he knows, to his source.) Perhaps externalist theories of evidence provide some reason to raise these credences, as suggested above, but I do not rely on such theories. What I deny is that the absence of a reason to move one way or the other is a reason to stay put. Al’s credence in H might change in a way that reflects the fact H is now uncertain, just like A is in COMA, just like A is in TOSS-and-DUPLICATION, and, importantly, just like A is in DUPLICATION. I think the rest of Elga’s argument is right. DUPLICATION is a perfectly general case. In any such case, Al should be uncertain, in Keynes’s sense, whether he is the original or the duplicate.
9 Shooting Dice can be Dangerous
The good news, said Dr Evil, is that you are still mortal. Odysseus was not as upset as Dr Evil had expected. The bad news is that I’m thinking of torturing you. I’m going to roll this fair die, and if it lands 6 you will be tortured. If it does not, you will be (tentatively) released, and I’ll create two duplicates of you as you were when you entered this room, and repeat this story to both of them. Depending on another roll of this fair die, I will either torture them both, or create two duplicates of each of them, and repeat the process until I get to torture someone.13
13 Dr Evil’s plans create a situation similar to the well known ‘shooting room’ problem. For the best analysis of that problem see Bartha and Hitchcock (1999). Dr Evil has changed the numbers involved in the puzzle a little bit to make the subsequent calculations a little more straightforward. He’s not very good at arithmetic you see.
Odysseus thought through this for a bit. So I might be a duplicate you’ve just created, he said. I might not be Odysseus.
You might not be, said Dr Evil, although so as to avoid confusion if you’re not him I’ll use his name for you.
What happens if the die never lands 6, asked Odysseus. I’ve seen some odd runs of chance in my time.
I wouldn’t be so sure of that, said Dr Evil. Anyway, that’s why I said I would tentatively release you. I’ll make the die rolls and subsequent duplication quicker and quicker so we’ll get through the infinite number of rolls in a finite amount of time. If we get that far I’ll just bring everyone back and torture you all. Aren’t I fair?
Fairness wasn’t on Odysseus’s mind though. He was trying to figure out how likely it was that he would be tortured. He was also a little concerned about how likely it was that he was the original Odysseus, and if he was not whether Penelope too had been duplicated. As it turns out, his torturous computations would assist with the second question, though not the third. Two thoughts crossed his mind.
I will be tortured if that die lands 6, which has a chance of 1 in 6, or if it never lands 6 again, which has a chance of 0. So the chance of my being tortured is 1 in 6. I have no inadmissible evidence, so the probability I should assign to torture is 1 in 6.
Let’s think about how many Odysseuses there are in the history of the world. Either there is 1, in which case I’m him, and I shall be tortured. Or there are 3, in which case two of them shall be tortured, so the probability that I shall be tortured is 2 in 3. Or there are 7, in which case four of them shall be tortured, so the probability that I shall be tortured is 4 in 7. And so on, it seems like the probability that I shall be tortured approaches 1 in 2 from above as the number of Odysseuses approaches infinity. Except, of course, in the case where it reaches infinity, when it is again certain that I shall be tortured. So it looks like the probability that I will be tortured is above 1 in 2. But I just concluded it is 1 in 6. Where did I go wrong?
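The pattern in Odysseus’s second thought is easy to tabulate (the short check below is mine, but the numbers just spell out what he says): if \(2^k - 1\) Odysseuses are made in total, the \(2^{k-1}\) created in the final round are tortured, and the ratio falls towards 1/2 from above.

```python
from fractions import Fraction

for k in range(1, 9):
    made = 2 ** k - 1          # 1, 3, 7, 15, ... Odysseuses in total
    tortured = 2 ** (k - 1)    # those created in the final round (the original, if k = 1)
    print(made, Fraction(tortured, made))   # 1, 2/3, 4/7, 8/15, ... -> 1/2
```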
In his second thought, Odysseus appeals frequently to INDIFFERENCE. He then appeals to something like the conglomerability principle that tripped up Morgan. The principle Odysseus uses is a little stronger than the principle Morgan used. It says that if, conditional on each member of a partition, the probability of p is greater than x, then the probability of p is greater than x. As we noted, this principle cannot be accepted in its full generality by one who rejects countable additivity. And one who accepts INDIFFERENCE must reject countable additivity. So where Odysseus goes wrong is in appealing to this inference principle after previously adopting an indifference principle inconsistent with it.
This does not mean the case has no interest. Morgan’s case showed that when we have an actual infinity of duplicates, INDIFFERENCE can lead to counterintuitive results, and that the best way out might be to say that Morgan faced a situation of uncertainty, not one of risk. But it might have been thought that something special about Morgan’s case, that she has infinitely many duplicates, might be responsible for the problems here. So it may be hoped that INDIFFERENCE can at least be accepted in more everyday cases. Odysseus shows that hope is in vain. All we need is the merest possibility of there being infinitely many duplicates, here a possibility with zero probability, to create a failure of conglomerability. This suggests that the problems with INDIFFERENCE run relatively deep.
The details of how Odysseus’s case plays out given INDIFFERENCE are also interesting, especially to those readers not convinced by my refutation of INDIFFERENCE. For their benefit, I will close with a few observations about how the case plays out.
As in Morgan’s case, we can produce two different partitions of the possibility space that seem to support different conclusions about Odysseus’s prospects. Assume for convenience that Dr Evil assigns a serial number to each Odysseus he makes, the Homeric hero being number 1, the first two duplicates being 2 and 3, and so on. Let N stand for the number of our hero, M for the number of Odysseuses that are made, and T for the property of being tortured. Then given INDIFFERENCE it behoves Odysseus to have his credences governed by the following Pr function.
- (4a)
- \({\forall}\)k Pr(T | M = \(2^{k} - 1\)) = \(2^{k-1}/(2^{k} - 1)\)
- (4b)
- Pr(T | M = \({\infty}\)) = 1
- (5)
- \({\forall}\)n Pr(T | N = n) = 1/6
Between (4a) and (4b) we cover all possible values for M, and conditional on each of them the probability of T is greater than 1/2. More interesting are Odysseus’s calculations about whether he is the Homeric hero, i.e. about whether N = 1. Consider first a special case of this, what the value of Pr(N = 1 | N < 8) is. At first glance, it might seem that this should be 1/7, because there are seven possible values for N less than 8. But this is too quick. There are really eleven possibilities to be considered.
| M = 1 | M = 3 | M > 3 |
|---|---|---|
| F1: N = 1 and M = 1 | F2: N = 1 and M = 3 | F5: N = 1 and M > 3 |
|  | F3: N = 2 and M = 3 | F6: N = 2 and M > 3 |
|  | F4: N = 3 and M = 3 | F7: N = 3 and M > 3 |
|  |  | F8: N = 4 and M > 3 |
|  |  | F9: N = 5 and M > 3 |
|  |  | F10: N = 6 and M > 3 |
|  |  | F11: N = 7 and M > 3 |
By INDIFFERENCE, each of the properties in each column should be given equal probability. So we have
\[ \begin{aligned} x &= Pr(F_1 | N < 8) \\ y &= Pr(F_2 | N < 8) = Pr(F_3 | N < 8) = Pr(F_4 | N < 8) \\ z &= Pr(F_5 | N < 8) = \dots = Pr(F_{11} | N < 8) \end{aligned} \]
We just have to solve for x, y and z. By the Principal Principle we get
Pr(M = 1 | N = 1) = 1/6
\({\therefore}\) x = (x + y + z) / 6
Pr(M = 3 | N = 1 and M \({\geq}\) 3) = 1/6
\({\therefore}\) y = (y + z) / 6
And since these 11 possibilities are all the possibilities for N < 8, we have
- x + 3y + 7z = 1
Solving for all these, we get x = 3/98, y = 5/196 and z = 25/196, so Pr(N = 1 | N < 8) = x + y + z = 9/49. More generally, we have the following (the proof of this is omitted): \[Pr(N = 1 | N < 2^{k+1}) = \frac{6^k}{\sum_{i=0}^{k}6^i10^{k-i}}\]
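Since the proof of the displayed formula is omitted, here is a small check of my own. It solves the analogue of the x, y, z system for general k, using the constraints as I have reconstructed them from the N < 8 case (one Principal Principle equation per column, plus normalisation), and confirms both the 9/49 figure and the general formula for small k.

```python
from fractions import Fraction

def pr_N1(k):
    """Pr(N = 1 | N < 2^(k+1)) under INDIFFERENCE plus the Principal Principle."""
    # Column j (1 <= j <= k) is M = 2^j - 1, with 2^j - 1 equiprobable cells;
    # column k+1 is the remainder, with 2^(k+1) - 1 cells (N = 1, ..., 2^(k+1) - 1).
    x = [Fraction(0)] * (k + 2)   # x[j]: unnormalised credence per cell of column j
    x[k + 1] = Fraction(1)        # free scale, removed by normalisation below
    for j in range(k, 0, -1):
        # Pr(M = 2^j - 1 | N = 1 and M >= 2^j - 1) = 1/6, i.e. 5*x[j] = x[j+1] + ... + x[k+1]
        x[j] = sum(x[j + 1:], Fraction(0)) / 5
    cells = [2 ** j - 1 for j in range(1, k + 1)] + [2 ** (k + 1) - 1]
    total = sum(c * xj for c, xj in zip(cells, x[1:]))
    return sum(x[1:], Fraction(0)) / total   # the N = 1 cells, one per column

def closed_form(k):
    return Fraction(6 ** k, sum(6 ** i * 10 ** (k - i) for i in range(k + 1)))

assert all(pr_N1(k) == closed_form(k) for k in range(1, 8))
print(pr_N1(2))   # Fraction(9, 49), matching the text
```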
Since the RHS \({\rightarrow}\) 0 as k \({\rightarrow}\) \({\infty}\), Pr(N = 1) = 0. Our Odysseus is probably not the real hero. Similar reasoning shows that Pr(N = n) = 0 for all n. So we have another violation of countable additivity. But we do not have, as in Morgan’s case, a constant distribution across the natural numbers. In a sense, this distribution is still weighted towards the bottom, since for any n > 1, Pr(N = 1 | N = 1 \({\vee}\) N = n) > 1/2. Of course, I don’t think INDIFFERENCE is true, so these facts about what Odysseus’s credence function will look like under INDIFFERENCE are of purely mathematical interest to me. But it may be that someone more enamoured of INDIFFERENCE can use this ‘unbalanced’ distribution to explain some of the distinctive features of the odd position that Odysseus is in.14
14 Thanks to Jamie Dreier, Adam Elga and an anonymous referee for helpful discussions about this paper and suggestions for improvements.
References
Citation
@misc{weatherson2005,
author = {Weatherson, Brian},
title = {Should {We} {Respond} to {Evil} {With} {Indifference?}},
volume = {70},
number = {3},
pages = {613-635},
date = {2005-05-01},
url = {https://brian.weatherson.org/quarto-papers/posts/evil/should-we-respond-to-evil-with-indifference.html},
doi = {10.1111/j.1933-1592.2005.tb00417.x},
langid = {en},
abstract = {In a recent article, Adam Elga outlines a strategy for
“Defeating Dr Evil with Self-Locating Belief”. The strategy relies
on an indifference principle that is not up to the task. In general,
there are two things to dislike about indifference principles:
adopting one normally means confusing risk for uncertainty, and they
tend to lead to incoherent views in some “paradoxical” situations. I
argue that both kinds of objection can be levelled against Elga’s
indifference principle. There are also some difficulties with the
concept of evidence that Elga uses, and these create further
difficulties for the principle.}
}