A Problem for Interest-Relative Epistemology
Here’s a problem for interest-relative epistemology, one that has been raised by Adam Zweber, and by Charity Anderson and John Hawthorne.
Assume that Chooser has a number of options, each of which could be Good or Bad. If an option is Good, Chooser gets 1; if it is Bad, Chooser gets 0. Each option has a tiny probability e of being Bad, but is almost certainly Good. Whichever option Chooser selects, they get an expected return of 1-e, so they are indifferent between the options. Conditional on a particular option being Good, though, that option has a (certain) return of 1, which is greater than 1-e. So their preferences conditional on that option being Good are not the same as their unconditional preferences. And on interest-relative theories, that means they do not know that the option is Good.
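Spelled out as a quick worked equation, writing V(i) for Chooser’s return from option i, this is just the arithmetic behind the claims above:

\[
\mathrm{E}[V(i)] = (1-e)\cdot 1 + e\cdot 0 = 1-e,
\qquad
\mathrm{E}[V(i) \mid i \text{ is Good}] = 1 > 1-e .
\]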
That was pretty abstract, so let’s make it a bit more concrete. Chooser is making pasta for dinner and needs a tin of tomatoes. They go to their local store, find the shelf with their favorite brand of tinned tomatoes and … well, actually, that’s our question: what do they do next? To a first approximation, they face the decision from the previous paragraph. Each tin might be Good or Bad, and each has a truly minuscule chance of being Bad. A slightly better model would allow more continuous variation between Good and Bad, but that wouldn’t change the analysis, so this will be enough.
Zweber, and Anderson and Hawthorne, both try to use this case to show that interest-relative theories lead to closure failure. Their idea is that conditionalising on every tin being Good doesn’t change Chooser’s preferences, so according to interest-relative theories Chooser knows that every tin is Good; but, for each tin, Chooser does not know that that tin is Good. That would be a very bad closure failure, but I don’t think this part of their argument goes through. It’s only a very simple version of interest-relative theories that falls into this trap, and there are lots of ways to avoid it.
But the original problem is still serious. If people don’t know that tins of tomatoes are Good when they are doing the groceries, that’s a really bad sceptical result. We should try to avoid it, and avoiding it is the aim of chapter 6 of my knowledge manuscript.
How to Do the Groceries
The solution here is to take a step back. The argument presupposes that the right decision theory for this case is expected utility maximisation. Now, that’s certainly a very popular theory of rational decision. Indeed, I use it most of the time when I’m doing decision theory. But it’s also a very idealised theory, and we should think a little about the idealisations.
Here’s an argument that expected utility maximisation is not the right decision theory for doing the groceries.
- When buying tinned products, there are typically many tins which it is rationally permissible to select.
- If the right decision theory is expected utility maximisation, then there are many tins which it is rationally permissible to select only when there are many tins with exactly the same expected utility.
- It is rare that there are many tins that have exactly the same expected utility.
- Therefore, expected utility maximisation is not the right decision theory for doing the groceries.
Premise 1 seems like just an obvious fact about grocery shopping. There are so many tins. The right thing to do is to pick one. Maybe not one that is hard to reach, or one that is clearly damaged. But, you know, just pick. It’s a long shopping list and a short time until the kiddo needs to be collected from school; don’t overthink this.
Premise 2 is true by the definition of expected utility maximisation: an option is permissible only if no alternative has higher expected utility, so several tins can all be permissible only if they are tied for the maximum.
So the work is all on premise 3. And here it’s worth reflecting on what it means for two choices to have exactly the same expected utility. Any evidential or practical asymmetry between the tins, however small, breaks the tie. If it takes a fraction more energy to reach one tin rather than another, the harder-to-reach tin is strictly the worse choice. If one tin has a fractionally torn label, or an ever so slightly faded label that suggests it has been in the store longer, that’s some evidence that it is Bad. So, strictly speaking, it can’t be chosen.
This is nonsense. You should just pick. And you should just pick for an obvious reason. Calculating the energy cost of moving your arm to get this tin rather than that tin takes more time and energy than the difference between the tins could ever be worth. The fading of the labels only makes a difference to the Good/Bad probability in the 10th or 20th decimal place. It’s not worth your time to bother.
The right decision theory to use in grocery shopping is not standard expected utility theory, but a theory that is sensitive to computational costs. You need to use non-ideal decision theory, since you’re a non-ideal decider. (I assume a limited readership.) And according to non-ideal decision theory, minuscule differences between the tins, like minor label tears or fading, should simply be ignored. The model of the world one should work with is that tins which look basically Good are in fact Good, not that they are Good with probability 0.9999…9, for some hard-to-determine number of 9s.
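Here is a toy sketch, in Python, of the contrast between the two ways of modelling the shelf. The particular probabilities, tin descriptions, and the cut-off for “too small to be worth computing” are invented for illustration; they are not meant to match any formal model in the book.

```python
# Toy contrast between ideal expected utility maximisation and a coarse,
# cost-sensitive model of the tins. All numbers are made up for illustration.

def expected_utility(p_bad):
    # A tin is worth 1 if Good and 0 if Bad.
    return (1 - p_bad) * 1 + p_bad * 0

# Minuscule evidential differences between the tins (a faded label, a small tear).
tins = {
    "tin at the front":    1e-6,
    "faded label":         1e-6 + 1e-12,
    "slightly torn label": 1e-6 + 2e-12,
}

eus = {name: expected_utility(p) for name, p in tins.items()}
best = max(eus.values())

# Ideal theory: an option is permissible only if it is tied for the maximum.
ideal_permissible = [name for name, eu in eus.items() if eu == best]

# Non-ideal, cost-sensitive model: differences too small to be worth computing
# are ignored, and every Good-looking tin is simply treated as Good.
NEGLIGIBLE = 1e-9
coarse_permissible = [name for name, eu in eus.items() if best - eu < NEGLIGIBLE]

print(ideal_permissible)   # ['tin at the front'] -- only the exact maximiser
print(coarse_permissible)  # all three tins -- just pick one
```

The threshold itself isn’t the point; the point is that any procedure fine-grained enough to care about differences like these costs more to run than it could possibly recover.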
Back to Epistemology
If we approach the groceries using non-ideal decision theory, the problem for interest-relative theories goes away. Unconditionally, Chooser thinks all the tins are Good. That is, the model Chooser uses for decision making takes it as a fixed point that the tins are Good. Conditional on a particular tin being Good, nothing changes. Chooser is still indifferent between the tins.
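A quick way to see why nothing changes: writing G_i for the proposition that tin i is Good, the coarse model sets Pr(G_i) = 1, and conditionalising on a probability-one proposition leaves every probability, and hence every expected utility, untouched:

\[
\Pr(A \mid G_i) = \frac{\Pr(A \wedge G_i)}{\Pr(G_i)} = \Pr(A \wedge G_i) = \Pr(A)
\quad \text{whenever } \Pr(G_i) = 1 .
\]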
Since conditionalising on a particular tin being Good doesn’t change Chooser’s preferences, an interest-relative theory no longer has to say that Chooser fails to know the tin is Good. The problem for interest-relative theories is solved.
But you might worry that the problem will come back for people who, unlike us, can in fact use ideal decision theory. To think through this case, it helps to consider people who can better approximate the decision-theoretic ideal: traders backed by immensely powerful computers.¹ Won’t they be in the same position as the ideal Chooser?
¹ Note that these are approximations to the decision-theoretic ideal and not, among other things, the moral ideal.
In their case, the sceptical result is not problematic. Imagine that Trader has a bunch of possible government bonds they can buy. Each bond could be Good or Bad. If a bond is Good, Trader gets a nice return, say 6%. If it is Bad, Trader loses 100%. Trader is trying to invest a billion dollars, so there is something riding on this.
Trader goes to the computer, and the computer says that each bond has a one in a million chance of being Bad. That’s really low, similar to the chance of a tin of food being Bad. But unlike Chooser in the supermarket, Trader should actually take this into account. After all, the possible Badness of a bond reduces the expected return on the billion-dollar investment by roughly $1000. Even for high-paid Trader, an extra $1000 in expected return is worth a few minutes’ work.
So Trader tells the computer to investigate more thoroughly, i.e., to run an analysis of the bonds that takes even the computer a few minutes. When Trader comes back with their coffee, the computer has decided that one of the bonds has only a one in a billion chance of being Bad. Trader realises that this bond has a roughly $999 higher expected return than the others, so they take it.
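As a back-of-the-envelope check on those figures, here is a minimal sketch that approximates the expected cost of a bond’s possible Badness as the chance of Bad times the $1 billion principal, ignoring the forgone 6% gain; that approximation is how the round numbers above come out.

```python
# Rough expected cost of a bond's possible Badness:
# chance of Bad times the principal at risk.
principal = 1_000_000_000  # $1 billion to invest

cost_one_in_a_million = principal / 1_000_000      # one-in-a-million Bad bond
cost_one_in_a_billion = principal / 1_000_000_000  # one-in-a-billion Bad bond

print(cost_one_in_a_million)                          # 1000.0
print(cost_one_in_a_million - cost_one_in_a_billion)  # 999.0 -- the extra expected
                                                      # return from the safer bond
```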
If Trader knows all the bonds are Good (after all, it’s only a freak one in a million shot that any of them is Bad), this behaviour doesn’t make sense. Why pass up a bond you know is Good for one that you think is somehow more likely to be Good? That sounds incoherent.
That’s all to say: if Trader (or Chooser) has the ability to accurately measure probabilities to many decimal places, and there is so much at stake that it is worthwhile to exercise that ability, it makes sense to say that they don’t know which things are Good without a lot of evidence. So the intuitions that Zweber, and Anderson and Hawthorne, draw on don’t stick in situations like Trader’s.
For Much More Detail
I go over this argument in much more depth in chapter 6 of Knowledge: A Human Interest Story. This is, I think, the most novel chapter of the book. In my earlier work on interest-relative epistemology, I’d taken for granted that sensible decision makers maximise expected utility. That, however, isn’t true unless the decision makers are not just sensible but super-human. Knowledge is for humans, not super-humans. The right theory of knowledge for humans should integrate with the right theory of decision for humans. That theory says to ignore things that aren’t worth thinking about, given that thinking is costly. Once we take that into account, the interest-relative theory gives the right results.