Brian Weatherson


Opening of Knowledge: A Human Interest Story
Knowledge is sensitive to interests.
Two people alike in all other relevant respects, but with different interests, have different knowledge.
Let p be the following proposition:
We all know that p is true.
https://en.wikipedia.org/wiki/List_of_the_busiest_airports_in_Europe
Ansgar is offered the following bet, take it or leave it.
Some say this kind of case shows that knowledge is particularly sensitive to the practical interests of the (would-be) knower.
I reject this for two reasons.
The other conclusion that’s sometimes drawn from cases like this one is that knowledge goes away in high stakes cases.
This is not right for two reasons.
Willian is playing a game show on TV. One of the games has the following rules.
Two plus two equals four.
Frankfurt am Main airport is the busiest airport in Germany.
It’s the odds, not the stakes that matter.
Choosing to play Blue-True over Red-True is a risk with no upside, so it’s an infinitely long odds bet.
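The odds-not-stakes point can be put as a toy expected-value calculation. The numbers below are hypothetical illustrations, not from the talk: a high-stakes bet at good odds can have a large positive expected value, while a bet with no upside at all has negative expected value however small the stakes are.

```python
# Toy illustration of "it's the odds, not the stakes, that matter".
# All figures are hypothetical, chosen only for illustration.

def expected_value(p_win: float, gain: float, loss: float) -> float:
    """Expected value of a bet that pays `gain` with probability
    p_win and costs `loss` with probability 1 - p_win."""
    return p_win * gain - (1 - p_win) * loss

# High stakes, good odds: risk $1,000 to win $1,000,000 at 99% confidence.
high_stakes = expected_value(0.99, gain=1_000_000, loss=1_000)

# Tiny stakes, no upside: zero gain, small possible loss.
# Any such bet has negative expected value at any probability short of 1,
# which is the sense in which it is an infinitely long odds bet.
no_upside = expected_value(0.99, gain=0, loss=1)

print(high_stakes)  # large and positive
print(no_upside)    # negative, despite the trivial stakes
```

The contrast is that refusing the second bet is rational at any stakes, because there is nothing to gain; stakes alone do not explain it.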
I’ll say more about these on later slides.
This is the view I support.
Ansgar and Willian lose knowledge when they are offered these bets.
The sceptic agrees with the conclusion of the argument that Ansgar and Willian don’t know that p.
But sceptics reject interest-relativity, because on their view Ansgar and Willian lack knowledge even in normal situations.
I think this view is implausible, but I’ll put off arguing against it until the historical section of the talk.
By epistemicist, I mean here the combination of these three views:
This view is committed to saying that Ansgar and Willian are rational in taking the bet, and playing Blue-True.
That’s not particularly intuitive, but the best epistemicists have an interesting story about why intuition disagrees with their position here.
The intuitions are correct intuitions about actors, mistaken for (false) claims about actions.
So here’s the key thought.
I actually think this is not an implausible thing to say about Ansgar.
Perhaps maximising expected value, even when the stakes are so high, is a sign of bad character.
But I think it’s not a very plausible thing to say about Willian. That’s the better case for interest-relativity.
The more common view, at least in contemporary epistemology, is that we have to give up natural connections between knowledge and action.
In the book I go over several ways to make that last bullet point sound even more implausible.
But I don’t think many people think it’s naturally where you’d want to end up.
Rather, they think that having knowledge be interest-sensitive is so absurd that it’s worth dealing with this cost.
In the book I make two broad kinds of response to this last point.
A quick tour through four stops:
Here’s an anti-sceptical argument you can find in the Nyāya philosophers.
If scepticism is true, what is the difference between rational and irrational action?

This is a challenge that sceptics have taken up.
Philo, one of the last sceptical heads of The Academy, worried a lot about it.

His solution was to say that while we cannot know things, we can rationally accept some things, and rational action is based on rational acceptance.
Now we could talk about how, and whether, that is meant to work with scepticism.
But I want to note one other thing about Philo’s view.
He thought that how much evidence you needed for rational acceptance was dependent on how much was at stake.
He had an interest-relative view of rational acceptance.
A very simple way of getting an interest-relative view would be to start with Philo and add two things.
That’s very close to my view.


Leaping ahead 1500 years, medieval ethicists were worried about a somewhat practical problem.
Imagine that a judge is faced with a murder defendant, and the evidence of guilt is overwhelming.
The judge finds the defendant guilty, and the defendant is executed. But, sadly, he was innocent.
Is the judge a murderer? Has the judge sinned?
The response was to develop a concept of moral certainty.
The judge has not sinned if, given the evidence, it is morally certain that the defendant is guilty.
This isn’t meant to be a triviality. In high stakes cases, it takes much more to be morally certain than in low stakes cases.
This isn’t an interest-relative account of knowledge.
The whole point is that we’re thinking of standards for beliefs that turn out to be false.
But it’s very close.
What’s common to the last two cases is this.
I’ll end with a conjecture.
Contemporary orthodoxy in epistemology is largely about finding standards that satisfy three requirements.
It’s only with the rise of scientific journals that the search for a standard meeting these three conditions becomes important.

Pasnau suggests that in the story about how 1 and 2 became important, we should give special attention to John Wilkins.
It’s easy to come up with circumstances where we have philosophical uses for non-maximal standards.
We’re trying to judge whether people are taking rational precautions about lions (Philo), or whether judges are being moral (medievals).
In both cases, having standards be sensitive to the practical situation seems sensible.
When could we want a standard that is non-maximal, but where the particular practical situation is irrelevant?
Answer: When we’re publishing a scientific journal.
Three constraints on publishing a journal.
My conjecture, and it is just a conjecture, is that this is the real founding of epistemology as we know it today.
It’s the task of coming up with standards like “Is this acceptable to publish in a scientific journal?”
That’s an interesting question, but it’s not the only question we might be interested in.
In general, I think Philo and the medievals were on the right track.
Knowledge is to be used, and the standards for knowledge should be sensitive to what it is used for.
That’s fundamentally why we should have an interest-relative theory of knowledge.
And hopefully through this book I’ll convince some people of that!

