Comments on Alexander Wentzell’s “An Argument for Mentalism About Epistemic Justification”

epistemology

Affiliation: University of Michigan

Published: April 11, 2025

Abstract

Comments for a session next week at the APA Pacific.

Alexander Wentzell’s nice paper provides a reason for preferring mentalism, the doctrine that what’s rational to believe supervenes on one’s mental states, to accessibilism, the doctrine that what’s rational to believe supervenes on the states that are accessible to one. The argument is that there are intuitively good guesses which turn on inaccessible states, as in inattentional blindness experiments. And if information which is received but not accessible can be the basis for rational guessing, it can be the basis for rational believing. I’ll make three brief comments: one about that last conditional, one about whether this argument generalises further than Alexander might want it to, and one about the notion of a ‘good guess’.

First, I think there is space for the accessibilist to concede the point about guesses, but deny that it is relevant to belief. Belief is, as Pamela Hieronymi and Jane Friedman have stressed, a settling attitude. To believe p well, one must not only have considerably more evidence for p than for ¬p; one must also have reason to regard the question of whether p as settled. In typical cases, and we might want to fuss about the atypical cases but let them slide for now, this means having reason to close off inquiry into p, and not go looking for more evidence one way or the other. Could inaccessible information be a reason to close inquiry in this way? I’m not an accessibilist, so I’m not the best judge, but it feels like there is something for the accessibilist to work with here.

Set that aside, and let’s go with the idea that Alex’s argument works, and indeed let’s take on board the nice idea that thinking about the rationality of guesses is a good way to get novel insights into the rationality of beliefs. Alexander argues that this means mental states which are inaccessible are relevant to rationality. What I want us to think about for a bit is whether the same argument shows that non-mental states which are inaccessible are also relevant to rationality.

The first thing to note is that humans have ways of taking in, and responding to, information that do not involve what we normally think of as mental states. I touch a hot stove (maybe the one labeled Trade War with China), and pretty quickly pull my hand away. Why? Well, I get the information that it’s hot, and (as I understand the science) that information leads to action without involving my brain. The signal from the hand maybe makes it as far as my spinal cord when the return message, Stop doing that you idiot, comes flying back down. We can respond to information not just before it’s accessible, but before (I’d say) it’s even cognised.

This might be relevant to real-world cases. Expert firefighters know to leave a building when the floor is spongy. What makes this an expert skill is that in any burning building the floor is a bit spongy; the experts apparently have the ability to tell the difference between what’s normal (or at least normal given that the building is on fire) and a sign that the building is about to collapse. And, at least as I understand it, experts aren’t very good at explaining to novices exactly what they are tracking. This is possibly because part of the information they are getting, mostly through their feet, is inaccessible.

Here’s a question, one which I think turns on both empirical and conceptual issues. Is it possible that an expert firefighter could rationally guess that the building is about to collapse on the basis of information from their feet that is not just inaccessible, but hasn’t even been cognised? If so, we’d have a case where Alex’s kind of argument didn’t just undermine accessibilism, it undermined mentalism.

Maybe you think that’s far-fetched, or maybe you think that these signals are mental in the relevant sense. So let’s try one other example. I see an F, and immediately infer that it’s a G. I infer this because I have a hard-wired disposition to make such inferences, and this is hard-wired because it’s an adaptation; in fact, around here, Fs have historically been Gs. Here there is a mental state - that thing is F - but what grounds the rationality of the guess that it’s G is not just my mental state, but the evolutionary history, and the fact that Fs around here are Gs. My twin earth counterpart, in a world where Fs are not Gs, would be irrational to make the same guess. Again, it feels like there are problems for mentalism here.

The big picture is one I’ve long worried about. Mentalism looks like an unstable resting point between accessibilism and externalism. Once we’ve rejected accessibilism, and said inaccessible mental states matter to rationality, we may as well say that inaccessible bodily states, or inaccessible evolutionary history, or, as the reliabilist says, inaccessible environmental correlations, are also relevant.

OK, last point. What’s a good guess? I think there are two distinct notions here that are worth teasing out. I think Alex’s argument goes through on either understanding, so I’ve left this to last. But I think it’s possibly relevant to the broader literature he’s placing his paper in.

Imagine a third guesser about Arenado. This person says that Arenado isn’t young, so we should discount our guesses because of his aging. How much of a discount? Well, he’s not that old, so let’s say 5%. That gives us a guess of 28.5 (i.e., 30 × 0.95).

Question: Could that be a better guess than 30?

Answer 1: No, of course not. Good guesses have some probability of being true. And Arenado will 100% definitely not hit 28.5 home runs.

Answer 2: Sure. What a good guess does is minimise something like expected distance from the truth. And possibly the guess that minimises that is 28.5.

I think in ordinary talk we use both notions - probability maximising and expected mistake minimising. If anything I think the second is a bit more common, but I’m not sure. The philosophical question I have no idea about is whether there’s a fact of the matter about which notion ‘good guess’ normally picks out, or whether there’s just an ambiguity here. I guess that this is just an ambiguity, and there are two notions we should be careful not to confuse. As I said, I think Alex’s argument works on either understanding. I’m a little less sure that’s true for some of the arguments he alludes to in the paper involving guessing.
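
To make the contrast concrete, here is a minimal sketch in Python, using a made-up distribution over Arenado’s home run total (the numbers are purely illustrative, not from Alex’s paper). It computes the probability-maximising guess and the guess that minimises expected absolute distance from the truth, and the two come apart.

```python
# A toy illustration of the two notions of a 'good guess'.
# The distribution below is invented purely for illustration; it is not
# from Alex's paper and is not a real projection for Arenado.

dist = {0: 0.05, 10: 0.05, 20: 0.10, 25: 0.15,
        28: 0.20, 30: 0.25, 32: 0.12, 35: 0.08}   # P(exactly k home runs)

# Notion 1: the guess with the best chance of being exactly true (the mode).
best_by_probability = max(dist, key=dist.get)       # 30

# Notion 2: the guess minimising expected distance from the truth.
def expected_distance(guess):
    return sum(p * abs(k - guess) for k, p in dist.items())

candidates = [g / 2 for g in range(0, 81)]          # 0, 0.5, 1, ..., 40
best_by_distance = min(candidates, key=expected_distance)   # 28.0 here

print(best_by_probability, best_by_distance)        # the two notions come apart
```

With absolute distance the minimiser is a median of the distribution, and with squared distance it is the mean; neither need be the most probable value, which is why the two notions can recommend different guesses.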

Thanks again to Alexander for a really interesting paper, and I’m looking forward to the Q&A.