Induction and Supposition

Keywords: epistemology, induction, scepticism

An argument that we should not treat rules of inductive inference in ordinary life as being anything like the inference rules in natural deduction systems.

Brian Weatherson (http://brian.weatherson.org), University of Michigan (https://umich.edu)
May 1, 2012

Here’s a fairly quick argument that there is contingent a priori knowledge. Assume there are some ampliative inference rules. Since the alternative appears to be inductive scepticism, this seems like a safe enough assumption. Such a rule will, since it is ampliative, licence some particular inference of the form ‘From \(A\), infer \(B\)’, where \(A\) does not entail \(B\). That’s just what it is for the rule to be ampliative. Now run that rule inside suppositional reasoning. In particular, first assume \(A\), then via this rule infer \(B\). Now do a step of \(\rightarrow\)-introduction, inferring \(A \rightarrow B\) and discharging the assumption \(A\). Since \(A\) does not entail \(B\), this conditional will be contingent, and since it rests on a sound inference with no (undischarged) assumptions, it is a priori knowledge.
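In the Lemmon-style notation used below, and writing ‘AR’ as a label (my label, just for exposition) for whatever ampliative rule is assumed, the schematic proof looks like this:

\[\begin{aligned} 1 && (1) && &A && \text{assumption} \\ 1 && (2) && &B && \text{(1), AR} \\ && (3) && &A \rightarrow B && \text{(1), (2), CP}\end{aligned}\]

Line (3) rests on no undischarged assumptions, which is what licenses the claim to apriority.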

This argument is hardly new; John Hawthorne (2002) suggested a similar argument ten years ago. But it is a quick argument for a striking conclusion, and deserves close scrutiny. I’m going to argue that it fails because it falsely assumes that we can treat rules of ampliative inference like rules in a natural deduction system, and hence as rules that we can apply inside the scope of a supposition. That assumption has recently been defended by Stewart Cohen (2010) and Sinan Dogramaci (2010), but I’m going to argue, using a construction similar to one found in Dogramaci, that it leads to absurdity given other plausible premises.

Here’s the main argument. If any ampliative inference is justified, I think the following rule, which I’ll call ‘IR’, is justified, since it is a very weak form of inductive inference.

IR

From ‘There are infinitely many \(F\)s, and at most one is not \(G\)’ and ‘\(x\) is \(F\)’, infer ‘\(x\) is \(G\)’, unless there is some \(H\) such that it is provable from the undischarged assumptions that \(x\) is \(F\) and \(H\), and that there are finitely many things that are both \(F\) and \(H\), and one of them is not \(G\).

Note that the rule doesn’t say merely that one \(F \wedge \neg G\) has been observed; it requires that at most one such thing exists. So this seems like a very plausible inference; it really is just making an inference within a known distribution, not outside it. And it is explicitly qualified to deal with defeaters. And yet even this rule, when applied inside the scope of suppositions, can lead to absurdity.
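To see how weak IR is, here is a benign instance. (The particular predicates are my illustration, not part of the rule.) Let \(F\) be ‘is a natural number’, written \(N\) below, and \(G\) be ‘is greater than 0’. There are infinitely many natural numbers, and at most one of them, namely 0, is not greater than 0. So IR licenses:

\[\begin{aligned} 1 && (1) && &N7 && \text{assumption} \\ 1 && (2) && &7 > 0 && \text{(1), IR}\end{aligned}\]

And no defeating \(H\) is provable from the assumption \(N7\) alone, so the proviso is satisfied.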

In the following proof, we’ll let \(N\) be the predicate ‘is a natural number’, and \(Pxy\) mean ‘\(x\) is the predecessor of \(y\)’, and I’ll appeal to the facts that there are infinitely many natural numbers, and that each number is the predecessor of at most one number. I’ll use a version of the proof system in E. J. Lemmon’s Beginning Logic, but it should be easy to transform the proof into any other proof system.

\[\begin{aligned} 1 && (1) && &Na && \text{assumption} \\ 2 && (2) && &Nb && \text{assumption} \\ 1, 2 && (3) && &\neg Pab && \text{(1), (2), IR} \\ 1 && (4) && &Nb \rightarrow \neg Pab && \text{(2), (3), CP} \\ 1 && (5) && &\forall y (Ny \rightarrow \neg Pay) && \text{(4), UI} \\ && (6) && &Na \rightarrow \forall y (Ny \rightarrow \neg Pay) && \text{(1), (5), CP} \\ && (7) && &\forall x (Nx \rightarrow \forall y (Ny \rightarrow \neg Pxy)) && \text{(6), UI} \\ && (8) && &N2 \rightarrow \forall y (Ny \rightarrow \neg P2y) && \text{(7), UE}\end{aligned}\]

So we get the result that if 2 is a number (which it is!), then it is the predecessor of no number. But that’s absurd, since 3 is a number and 2 is its predecessor. Note that at line (3) we use rule IR with \(F\) being the predicate ‘is a natural number’, \(G\) being the predicate ‘does not have \(a\) as a predecessor’, and \(b\) as the instance of \(x\). The defeater clause does not block this step, since from the undischarged assumptions \(Na\) and \(Nb\) alone no suitable \(H\) is provable.

What could have gone wrong? I think the problem is using IR in the context of a suppositional proof, as we’ve done here. But let’s check whether one of the other rules might instead be to blame.

If the problem is Conditional Proof (CP in Lemmon’s system), that is just as damaging to the opening argument for contingent a priori truths as if the problem were IR, since that argument also relies on CP to discharge its supposition. Since we’re interested in whether that argument works, we won’t investigate this option further. In any case, if \(\rightarrow\) is material implication, the rule seems unobjectionable. A referee suggested that if we’ve used an ampliative rule earlier in a proof, then \(\rightarrow\) should be read as something weaker than material implication, and under that interpretation lines (5) through (8) may be plausible. I think that claim is basically right. But note that if we read \(\rightarrow\) this way, the argument for contingent a priori knowledge with which I started will fail, since the contingency of \(A \supset B\) will not imply the contingency of \(A \rightarrow B\) if \(\rightarrow\) is weaker than \(\supset\).

It is hard to imagine that Universal Elimination (UE) is the problem. In any case, line (7) is already absurd, so something must have gone wrong in the proof before UE is applied.

Perhaps the problem is with Universal Introduction (UI); this is what Dogramaci suggests. One objection he offers is that although we can prove every instance of the universally quantified claim, inferring the universal claim itself creates an undue aggregation of risks. Even if line (4) is very probable, and it would still be probable if \(a\) were replaced with \(c\), \(d\) or any other name, it doesn’t follow that the universal at line (5) is very probable. But I think this is to confuse defeasible reasoning with probabilistic reasoning. The only way to implement this restriction on inferences that aggregate risk would be to prevent us from making any inference whose conclusion is less probable than its premises. That would rule out uses of \(\forall\)-introduction as at (5). But it would also rule out \(\wedge\)-introduction, and indeed any other inference with more than one input step. To impose such a restriction would be to cripple natural deduction.
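To make the risk-aggregation point concrete (the numbers here are my illustration): if \(A\) and \(B\) are independent and each has probability 0.9, then

\[\Pr(A \wedge B) = \Pr(A) \times \Pr(B) = 0.9 \times 0.9 = 0.81 < 0.9,\]

so even the utterly uncontroversial step of \(\wedge\)-introduction yields a conclusion less probable than either premise.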

Another objection he offers is simply that (UI) is the least plausible, or least intuitive, of the rules used here. But in fact (UI) is extremely intuitive: if we can prove every instance of a schema, we should be able to prove its universal closure. On the other hand, allowing ampliative rules to be used inside the scope of a supposition allows a quick proof of contingent a priori knowledge, as shown in the first paragraph. Now maybe there is such knowledge, but its existence is hardly intuitive.
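For reference, the rule at issue, in roughly the form Lemmon gives it, is:

\[\frac{\varphi(a)}{\forall x\, \varphi(x)} \qquad \text{provided } a \text{ does not occur in any assumption on which } \varphi(a) \text{ depends.}\]

Both applications in the proof satisfy the side condition: at line (5), \(b\) does not occur in the assumption \(Na\) on which line (4) depends, and at line (7), line (6) depends on no assumptions at all.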

So I conclude that the weakest link in the proof is line (3). Although IR is a good rule, it can’t be used inside the scope of a supposition. And since IR is about as weak an inductive rule as we can imagine, I conclude that ampliative inference rules can’t in general be used inside the scope of suppositions.

The general lesson here, as Gilbert Harman (1986) made clear many years ago, is that there is a difference between rules of inference and rules of implication. The quick proof that there’s contingent a priori knowledge uses a rule of inference as if it were a rule of implication. Not respecting the distinction between inference and implication leads to disaster, as we’ve seen here, and should be shunned.

References

Cohen, Stewart. 2010. “Bootstrapping, Defeasible Reasoning and a Priori Justification.” Philosophical Perspectives 24 (1): 141–59. https://doi.org/10.1111/j.1520-8583.2010.00188.x.
Dogramaci, Sinan. 2010. “Knowledge of Validity.” Noûs 44 (3): 403–32. https://doi.org/10.1111/j.1468-0068.2010.00746.x.
Harman, Gilbert. 1986. Change in View. Cambridge, MA: Bradford.
Hawthorne, John. 2002. “Deeply Contingent a Priori Knowledge.” Philosophy and Phenomenological Research 65 (2): 247–69. https://doi.org/10.1111/j.1933-1592.2002.tb00201.x.
