Nozick on rules of rationality

Nature of Rationality (Princeton, 1993), p. 76:

If the rationality of a belief… is a function of the effectiveness of the process that produces and maintains it, then there is no guarantee that optimal processes will employ any rules that are appealing on their face. Those processes instead might involve scorekeeping competition among rival rules and procedures whose strengths are determined… by each rule's past history of participation in successful predictions and inferences. None of these rules or procedures need look reasonable on their face, but, constantly modified by feedback, they interact in tandem to produce results that meet the desired external criteria (such as truth). The theory of that process, then, would not be a small set of rules whose apparent reasonableness can be detected so that a person then could feasibly apply them but a computer program to simulate that very process. More radically, it is possible that no rules are symbolically represented, even in a weighted competition, but that any "rule" emerges as a regularity of behavior of a parallel distributed processing system whose matrix of weights determining activation of output or intermediate vectors is modified repeatedly by some error-correction rule. If the most effective processes for reaching cognitive goals are of these kinds, then the type of normative rules that philosophers have sought to formulate will not succeed in demarcating rationality of belief. They will not themselves be components of that process, and conscious application of them will not be the best route to true (or otherwise desirable) belief.
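The "scorekeeping competition" Nozick describes can be sketched mechanically: rules carry strengths, the strongest rule wins each round, and feedback on the outcome adjusts that strength. This is a minimal illustrative sketch, not Nozick's formulation; the class name, the winner-take-all selection, and the multiplicative update are all assumptions made here for concreteness.

```python
class RuleCompetition:
    """Rival rules compete by strength; feedback on results adjusts strengths."""

    def __init__(self, rules, lr=0.1):
        # rules: dict mapping a rule name to a prediction function
        self.rules = rules
        self.strength = {name: 1.0 for name in rules}
        self.lr = lr

    def predict(self, x):
        # the currently strongest rule wins the competition and makes the call
        winner = max(self.strength, key=self.strength.get)
        return winner, self.rules[winner](x)

    def feedback(self, winner, correct):
        # multiplicative update: success raises a rule's strength, failure lowers it
        self.strength[winner] *= (1 + self.lr) if correct else (1 - self.lr)
```

Run over many trials, a rule that happens to predict well accumulates strength and comes to dominate, whether or not it "looks reasonable on its face" — the external success criterion does all the work.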

Nature of Rationality, pp. 77-8:

Suppose that the most reliable processes of arriving at belief involve such scorekeeping procedures and continual revision of weights; the only rules are ones that determine what enters the competition with what strengths and how these strengths get modified by the ascertained results of using the competition's winner… Even so, it might be said, the philosophers' principles have an illuminating function, namely, to describe the output of these feedback processes… Some principles may define cognitive goals and hence specify the target at which the processes are aimed, but the (extant) principles might not further describe that output any more illuminatingly. In advance, we will not know whether various philosophical principles (that do not just define the cognitive goals) will accurately describe the output of the most effective processes for achieving those goals.

For instance, consider the frequently proposed normative requirement that a person's body of beliefs be consistent and deductively closed… Perhaps the most effective procedures for arriving at a high ratio of truths (and relatively few falsehoods) will yield a set of beliefs that is inconsistent. Hence, if that high ratio of truths is to be maintained, the set of beliefs had better not be deductively closed. To require in advance that no rules be used for generating beliefs that are known to lead (in some circumstances) to inconsistency might prevent us from arriving at a very large number of true beliefs. When such rules are used, though, steps must be taken to limit the consequences of the inconsistencies that might arise. We may look for an acceptable way to avoid inconsistency, but in the meantime we will engage in damage control, isolating inconsistencies and taking steps not to infer any and every arbitrary statement from explicit contradictions. In everyday life we do this with equanimity — we happily acknowledge our fallibility and affirm that one of our beliefs, no doubt, is false. Science embodies a strong drive to attain consistency, but scientists too will bide their time, refusing to renounce the many exact predictions of a theory known to generate inconsistencies or impossible values (witness the use of "renormalization" in quantum-mechanical calculations).

Nature of Rationality, pp. 89-91:

In the lottery paradox, we have the statement that one ticket among the million will win and the individual statement for each ticket that it will not win. Any two of these million and one statements are pairwise consistent — both can be true together — but not all of the million and one statements can hold true. These form an inconsistent set.

We have not required, though, that the total set of beliefs be consistent, merely that beliefs be pairwise consistent. If you want a very high ratio of true beliefs, believe each of the million and one statements; you will be right one million times. "But if the set is inconsistent, you know that you definitely will be wrong one time," someone will object. True, but in another case mightn't I be rational in choosing to have my beliefs formed by a process that I know will give me one false belief every million and one times, not as a matter of logic — these million and one beliefs all are consistent — but as a matter of fact? How are things changed, how is the desirability of following the belief-forming procedure changed, if the one error is guaranteed as a matter of logic, because the beliefs are inconsistent? To be sure, when I know the beliefs are inconsistent, I had better be sure not to use them all as premisses in an argument that might play upon the inconsistency; but this is a matter of isolating the results of the inconsistency.
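The arithmetic behind "you will be right one million times" can be checked directly. A small sketch, with the winning ticket and the belief labels chosen here purely for illustration:

```python
N = 1_000_000  # tickets in the lottery

# The million and one beliefs: "some ticket wins", plus, for each ticket i,
# "ticket i will not win". Suppose ticket 0 in fact wins.
winner = 0
beliefs = [("some ticket wins", True)]
beliefs += [(f"ticket {i} will not win", i != winner) for i in range(N)]

true_count = sum(1 for _, truth in beliefs if truth)
# Believing all N+1 statements yields exactly N true beliefs and one false one:
# a truth ratio of 1,000,000 / 1,000,001, though the set is jointly inconsistent.
```

Any two of the beliefs can be true together, so the set is pairwise consistent; only the whole collection fails, which is exactly the case Nozick says the consistency requirement should not rule out.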
If our beliefs may be inconsistent… how are we to isolate the damage such inconsistency might cause? It is well known that from an inconsistency any and all statements can be deduced by using standard logical rules of inference…
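The "well known" derivation is ex falso quodlibet: from P and ¬P, any Q follows by the standard rules (introduce P ∨ Q from P, then eliminate the left disjunct with ¬P). A minimal sketch of both routes in Lean:

```lean
-- Ex falso quodlibet: a contradiction yields any statement Q.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp

-- The same result via the standard inference rules Nozick alludes to:
-- disjunction introduction followed by disjunctive syllogism.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  Or.elim (Or.inl hp : P ∨ Q) (fun p => absurd p hnp) id
```

This is why an unisolated inconsistency is catastrophic for a deductively closed believer: closure under these rules escalates one contradiction into belief in everything.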

There are various devices one might use to avoid this escalation of belief. I suggest that for belief legitimately to be transferred from premisses to conclusion in a deductive inference not only must each premiss be believed but also the conjunction of the premisses must be believed. (Or, at least, the conjunction of the premisses must not be disbelieved.)
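Nozick's proposed device — belief transfers across a deduction only when each premiss is believed *and* their conjunction is believed — can be sketched as a gate on inference. A toy illustration, not Nozick's own formalism; the function names and the five-ticket miniature lottery are assumptions made here:

```python
def may_transfer_belief(premisses, believed, believes_conjunction):
    """Nozick's condition: belief passes from premisses to conclusion only if
    every premiss is individually believed AND the conjunction of the
    premisses is also believed (or at least not disbelieved)."""
    return (all(p in believed for p in premisses)
            and believes_conjunction(tuple(premisses)))

# Miniature lottery: each "ticket i loses" is believed individually,
# but the conjunction of ALL of them is disbelieved.
believed = {f"ticket {i} loses" for i in range(5)} | {"some ticket wins"}
believes_conj = lambda ps: len(ps) < 5  # toy stand-in for conjunction beliefs

small_inference = ["ticket 0 loses", "ticket 1 loses"]          # allowed
paradoxical = [f"ticket {i} loses" for i in range(5)]           # blocked
```

The gate lets ordinary deductions through while blocking exactly the inference that would play upon the lottery set's joint inconsistency — damage control built into the transfer rule itself.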