Wednesday, 23 July 2014
Knowledge, Justification & the Lottery Paradox
"The greater the number of losing tickets, the better is your justification for believing you will lose. Yet there is no number great enough to transform your fallible opinion into knowledge – after all, you just might win. No justification is good enough – or none short of a watertight deductive argument…" (David Lewis, 504)
This scenario displays the violent disjuncture between knowledge and justification.
Just think about it.
Say that there are a billion tickets. It's still possible, both logically and epistemically, that you could win the jackpot. You would nonetheless be profoundly justified in believing that you will lose. But you wouldn't have knowledge that you will lose because you may, after all, win.
So why not say that you will probably lose, rather than that you will lose? This introduction of probability still involves the notion of justification: that is, if it is highly probable that you will lose, then you are justified in that belief. However, it doesn't turn the absolute claim that you will lose into a case of knowledge. Introducing probability into the equation, in other words, still doesn't give us an example of knowledge.
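The arithmetic behind this can be made explicit. Here is a minimal sketch (illustrative only; the billion-ticket figure comes from the example above, and a fair one-winner lottery is assumed), showing that the probability of losing gets arbitrarily close to 1 as the number of tickets grows, yet never reaches it:

```python
from fractions import Fraction

def probability_of_losing(num_tickets):
    """Probability of losing a fair lottery with one winning ticket."""
    return 1 - Fraction(1, num_tickets)

# With a billion tickets, the probability of losing is overwhelming...
p = probability_of_losing(10**9)
print(p)         # → 999999999/1000000000
print(float(p))  # ≈ 0.999999999

# ...but for no number of tickets, however large, does it ever equal 1:
assert all(probability_of_losing(n) < 1 for n in (10, 10**6, 10**9, 10**12))
```

No matter how many losing tickets are added, the shortfall from certainty never vanishes; that is exactly the gap the paradox exploits.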
So it's not just that justification in this particular example about tickets isn't good enough. It's that no "justification [would be] good enough". There is no limit at which justification suddenly turns into knowledge. Knowledge and justification, in this scenario - and perhaps in all scenarios - are torn asunder. In this example, knowledge isn't adequately justified opinion or belief. Knowledge and justification are two different kinds.
So instead of thinking that adequate justification will give us knowledge, perhaps we should, in some cases defined by context, simply throw justification overboard. Or, as Lewis puts it, “justification is not always necessary” (504).
David Lewis gives his own examples of knowledge that doesn't require justification. For example, he asks:
“What (non-circular) argument supports our reliance on perception, on memory, and on testimony?” (504)
Our use of perception in the past justifies our use of perception in the present. Yet this is, of course, a circular argument: we are using past instances of perception to justify our use of perception in the present. And if past uses of perception hadn't been justified, then present uses wouldn't be justified either. It could be, of course, that the past uses of perception were indeed justified; though only by still earlier uses of perception. If that's the case, we have a finite regress on our hands - a regress that perhaps goes back to our initial reliance on perceptions, which were not, of course, justified.
Exactly the same remarks hold for our reliance on memory and on testimony.
Past uses of memory have proved either to correctly represent past actualities or to have helped us cope with present actualities and exigencies. Though how did we know that those past uses of memory were in fact correct representations of past actualities? Was it because we thought the same about still earlier uses of memory?
Lewis gives another interesting example of unjustified knowledge. (Putting it that way makes such knowledge sound odd.)
He argues that we know things even though "we don't even know how we know" (504). That is, it could be the case that we no longer have "supporting arguments" to justify our knowledge-claim: at one time we had supporting arguments for P, but we've since forgotten them.
So P was justified at time t, though it isn't justified at time t1 (the present). But if P was adequately justified at time t, it needn't be justified again at t1. This may be a question of time constraints or epistemic common sense. That is, if we had to re-justify all our bits of knowledge again and again, then we wouldn't have time to acquire new bits of knowledge or even function in the world.
For example, I firmly believe that Adolf Hitler was the dictator of Germany in the 1930s. Indeed I believe that this belief is in fact a knowledge-claim. However, I haven't justified this particular belief for some considerable time (perhaps for years). Perhaps other beliefs of mine are dependent upon - or related to - this belief about Hitler: that is, I have derived other beliefs or bits of knowledge from it. However, if I needed to continually re-justify my initial beliefs, then perhaps I wouldn't get around to justifying the beliefs that are dependent upon, related to, or derived from that initial set (which was, after all, justified at one time).
Perhaps this is an argument for either some kind of foundationalism or some kind of coherentism. It is foundationalist in the loose sense that I rely on certain beliefs not being continually re-justified in order to make way for new beliefs. However, the initial set of beliefs was in fact fully justified at one point, so it will not come under the rubric of self-evident or unjustified foundations.
However, the argument can also be deemed coherentist in that new beliefs depend upon, are related to, or are derived from initial sets of beliefs; and these initial sets may, in turn, depend on new beliefs as much as on other, older sets of beliefs. This is a clearly coherentist picture of the inter-relations within the whole set of beliefs, which itself includes subsets of beliefs and individual beliefs. In this whole there are no genuinely foundational beliefs that take the weight of all the non-foundational beliefs above them. It is a coherent system of mutual support without a pyramidal structure.