Thursday, 3 July 2014

The Expectancy Paradox

One scientist has a theory that people will countenance - or even commit - cruelty if it's approved by some kind of authority figure. That theory is intuitively very plausible. However, the theory itself isn't the point of the paradox.

The scientist in question carries out an experiment on ten people (his subjects).

The experiment is designed to see whether the ten subjects will press a button that delivers an electric shock when ordered to do so by an authority figure (and given a suitably rational or “scientific” reason for doing so).

However, unbeknownst to him, it's the scientist who's the real subject of the experiment. That is, his ten subjects aren't being tested – he is! In fact the ten subjects know what's going on, and the electric shocks involved are fake. The scientist, on the other hand, doesn't know what's going on. He only knows about his own experiment, not the experiment upon him.

Again, this paradox isn't about the nature of meta-tests, or even about why - or if - people really do commit acts of cruelty when told to do so by authority figures. This paradox is actually about the “fudge factor” or “experimenter bias effect”. In other words, the scientist is testing his research students, while these meta-testers are testing the scientist who's testing his research students.


The phrases “fudge factor” and “experimenter bias effect” actually refer to the fact that when a researcher or scientist expects to “discover” or find a certain result, he's very likely to get that result. (Needless to say, when the research or science involves anything which is in any way political in nature, this is even more likely to be the case.)

Take two such scientists, running the same experiment with opposite expectations. In Professor Smith's case, his “experimenter bias” leads him to expect (or want) his subjects to “turn Nazi” and be all too keen to press the electric-shock button. And it turns out that most of them do.

Professor Arse, on the other hand, expects (or wants) his subjects not to turn Nazi - that is, not to be all too willing to press the electric-shock button. And, yes, most of them don't!

In other words, each scientist has set up the experiment to get the results he wants (for political, psychological, scientific, etc. reasons). That is, both scientists are fudging - if only at an unconscious level.

This bias (or fudging) is brought about in this case by Professor Smith shouting at his students/subjects to press the button. In other words, he wants them to press the button in order to “prove” his theory. Professor Arse, on the other hand, whispers the command to press the button because he doesn't really want them to.

It's said that neither scientist is aware of what he's doing. However, what they're both doing is bringing about a self-fulfilling prophecy. (Despite that, I find it hard to accept that they literally know nothing about what they're doing; if that were truly the case, then surely they wouldn't be doing it.)

In any case, these experimental and political biases of the two professors were themselves the subject of an experiment by other (meta-)scientists. Yes, you guessed it: if the bias of the two scientists is proven or demonstrated by these meta-scientists, and they've also concluded that such bias is widespread (even) in science, then what of themselves? Are they also biased? Indeed, was this experiment itself, on other scientists, a perfect example of the “experimenter bias effect” or the “fudge factor”? Are these meta-scientists as biased as (or even more biased than) the scientists they were experimenting upon? And if not, why not?

The point is this: just as the object-scientists were testing people's reactions to the orders of authority figures, so the meta-scientists were testing the object-scientists in order to elicit the reality or nature of scientific bias. In other words, are these meta-scientists being biased about the nature or reality of scientific bias?

This problem can be put in its logical form:
i) The meta-scientists concluded that much - or all - scientific research or testing involves at least some element of bias.
ii) This piece of meta-science is also a piece of scientific research and testing. Therefore it must have contained at least some element of bias. (A schematic version of the argument follows.)
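
To make the self-application explicit, here's one way of schematising the argument (my own notation, not Poundstone's, and reading the meta-scientists' conclusion in its strongest, “all”, form): let S(x) mean “x is a piece of scientific research or testing”, let B(x) mean “x involves at least some element of bias”, and let m name the meta-study itself.

\[
\begin{aligned}
&\text{(i)}\quad \forall x\,\bigl(S(x) \rightarrow B(x)\bigr)\\
&\text{(ii)}\quad S(m)\\
&\text{(iii)}\quad \therefore\; B(m)
\end{aligned}
\]

The meta-study falls under its own generalisation, and that's all the paradox needs to get going.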

But does scientific bias also mean scientific invalidity? If it does, then this piece of meta-science is scientifically invalid: just as invalid as the tests on the button-pressing subjects.

In addition, if scientific bias means scientific invalidity, does that also mean that scientists, and laypersons, have no reason to believe a word of what these meta-scientists have to say about their meta-scientific test or experiment? The problem is: intuitively both the object-test and the meta-test must contain at least some truth or accuracy! Indeed they may well contain a lot of truth or accuracy. This paradox seems to lead to the result that both tests should be rejected. Yet surely that can't be the case!

The other conclusion is that we must simply learn to live with a certain degree of scientific bias, just as we may well do in all other areas of life. Sure, if the bias is spotted, then rectify it. However, it's misguided, and even illogical, to expect zero bias. In fact we may even say that it's unscientific to expect (or want) zero bias in science (or anywhere else, for that matter).

William Poundstone suggests making a distinction between falsehood and invalidity. He seems to mean that even if bias is a fact, it's still the case that what the test or theory says in the end is simply either true or false. Or, as he puts it,

“if the experiment is merely invalid (through careless procedure, lack of controls, etc.), its result may be true or false”. (130)

Bias (or invalid procedures) may still lead to truth (just as a true conclusion can be the result of bad reasoning or bad science (or of no science at all)).
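
Put in bare symbols (my own shorthand, not Poundstone's): write V(e) for “experiment e is procedurally valid” and T(e) for “e's result is true”. The quoted point is simply that invalidity settles nothing about truth-value:

\[
\neg V(e) \not\Rightarrow \neg T(e) \qquad \text{and} \qquad \neg V(e) \not\Rightarrow T(e)
\]

That is, an invalid experiment's result may turn out to be true or false; the procedure and the result are assessed separately.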

You can ask, however: if there is bias (or an invalid procedure), how could it possibly lead to truth? And if it did lead to truth, surely it would only do so accidentally or coincidentally.

For example, the Pythagoreans believed that the earth was a rotating globe - though they believed it for all the wrong reasons.

However, we're talking about scientific experiments here, not Pythagorean speculation or philosophy. In a scientific sense, if an invalid procedure or bias leads to truth, that would be of little interest to science or scientists. Indeed typical scientists (if they exist) would be hard-pressed to make sense of an invalid procedure or bias leading to scientific truth. Then again, perhaps they too are biased; or perhaps their stance on science is philosophically naïve.

Poundstone offers two logical alternatives to this paradox:
i) We can assume that the meta-study is valid. (That is, what leads up to the results and the result itself are valid and sound/true.)
ii) Yet if its result is true, then all (other) studies/tests are invalid and unsound.
iii) And if all tests are invalid, then this meta-test is also invalid and so can't produce a true result.

In other words, we're led to the conclusion that we can't take its conclusion as true – even if we wanted to.
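
Schematically (again my own rendering, writing V(x) for “test x is valid”, and granting - for this horn - that bias amounts to invalidity, so that the meta-test m concludes that no test is valid):

\[
\begin{aligned}
&\text{(1)}\quad V(m) && \text{assumption}\\
&\text{(2)}\quad V(m) \rightarrow \forall x\,\neg V(x) && \text{if } m \text{ is valid, its conclusion holds}\\
&\text{(3)}\quad \forall x\,\neg V(x) && \text{from (1) and (2)}\\
&\text{(4)}\quad \neg V(m) && \text{instantiating (3) at } x = m
\end{aligned}
\]

The assumption of validity undermines itself, which is why the conclusion can't be taken as true on this horn.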

Alternatively:

i) We can assume that the meta-test is invalid.
ii) However, even though the meta-test can be seen as invalid, its result can still be seen as true.
iii) And, by inference, if the object-test is also invalid (like the meta-test), then its results can likewise still be seen as true. (A schematic version of this follows.)
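
In the same shorthand (V for validity, T for truth of a result, m for the meta-test, and o - my own label - for the object-test):

\[
\begin{aligned}
&\text{(1)}\quad \neg V(m) && \text{assumption}\\
&\text{(2)}\quad \neg V(e) \not\Rightarrow \neg T(e) && \text{invalidity doesn't entail falsehood}\\
&\text{(3)}\quad T(m)\ \text{remains possible} && \text{from (1) and (2)}\\
&\text{(4)}\quad T(o)\ \text{remains possible, even if}\ \neg V(o) && \text{by parity of reasoning}
\end{aligned}
\]

Nothing here shows that the results are true; only that invalidity alone doesn't rule truth out.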

Here, then, we have two cases of invalid procedures leading to true results. Though, as I said earlier, what's the point of having scientifically true results (or theories) alongside scientifically invalid or unsound procedures/experiments/tests? And, as I also said, scientists will surely claim that you can't have the pairing of invalid or biased procedures/tests with true results... or can you?
