Critical rationalism
I cut my teeth on this approach because I went to LSE in the 1960s and
met the key critical rationalist -- Popper (see some links like this
one on falsification, or this
more general and pretty bland one on the Great Man, or this
brief essay on the basics). Even more impressive was his sidekick
Gellner (a social philosopher of very broad scope -- see the Gellner homepage
-- but my favourite work is the hilarious demolition of linguistic
philosophy Words and Things
-- I have a brief extract here).
These two had a reputation for being really spiky critics. We can
borrow some of the techniques for our much more
modest task.
Critical rationalists like plain speaking, simple argument and above
all a clear commitment to being capable of being wrong. Wearing your
critical rationalist hat, you will insist that complex argument is
translated into much more simple terms (or do this yourself of
course). If you do not understand some sentences it is NOT because you
are dense but because THEY have not formulated their views
clearly! Above
all you will want to ask yourself the key question about
any statements -- are they scientific or non-scientific? If they are
scientific they will actually be involved in making some concrete
prediction
about social events, now or in the future. This prediction will be
risky -- in other words, evidence might come in that shows it is wrong
(that it is falsifiable in the jargon). There is no shame in being
proved wrong. A calm objective scientific outlook will be equally
interested in the wrong result as in a right one -- because you can
learn from wrong results, think about where you went wrong, and try
again.
However, far too many spokespersons, experts, researchers and policy
makers do not want to risk being proved wrong and will do much to avoid
it. So they cover up. They
choose deliberately vague or obscure ways of saying what they believe,
quite often by shrouding everything in appalling jargon, pious hopes or
various pathetic appeals for us to trust them. They have all sorts of
secondary explanations up their sleeves just in case things do go
wrong, and they can explain away any failures or problems. For example,
things have changed in ways beyond their control, they haven't received
enough support, we haven't given it enough time, all it takes is belief
in ourselves, God moves in a mysterious way, it is we critics who are
to blame and why are we being so negative.
Critical analysts must not let them get away with this. They take
their sceptical eye to any general statements and ask what exactly is
being stated or predicted here, behind all the rhetoric. They demand
evidence. They try to think of a way in which statements could be put
to the test, and ideally proved to be wrong.
EXAMPLE: A sports development policy-maker is trying to drum up support
for public investment in hosting a mega-event. There is a lot of
flag-waving and appeals to our patriotism, and also a lot of appalling
management-speak about 'investment matrices', 'contingency management',
'drivers', 'double-loop learning' [is that still fashionable?] and
'rolling out micro-managed studies at the stakeholder interface'. Our
critical rationalist
patiently reads all this stuff looking for some concrete statements
about benefits to be gained and might even helpfully suggest some nice
specific tests:
How exactly are 'members of the community' to benefit -- will their
life expectancy increase?
Who exactly are 'members of the community', for that matter?
What would count as 'success' in sporting terms -- more medals per
entrant?
How could we be sure 'national prestige' had increased -- more
visitors to Britain?
Any veterans of methods courses could then
proceed to ask suitably
specific questions about samples (random? stratified random? sample
biases? response rates?), controlling variables and test design to
eliminate 'confounding' or 'spurious correlation', tests of significance
if any, whether the data can be used to test theories, whether any raw
data is included so we can check the interpretive processes, and so on.
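To make the idea of a risky, testable prediction concrete, here is a minimal sketch in Python. The numbers are entirely made up for illustration -- imagine two small surveys of community life expectancy, before and after the mega-event -- and the permutation test is just one simple way of asking whether an apparent gain could easily have arisen by chance:

```python
import random
import statistics

# Hypothetical, illustrative data only: life expectancy (years) from two
# imaginary community surveys, before and after the mega-event.
before = [78.1, 79.4, 77.8, 80.2, 78.9, 79.0, 77.5, 80.0]
after = [78.6, 79.9, 78.0, 80.5, 79.2, 79.8, 77.9, 80.3]

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sample permutation test on the difference in means.

    Returns the observed mean difference (b - a) and the proportion of
    label-shuffled datasets whose difference is at least as large -- a
    rough p-value for the falsifiable claim that `b` exceeds `a`.
    """
    rng = random.Random(seed)
    observed = statistics.mean(b) - statistics.mean(a)
    pooled = a + b
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[len(a):]) - statistics.mean(pooled[:len(a)])
        if diff >= observed:
            count += 1
    return observed, count / n_iter

obs, p = permutation_test(before, after)
print(f"observed gain: {obs:.2f} years, p = {p:.3f}")
```

The point is not the statistics but the attitude: the claim is stated so that a particular result (no gain, or one easily explained by chance) would count against it -- exactly the exposure to refutation the policy-maker's jargon avoids.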
In general, critical rationalists believe that the role of the critical
public is crucial, which is why material must be made publicly
available -- not just finished results but as much information as
possible about the empirical work including the design of the research
and its actual conduct (including things like how researchers
and/or coders were trained). The very useful BMJ checklist for health
evaluation studies here might
be generally applied, although we have to remember that the preferred
design in much health research is a very strict one -- the randomised
controlled trial.
Any approach that suggests we must just trust the researcher is
suspect. It is not that we do not trust researchers, but that no-one
can fully control their own implicit biases and preferences. Although
critical rationalism would not always choose a 'scientific',
quantitative design, it would expect a lot of disclosure from other
approaches like ethnography.