Reading Guide to: Kuhn, T. (1977) The Essential Tension, Chicago: University of Chicago Press.
NB there are some intriguing
examples of current (C21) scientific
disputes discussed in Barad,
Kirby or Schrader.
This book follows the more famous Structure
of Scientific Revolutions, which is
rich in historical example and detail. This
is a more reflective commentary looking back
on the success of that earlier book, and
taking on some criticisms. It offers a more
considered and thoughtful 'theoretical'
commentary on the important concepts, such
as paradigms.
I have left out a number of themes, including the constant and interesting parallels between the history of science as it is depicted here and Piaget's theories of learning.
Preface
We need to study science hermeneutically,
for example to reconstruct Aristotle's
context and culture in order to locate his
thoughts and to get at his whole system,
rather than trying to analyze particular
concepts. Progress in science is made from a
process of gestalt switch. There are some
possible connections with the social milieu,
but science is 'relatively insulated' from
that milieu. Scientific communities, together with their connections with wider systems of education and communication, should be the locus of research.
Scientific revolutions occur in thought,
usually preceded by some anomaly. Scientific
anomalies can be explained, but the explanation often itself has implications which make other parts of the theory problematic.
Revolutions however are rare. More common is
'normal science', which involves mopping up
problems, consolidating the position of
previous work, and sometimes preparing the
ground for new thought too (for example by
whittling away areas that are undecided, allowing anomalies to emerge, and sometimes
even producing anomalies directly).
Normal science consists of 'puzzle
solving', and this leads us to the concept
of paradigm. A paradigm consists of basic
concepts (such as 'force'), but these are
never defined explicitly. Instead, they are
demonstrated by teaching various standard
examples, which avoids speculating about
difficult and debatable definitions.
Paradigms therefore are best seen as the
standard examples which produce solutions to
similarly conceived problems. Paradigms can
be expanded through the publication of books
into a whole world view of a scientific
community. Science is therefore a group
product, and groups are held together by
value systems. These value systems need not
be total, explicit, or even free of
conflict, and they're not simply
deterministic. The commitments to normal
science are acquired through the use of
specialist language and the application of
paradigms to nature: the incommensurability
of paradigms is best seen as a translation
problem.
Chapter 2
This has a particularly good discussion on
the subjective elements of causal
explanations, although I have not recorded
detailed notes on it here.
Chapter 7
There are problems in establishing exactly
when discoveries are made, and the usual
approach is to identify individuals as
responsible. This is naively
individualistic. The argument is best
developed by looking at discoveries which
appear to be novel, that is not predictable
in advance by existing theory. Take the
example of the discovery of oxygen:
(a) Bayen produced a gas by heating the red
precipitate of mercury. He called this gas
'fixed air', and it was carbon dioxide.
(b) Later, Priestley performed the same
experiment, and this time noticed that
objects will burn in the gas that was
released, and he called this 'nitrous air'.
It was nitrous oxide.
(c) Priestley told Lavoisier of his
results, and Lavoisier repeated the experiment
which led to more tests of the gases that
were released, and the discovery of 'pure
air'.
(d) Priestley repeated the experiments
himself and discovered 'dephlogisticated
air'.
(e) This led to more work by Lavoisier, who
eventually concentrated the gas (oxygen) as
a separate component of air.
At which stage was oxygen actually discovered? At the stage of obtaining a pure sample? But Priestley did that at stage (d), although he wrongly identified what he had got. Even Lavoisier, who had the right identification, had not yet developed his theory by this time, so there was a gap
between his observations and the existing
theoretical explanations. A similar story
can be told about the discovery of Uranus.
The object was originally spotted by
Herschel, but wrongly identified as a comet
(and before that as a star). Only when it
failed to behave as a comet was it
identified as a planet. X-rays offer similar
complexities: their effects were seen before
Roentgen, and his contribution was to see
these effects as a result of a new kind of
radiation.
What these brief accounts all show is:
(1) There has to be an experimental
isolation of an anomaly
(2) This is usually accidental, and has
often happened before as well, sometimes as
a by-product of standard work
(3) Some individual recognizes the event as
an anomaly specifically
(4) Various instruments and concepts have
been developed to such a state that a
violation of expectations becomes noticeable
(5) The anomaly leads to further
conceptualisation, a pursuit of the problem
(6) The discovery is unexpected and reacts
back on existing theory. Thus Herschel's
discovery prompted new investigations and
disrupted the old view of the solar system,
permitting further discoveries. Roentgen's
discovery led to rethinking the old cathode
ray experiments which had failed to control
the effects of X-rays, the development of
new instruments, and thus the discovery of
more new types of radiation. The discovery of oxygen led not only to the discovery of new gases but to a new theory of combustion
[in his earlier work, Kuhn points out that
the old theory of combustion was by no means
immediately displaced, however. The debate
between phlogiston and oxygen theories of
combustion lasted more or less until the death
of Priestley].
Chapter 8
Measurement in science has an important
role, but historical investigation shows
some interesting points:
(a) Measurements never agree exactly with
predictions. Instead, there is an acceptable
range of error, although this changes. Thus
quite large discrepancies between
observations and theory were acceptable for the Ptolemaic school, but were construed as signs of a crisis for Copernicus.
(b) Measurement is an important part of
experimental technique, especially important
for the mopping-up operations of normal
science. However, it is not always easy to
resolve disputes or extend theories by
measurement -- some implications of the work
of both Newton and Einstein were very
difficult to actualize and measure. In these cases, scientists tend to offer, instead of precise measurement: a belief in the theory's potential; acceptable approximations; or a single successful experimental case (one observation of the bending of light served to buttress faith in Einstein).
(c) As precision in measurement develops,
anomalies are more likely to arise -- but,
as the examples above show, failure in
measurement does not usually constitute an
anomaly itself.
(d) Measurement does not offer an
independent test of theories, since
measurement is guided by theory. Effective
measurement is often defined as enabling
some correspondence between observations and
predictions to occur. Any discrepancies
usually lead to new measurement techniques
rather than new theories. Thus 'application'
of theory very rarely leads to new theories,
since experimental data often arises from
theory in the ways described. Further,
sometimes different theories can fit the
same data (and so data cannot be used to
decide between theories).
(e) Measurement is useful where there is a
theoretical crisis, where anomalies arise,
but many inconvenient measurements are
bypassed and seen as error. This is not a
purely rational process -- interesting
'errors' might be pursued if the effects are
outside the normal range, or if the data is
repeatable, or even if the results intrigue
the experimenter. Sometimes, such data can
be seen as an error which will be resolved
by better techniques, sometimes a
theoretical system can persist despite an
anomaly -- thus the erratic orbit of Mercury
was an anomaly for Newtonian physics for
years. Thus anomalies do not arise only
through measurement: sometimes a new
technique or a new instrument will throw up incompatible data, but this must be seen as contrary in some fundamental way to the background theory if it is to develop into an anomaly.
(f) There needs to be new theoretical work
as well, and this always arises from some
subjective recognition of faults with the
old theory. A crisis therefore may lead to
new empirical discoveries as prerequisites
for theoretical ones. However, the crisis is
usually fought off. Quantitative anomalies
are the hardest to resist, since qualitative
problems can be easily fudged -- for
example, phlogiston is supposed to escape
from metals when they are heated, but some
metals apparently gained weight; the
phlogiston men simply invented various ad
hoc hypotheses such as that phlogiston must
have negative weight; effective quantitative
techniques were able to show how much weight
had been gained and from where.
However, anomalies alone are insufficient,
and there must be a new theory available to
replace the old one. Measurement is
therefore best in helping to choose between
theories, and accuracy is often preferred even at the cost of qualitative loss (an example of such loss is provided by Newton's theory of gravity, which abandoned the question of the origins of gravity). Overall, quantification
is very useful in processes of professional
verification and in the problem of selection
of theories, but it arises only with
considerable efforts to theorise and in
connection with other factors affecting
choice.
In an appendix to this chapter, Kuhn
discusses social sciences. Briefly, there
can only be effective and productive
anomalies if there is an organized set of
expectations and some prior theoretical
consensus. Natural sciences meet these
conditions, but social sciences do not:
there is still substantial disagreement
about techniques and paradigms [and thus it
is very unlikely that decisive enough
anomalies can arise to help us move from one
'perspective' to another].
Chapter 9
Convergent thinking plays a crucial role in
the development of natural science --
divergent thinking is only useful if there
is some theoretical crisis. There are
extended periods of normal science which
require convergent thinking. A great
scientist has to converge, and commit
themselves to normal science -- and be
prepared to consider revolutionary and
divergent thinking as well. The same goes
for scientific communities. Convergent thinking is encouraged in the way in which science is taught, in the development of 'mental sets' [which is usually one area in
which science teaching has been condemned by
progressives -- but Kuhn sees the importance
of convergent thinking]. Convergent thinking
can produce innovations. Indeed, progress proceeds through a succession of consensuses, not by holding a series of competing approaches at the same time.
Convergent thinking helps the scientific
community focus on problems. There must be
some agreed background theory when
approaching a new field. Further, no one
investigates puzzles unless they are sure
initially that they can be solved by current
theory. Theory guides us to non-trivial
anomalies.
Of course, agreed theories must be
potentially flexible. Kuhn shifts to examine
the 'personality characteristics' of
revolutionary thinkers at this point. In an
appendix to this chapter, he also considers
the role of 'external social forces' in
assisting scientific convergence [not very
well -- he hints that these forces supply
problems for the scientific community, but
gives no details]. He seems to be arguing
that basic science, or the selection of mere
theoretical puzzles, is often inadequate to
solve these external problems. This failure
to deliver provides some social space for
innovators, who are classically marginal to
scientific communities anyway [a theme taken
up in the earlier book].
Chapter 11
This focuses on the debate between Kuhn and
one of his main rivals, Popper. The two
thinkers have much in common, since both
denied that science makes progress by
accretion, both see a role for revolutionary
change, both stress the disparities between
observations and theory, and both emphasise
the role of scientific tradition. But Kuhn
says there are some differences too, over
matters such as the role of tradition, or
the role of falsification. There are also
more fundamental differences in the whole
gestalts of the two:
(1) Statements and hypotheses are most
commonly tested within normal science for
Kuhn, relying on scientific tradition, and
operating mostly as puzzle solving. For
Popper, the focus is on problems with the
whole tradition, what Kuhn would call
anomalies, and here what is tested is the
ingenuity of the individual great thinker
rather than a paradigm.
(2) Popper's revolutionary overthrows are
very rare for Kuhn, and only where there is
some prior crisis. They are not
characteristic of most science, and can only
take place in conditions of extensive normal
science.
(3) Normal science offers the criterion of
demarcation [between proper science and its
rivals, such as astrology], rather than
attempts at revolutionary falsification.
Normal science is essential for progress, as
we have seen, unlike premature sciences
where there are competing consensuses. For
Kuhn, you only get a science where critical
philosophical discourses have been
abandoned! Popper's criteria of theory
choice [roughly, that only science is
falsifiable, and that falsification leads to
revolutionary overthrow] are really criteria
to guide the choice of metaphysical systems
[that is, philosophical theories of science
rather than science itself].
(4) Testing is rarely decisive. Decisive
tests arise only after frequent unsuccessful
attempts to resolve puzzles as above, and
they must be rooted in a background
tradition. Thus astrology is not a science,
but not because it is so vague as to be
unfalsifiable (Popper's position), but
rather because its failures of prediction
have no theoretical implications: because
there are so many uncontrollable and
difficult to determine variables, such as
the need to ascertain the exact date of
birth, the theory can always survive a lack
of successful prediction. In this way, the
explanation of the failure of prediction is
actually quite 'scientific' in astrology,
and there is a parallel with the way in
which 'measurement problems' are sometimes
used to explain the lack of predictive power
in natural sciences. Astrology, like
psychoanalysis and medicine, is unscientific
because it is a craft, based on a general
theory which lends plausibility, but which
is not precise enough to reveal anomalies.
Indeed there are no puzzles either, hence no
research and therefore no science. As a
result, Popper's demarcation criterion needs
revision. A science has conclusions which
are logically derivable from shared
premises, but in a specific form rather than
a general one. Even here though, the
specific form is only a sufficient
condition: other sciences have precise and
specific conclusions even if they are not
logically derivable.
(5) Theories are often replaced before
there is a decisive test. For example,
the Copernican system replaced the Ptolemaic one before
any decisive testing of the theory. It was
simply that the Ptolemaic system had ceased
to provide puzzles [like astrology].
(7) There is a logical asymmetry in
Popper's theory of falsification [Popper had
used this argument to defeat the inductivist view of science, like Hume's, whereby evidence is sought to confirm theories. Popper argues that no amount of confirming instances can logically prove a theory, whereas a single significant disconfirming case can falsify it]. In
Kuhn's view, an anomaly arises where
something has failed to be accounted for by
a theory, but falsification or refutation
implies a formal, completely certain and
unaccountable compulsion based on assent.
Not so for anomalies, which show that
theories can be endlessly reformulated or
'ad hoced' in an attempt to fight off
negative implications. This is indeed
recognised by Popper, but he gives us no
guidelines to account for falsification
episodes. If we examine his work on
demarcation, we find a theory is scientific
if and only if it includes 'observation
statements' (especially those singular
existential statements which refer to the
simple presence or absence of objects).
These have to be deducible from the theory,
for Popper, not actual observations
themselves. If observation statements are
crucial to falsification, then logical
disproofs are also available [for example,
the scientific community can be asked to
agree whether or not a bending of light is detectable].
However, Kuhn wants to question whether
scientific theories can ever be cast in this
form, and even if so whether this will
describe the actual logic of scientific
knowledge. Popper seems to want to talk of
knowledge as something empirical, so that
real observations can provide falsifiers,
but the problem remains -- when does logic
alone require the scientist to abandon the
hypothesis? Popper's work is really a
description of the ideology of science
rather than its practice, a description of
procedural maxims rather than actual
methodological rules.
There are problems with Popper's notion of
verisimilitude too. If theories are
formulated so that the logical consequence
is that they can be falsified, this assumes
that there is no revolutionary change in
background knowledge. In other words, this
implies a full articulation of scientific
knowledge, and definite, agreed rules of
application: in other words it implies
'normal science', indeed, normal science
without any puzzles.
(8) There are non-logical elements involved
in falsification, such as shared
recognitions and learned classifications.
These are often held implicitly, and
generalized explicitly only if there are
specific problems to be solved [Kuhn's
example here includes problems of teaching
science]. Falsification therefore involves a
theoretical and psychological decision to
infer problems of classification from an
observation. Scientists have to be prepared
to make a risky explicit definition, which
just does not square with the usual
pragmatic uses of theory. Scientific
criteria are defined with definite cases in mind, and it is only when something unexpected occurs that they become problems. Even then, different consequences
arise from these problems, not guided by
logic alone.
(9) So how are we to explain change in
scientific theories? It is not a logical
process alone, although there is sometimes a
logical component, often subsequently. There
is no logical or rational theory choice. The
actual process is still largely unknown,
although there are hints that the evolution
of a profession might be an important
process. Scientists largely prefer the gains yielded by quantitative solutions to
puzzles, although they sometimes do exclude
problems or leave them open. Is the
unification of theory a professional goal?
Explanatory power certainly does seem to be
a major goal, but explanation is
psychological and sociological, involving an
account of a value system or ideology, and
the development of institutions to transmit
it. Thus the approval of fellow scientists
is a major motivator for those trying to
solve puzzles [Kuhn suggests it is more
important than any actual practical outcome
-- see
Lyotard on the decline of the
'performativity criterion']. How group
unanimity is secured and maintained is the
main question.
Despite Popper's denials and his emphasis
on logic, science is 'subjective' after all.
Popper half recognizes this himself,
especially if we see his prescriptions as
moral imperatives for scientists rather than
as descriptions of actual procedures.
Chapter 12
This chapter contains some 'second thoughts
on paradigms'. The term was used in the earlier Structure of Scientific Revolutions mostly in two main senses -- to describe the global commitments of the scientific group, and to describe the commitments of subsets within the overall group. The two were confused, in terms of describing, for example, the pre-paradigm stage (now the
different schools in the pre-paradigm stage
can have paradigms themselves). Scientific
communities need to be researched first, as
a series of social networks [instead of
formal analyses of paradigms]. Variability
will occur, largely according to the
practices of scientific specialties, the
extent of their shared goals, the state of
the education system, the existence of channels for communication among members, the extent of consensus among them, and the importance of
shared references in classic texts. Such
commitment can take place at different
levels as well. In order to manage this
complexity, Kuhn suggests that we replace
the term paradigm with the term
'disciplinary matrix', and focus on selected
aspects of this matrix (symbolic generalizations, models and exemplars):
Symbolic generalizations are very important for logical and mathematical operations, and their existence or absence can also confer prestige. These
generalizations may appear in different
guises -- for example as 'laws', more
usually as 'law-sketches'. In these cases,
empirical contents affect generalizations
from the beginning. Laws may be shaped by
notions of elegance, or tacit knowledge, and
these criteria are taught through
characteristic problem-solving exercises.
Such laws are usually 'attached to nature',
not by deploying some definite, logical
object language, but via operational rules
or through some notion of 'correspondence'.
[Kuhn agrees that scientists may share the
belief that somehow correct usage of these
rules delivers this correspondence, which is
more or less what Adorno
says about the magical use of scientific
rituals]. The whole process is implicit, and
scientists have largely abandoned 'sense
datum language' for 'basic language' as a
form of operationalism. Although such
statements can be formalised, the basic
problem remains unanalysed -- how do basic statements get attached to nature?
Are scientific statements like this really
tautologies or empirical statements? Laws or
definitions? The problems are glossed by
teaching science and by practising it: this
is how scientists simply 'recognise' some
correspondence between a theorem and some
actual examples, almost as a matter of
selective perception. Analogies are
involved. Indeed, using analogies can lead
to progress, as different analogies have
different implications -- thus Galileo saw
the motion of a pendulum as analogous to
that of a rolling ball. [If I remember the
discussion in the earlier book, pendula were
seen before Galileo as falling bodies]. Thus
'group-licensed resemblances' lie at the heart of correspondences between theories and nature. Exemplars are the main device to share these. Perceptions and gestalts come first, and only then are explicit criteria referenced, if at all.
Data is usually seen as what is given to
experience. 'Basic statements' play an
important role, as we have seen [remember
that these are simple existential statements
-- that something exists]. However our
sensations as such are not immediate and
given. A good deal of processing of stimuli
is required, and there is no 1:1
relationship between stimuli and sensations.
Scientists have to learn procedures here,
and these are shared in scientific
communities. Kuhn accepts that this means
there are literally different worlds.
Learning takes place primarily from
ostension [showing people what things
mean] and feedback, and is intended to help
people identify suitable discriminations. It
is these discriminations that lead to
different data. There are no definite
correspondence rules, although symbolic
generalizations are useful. Indeed, it can
be unhelpful to be too explicit about these
correspondence rules, since this exercise
may weaken the community's cognition.
Boundary disputes are inevitable but are
usually dealt with by compromise rather than
logical choice. Fixed rules are unnecessary
and can be unhelpful, and what is better is
a recognition that learned perceptions of
similarity are more useful: this can allow
the exploration of analogies.
In conclusion to this chapter, we can now
dispense with the term 'paradigm'
altogether, as long as we agree that the
main feature of science is its peculiar
process of learning its procedures.
Chapter 13
This deals with objectivity, value
judgments and theory choice. Paradigm shift
is not simply a matter of logical proof but
a matter of argument and persuasion.
Progress takes place through collective
decisions to prefer one rather than another.
The exercise is rhetorical, but there are
some shared criteria of good theory:
(1) Accuracy is to be preferred, providing some measure of agreement between deductions and observations
(2) Consistency, both internally and
externally
(3) Broad explanatory power, so that a good
theory has many consequences
(4) Good theories are simple and unifying
(5) Good theories are fruitful
There are difficulties with each of these, however: for example, each criterion can be vague, and they may contradict each other, as could (1) and (3).
Turning to actual examples, the Copernican
system was not more accurate than the
Ptolemaic one until Kepler was able to
devise more precise measurements. There are
different problems as well -- for example
oxygen theories give more accurate readings
of the weight relations in reactions, but
phlogiston theories give a more accurate
account of the similarities between metals
and the dissimilarities between ores. Both
Copernicus and Ptolemy were consistent, but
with different background knowledges, and
neither was really simpler in usual senses.
Choice is clearly influenced by other
criteria. For example, Kepler was influenced
by the neo-Platonic movement, and Darwinism
became acceptable because it resonated with
nineteenth-century British social thought.
Personality factors are important too, such
as willingness to accept risk. These are
accepted by historians as important but
subsequently ignored by philosophers:
(a) Philosophers have a belief in the
development of a full logical algorithm for
theory choice, with the subjective elements
as variables that have not yet been
controlled. But this is an unattainable
ideal.
(b) Subjective elements are acceptable as
affecting the discovery process, but are
seen as having nothing to do with subsequent
justifications, such as testing, which has
to be 'objective'. There are lots of
examples to the contrary however. Pedagogy
has a more important role than justification
procedures; crucial experiments take place
typically after the scientific community is
convinced; each decision has a context --
for example, some solutions are seen as more
powerful and are therefore preferred.
(c) Even if there is some sort of
algorithm, there are often different
possible candidates for theoretical
explanation. The increasing unanimity which
develops among scientists is not explained
solely by some emergent logical and
objective process of theory choice.
(d) The criteria cannot be simply universal
and compelling, or no puzzles or
disagreement would arise. Scientists operate
with vague norms or maxims rather than
rigorous criteria, and these are affected by
value systems. Social utility is often an
important consideration, for example.
Further, allowing disagreement is an
important professional value, and maybe the
only way to produce new research.
(e) Progress is as mysterious a process as
induction [another dig at Popper]. No one
has a full explanation. The vagueness of
scientific rules permits 'normal curve
interpretations' [that is, probabilistic
ones?]. This leads to an argument for the
growth of large scientific communities and
the toleration of individual freedom, at
least as minimal requirements. The social
milieu is again denied as an important
determinant.
Overall, the traditional criteria listed
above (1 to 5) might well be permanent, but
the value attached to them varies, and each
can assume different applications and
different weights. For example, accuracy now
offers a means of quantitative agreement in
all sciences, whereas once it was important
only in astronomy. Utility was much more
important in the past for chemists than for mathematicians. There has been a gradual removal of questions of qualitative
accuracy (for example, the oxygen theory of
combustion abandoned any attempts to explain
the colour, texture or other qualities of
the substances involved).
Terms like 'subjective' and 'objective'
need clarification. They are clearly
interlinked, but we are accustomed to call
judgments 'subjective' if they involve
matters like mere aesthetics. The term
implies that judgments are non-discussable.
The interesting thing about scientific
judgments is that they include subjective
elements but they are usually also the focus
of considerable discussions.
Finally, there is inevitably partial
communication between the advocates of
different theories. Each advocate tends to
appeal to his own ground, and words function
differently in each system. As a result,
'choice' is often more like a conversion
than a logical decision. Results do help to
persuade, of course, and investigation of
results is often a key way to develop
familiarity with the new language.