
A Companion to Philosophy in Australia and New Zealand

I


Idealism

Martin Davies & Stein Helgeby

The honour of being the first to teach philosophy in Australia belongs to the Congregationalist minister Barzillai Quaife (1798–1873), who taught in the 1850s; but the formal teaching of philosophy did not begin until the 1880s, with the establishment of universities (Grave 1984).

Two approaches have dominated Western philosophy in Australia: Idealism and materialism. Idealism was prevalent between the 1880s and the 1930s, but dissipated thereafter. It was particularly associated with the work of the first professional philosophers in Australia, such as Henry Laurie (1837–1922), Francis Anderson (1858–1941), William Mitchell (1861–1962) (who rejected the label) and a second generation including W. R. Boyce Gibson (1869–1935). Idealism in Australia often reflected Kantian themes, together with the British, particularly Scottish, revival of interest in Hegel through the work of the ‘Absolute Idealists’ T. H. Green (1836–1882), F. H. Bradley (1846–1924) and Henry Jones (1852–1922), the last of whom conducted a popular lecture tour of Australia (Boucher 1990).

A number of the early New Zealand philosophers, including Duncan MacGregor (1843–1906), William Salmond (1835–1917) and Francis W. Dunlop (1874–1932), were educated in the Idealist tradition and were influential in their communities, but produced relatively little. William Anderson (1889–1955), at Auckland, brother of John at Sydney, was the only New Zealand philosopher who seemed to retain Idealist views.

In Australia, materialism gained prominence through the work of John Anderson, who arrived in the country in 1927, and it continues to be influential. John Anderson had been a student of Henry Jones, who can therefore be said to have influenced both strands of Australian philosophical thought.

The Idealism found in Australasian philosophy is best characterised as a set of concerns rather than as a single body of doctrine. Starting from consciousness, and particularly from moral activity, Idealists pursued moral, metaphysical and religious themes, underpinned by a unified account of the world. They acknowledged the development of the materialist natural sciences, and accepted the findings of natural science in all areas other than those relating to self-consciousness and the activities of mind. They refused to reduce consciousness or mind to matter, and made mental and moral experience central to their account of the unity of the world. The Idealists took little interest in, and were often hostile to, other doctrines that have sometimes been labelled ‘Idealist’, such as the attempt to resolve all reality into mental phenomena (see, for example, Miller 1930: 10).

The early Australasian philosophers taught across very wide fields. Francis Anderson was to teach ancient thought, modern philosophy, ethics, metaphysics, logic, psychology, politics, sociology and economics. Mitchell said his chair was more like a sofa, since it was to cover philosophy, economics, literature, education and psychology (Smart 1962). MacGregor had both medical and philosophical training, and was a surgeon, before later becoming Inspector of Lunatic Asylums (Tennant 1993).

Throughout the period of Idealism’s intellectual dominance, philosophy was conceived in relation to psychology, and particularly in terms of how they should be distinguished from each other. The main professional organisation was the Australasian Association of Psychology and Philosophy. Its journal published articles in each field, dropping the term ‘psychology’ from its title only in 1946. Whilst there was interest in empirical psychology conducted in the laboratory, there was an important distinction to be made between this form of psychology, allied to natural science, and the philosophical psychology that served as a path into broader, metaphysical, thought. This psychology was characterised, in the work of Henry Laurie, William Mitchell and W. R. Boyce Gibson, by a focus on the method and results of introspection and conceptual clarification. It served to establish key philosophical themes, including the relationship of mind and matter and the status of the ‘self’ or personality. Gibson, for example, was a noted ‘personal Idealist’, who placed considerable emphasis on the irreducibility of ‘personality’ (Helgeby 2006).

When seen in the context of later developments in philosophy in Australia, the Idealists appear to eschew technical argument and definition. Their approach to philosophy appears more literary in orientation, but also directed to a wider audience. A particular strength of the Idealist approach to philosophy was the ability to bring multiple critical and constructive perspectives to bear on an issue. An example of this was the tendency to approach a key theme, such as the concept of personality, simultaneously from a moral, metaphysical and psychological perspective. Idealists typically sought to grasp an issue in its fundamentals, and to clearly convey the insight they had gained.

As in Britain, Idealism in Australia was a philosophy of engagement with moral and social issues, and many of its adherents made significant practical contributions to society, education and politics. Francis Anderson was influential in the reform of teaching in New South Wales, and through the League of Nations Union. Mitchell became a prominent university administrator (Edgeloe 1993), as did others such as E. Morris Miller (1881–1964), who was influenced by Kant and Laurie. In New Zealand, MacGregor was an advocate of social Darwinist policies, having retained much of the influence of Herbert Spencer, who was reviled by most Idealists (Numbers and Stenhouse 2000; Tennant 1993). William Anderson engaged with issues associated with educational reform and the development of ‘self’ and character (Anderson 1928, 1944).

Between the 1940s and today, Idealism declined as an intellectual movement. Philosophers such as A. Boyce Gibson, son of W. R. Boyce Gibson and his successor as professor at the University of Melbourne, were frequently called Idealists. While he had been influenced by both ‘absolute’ and ‘personal’ Idealism, he characterised himself as a kind of empiricist, but one who was ‘wide of the type’ (Gibson 1970: 101). Only in the 1980s did a significant reappraisal of British Idealism begin, influenced by a perceived alignment between modern ‘communitarian’ social philosophy and the social and political ideas of Green and his successors. More recently, it has been the social and political theory of Australian Idealists that has kept their memory alive (Hughes-Warrington and Tregenza 2008). But Idealism was a far broader movement and it is only through its metaphysical and moral dimensions that its social philosophy can be understood.

 

(Further reading: Anderson, F. 1922, 1935; Anderson, W. 1944; Dunlop, F. 1908; Gibson, W. 1902, 1904, 1907, 1909, 1914; Laurie 1902; Mitchell 1907, 1926, 1929; Quaife 1872; Salmond 1888, 1911.)

Identity Theory of Mind

Daniel Stoljar

The identity theory of mind says that the mind is—i.e. is identical to—the brain, and in particular that individual mental states, such as being in pain, suddenly remembering where your car keys are, or smelling a lemon, are identical to particular physical states of the brain. The theory played an important role in the development of Australian philosophy. It was formulated and defended by Australian and Australian-associated philosophers in the 1950s and ’60s, in particular U. T. Place, J. J. C. Smart, D. M. Armstrong and David Lewis, and the basic principles of the theory continue to have a significant presence in Australian philosophy, for example, in the program for metaphysics set out in Jackson (1998b).

Why does the identity theory require defending? Isn’t it a part of scientifically informed commonsense that the mind is the brain? So it might initially seem. But for many philosophers in the 1930s and ’40s, the identity theory involved a logical confusion, a category mistake as Ryle (1949) put it, and so could not be true, regardless of how appealing it might seem on the surface. It is not that such philosophers were dualists in the traditional sense; far from it. Rather, their view was that mental sentences such as ‘I am feeling melancholy’ do not report facts, and, related to this, that mental expressions such as ‘the feeling of melancholy’ do not denote items or things in the world and so do not denote things in the brain. If that is right, an identity statement such as ‘the feeling of melancholy = brain state S’ brings together expressions that have different logical jobs, and so is itself neither true nor false. One motivation for this view is that it seems to provide a way to dissolve the traditional mind-body problem, i.e. the problem of whether materialism or dualism is true. For if ‘the feeling of melancholy = brain state S’ is a category error, so too is ‘the feeling of melancholy = spiritual state S*’. Hence both sides in the traditional mind-body debate are confused. Another motivation is the simpler one that mental terms do not seem to mean the same thing as physical terms, and indeed are drawn from quite different linguistic and conceptual environments.

It was against this background that the identity theory was developed. Following Stubenberg (1997), one might divide the identity theory in the 1950s and ’60s into two versions. One version is associated with Herbert Feigl (see Feigl 1967) and other philosophers who had moved from Europe to the U.S. in the 1930s and ’40s; Stubenberg calls this the ‘Austrian’ version. Another version is associated with philosophers either working in Australia or who spent considerable time in Australia—the ‘Australian’ version. One thing that divides the Australian version from the Austrian version is that the Austrians were more concerned with the physical part of the identity theory, i.e. with the question of what brain state or physical state various mental states are to be identified with. For the Australians, by contrast, the emphasis was more on the nature of mental states, and in particular on providing an analysis of what various mental states consisted in. (It is worth noting, however, that later developments in Australian philosophy of mind seem in considerable sympathy with the Austrian version of the identity theory; see, e.g. Chalmers 1996 and Stoljar 2001.)

Within the Australian version of the identity theory, one might make a further division between two temporal phases in its development. U. T. Place and J. J. C. Smart, then colleagues at the University of Adelaide, are the key figures of the first phase. In Place’s 1956 paper, ‘Is Consciousness a Brain Process?’, we find an acceptance of the point that identity statements such as ‘the feeling of melancholy = brain state S’ bring together expressions that have very different meanings and so are not true by definition. But even if not true by definition, Place insisted that they are (on occasion) nevertheless true. To be more precise, Place insisted on something close to this. For in fact Place’s focus was not on the identity statements as such, but on statements such as ‘her table is an old packing case’, which he thought of as involving what he called the ‘is’ of constitution. It is natural to read Place here as saying that the mind is constituted by the brain rather as a table might be constituted by an old packing case. In turn, this seems to entail that a table (or the mind) might have properties that the old packing case that constitutes it (or the brain) does not. But then Place is not defending an identity theory strictly speaking. The reason is that the sense of identity that is at issue in these debates is the logical one, according to which if x = y, then every property of x is a property of y; correlatively, if even one property of x is not a property of y, then it is not the case that x = y.

Like Place’s ‘Is Consciousness a Brain Process?’, Smart’s major paper ‘Sensations and Brain Processes’ (1959b) did not argue positively for the identity theory. Rather, Smart’s stated goal was to remove any conceptual barriers to that theory. (Smart in fact says that his paper is simply a complement both to Place’s paper and to Feigl 1967, the key document of the Austrian version of the identity theory.) What distinguishes Smart’s contribution, however, is his very sharp focus on identity statements, and therefore on logical features of the identity theory. In particular, Smart’s point was to exploit Frege’s distinction between the sense of an expression and its referent in order to defend the identity theory from the objection that it is a category error. According to Smart, a statement like ‘the feeling of melancholy = brain state S’ brings together expressions, not with different logical jobs, but with different senses; hence such statements might be true if the referents of the relevant expressions are identical.

While ‘Sensations and Brain Processes’ contains responses to a lot of objections to the identity theory, perhaps the most famous is Objection #3, which was advanced by Max Black and is sometimes called ‘Black’s Objection’. The objection is important because in Smart’s reaction to it we see ideas that later emerged as functionalism, that are important to Lewis’ and Armstrong’s later versions of the identity theory, and that generated a literature on what is now called ‘the property dualism argument’ (see, e.g. White 2002). In essence the objection is this. Take ‘the feeling of melancholy = brain state S’ again. As we have seen, according to Smart, this statement brings together expressions with different senses, or, to put it in Fregean terms, is a cognitively significant identity statement. But it is plausible to suppose in general that if ‘a=b’ is cognitively significant, then there must be some property F expressed by ‘a’, and some property G expressed by ‘b’, which are distinct from each other. (It is the properties that are distinct here, not a and b, which are of course identical.) So, in particular, if ‘the feeling of melancholy = brain state S’ is true and cognitively significant, there must be a property expressed by ‘the feeling of melancholy’ (call it ‘Mel’) which is distinct from whatever property is expressed by ‘brain state S’. But what this suggests is that the truth of Smart’s identity theory is compatible with a version of property dualism, the reason being that the property associated with the feeling of melancholy—i.e. Mel—might be (for all that has been said) a non-physical property. But Smart certainly intends to rule out property dualism; his whole point is to defend materialism. So he is required to say, not only that the feeling of melancholy is some brain state, but also that Mel is identical to some physical property. But at this point a problem of principle emerges. For suppose there is some physical property, call it ‘Phel’, which is such that ‘Mel = Phel’ is true. Presumably this identity claim is cognitively significant just as the original one was; hence Black’s objection emerges all over again, and a regress looms.

To block this regress, Smart made two suggestions. The first was that it is not necessary, for the defence of materialism, that Mel be identical to a physical property; it is sufficient if it is identical with a topic-neutral property, where a ‘topic-neutral’ property is one which is neither physical nor mental but is instead quasi-logical, i.e. involving such general notions as causation, tendencies, dispositions and so on. The second suggestion was that there is a topic-neutral property, call it ‘Tel’, which is such that the identity statement ‘Mel = Tel’ is true and yet is analytic or true by definition (though perhaps a rather unobvious definition). In effect, this involves a modification of the idea that the identity theory is a hypothesis unrelated to considerations of meaning. Smart did think that the statement ‘the feeling of melancholy = brain state S’ was a hypothesis which had nothing to do with the meaning of the terms. But he nevertheless thought that, in order to respond to Black’s objection, there must be an analysis of the sense of ‘the feeling of melancholy’ into non-mental, topic-neutral terms. So Smart’s proposal is that the referents of mental terms are physical but that the senses of such terms are topic-neutral. This puts him in direct conflict with philosophers as different from each other as Ryle (1949) and Kripke (1980), both of whom deny that there is any definition of the mental into anything else at all.

Smart’s paper was distinctive because of its focus on logical features of identity and because of its appeal to topic-neutral language. It was distinctive in two further ways as well, ways that mark a division between the first phase of the Australian version of the identity theory and the second phase, in which Armstrong and Lewis are the key figures. First, Smart (and Place) insisted on a divorce of sensory and perceptual mental states, on the one hand, from cognitive and conative mental states (i.e. beliefs and desires) on the other. Indeed both Smart and Place believed that Ryle’s behaviourist position was correct about the latter. By contrast, the positions developed by Armstrong and Lewis in the mid 1960s were intended to apply to all mental states whatsoever. This is particularly the case in Armstrong (1968), where the theory is applied to many kinds of mental states, including beliefs, perception, sensations, and introspection.

Second, Smart thought that the identity theory was an empirical hypothesis in the sense that it was made probable by scientific observations, but as we have seen he did not argue positively for the identity theory. Both Lewis (1966) and Armstrong (1968) did provide such a positive argument however; indeed, it is no exaggeration to say that this argument provides one of the lasting insights of the Australian version of the identity theory. Taking pain and c-fibres as a classic example of a mental and physical state respectively, the argument can be set out as one in which the key statement of the identity theory (3) follows logically from two others, and in which the support for these two premises is mentioned in parentheses:

1. Pain = the state that occupies causal role R (by conceptual analysis)

2. C-fibres firing = the state that occupies causal role R (by empirical inquiry)

3. Ergo, pain = C-fibres firing (by the transitivity of identity).
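Spelled out, the logical skeleton of the argument is just an application of the symmetry and transitivity of identity; the following is a standard reconstruction in our notation, not the authors’ own:

\[
p = o_R \quad\text{and}\quad c = o_R
\;\Longrightarrow\; o_R = c \ \text{(symmetry)}
\;\Longrightarrow\; p = c \ \text{(transitivity)},
\]

where $p$ is pain, $c$ is C-fibres firing, and $o_R$ is the state that occupies causal role $R$.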

This argument is simple on the surface but it contains a number of complexities, complexities well brought out in the extremely rich discussions of these matters by Lewis and Armstrong. An element that is central for Armstrong is the idea, present in both (1) and (2), that pain is the state that occupies a causal role R. In developing and defending this idea, Armstrong connects it with another strand of thinking in Ryle, namely his account of dispositions, and via that with the broader philosophical discussion of dispositions. An element that is central for Lewis is the spelling out of what ‘causal role R’ might come to. Lewis’ discussion of these matters connects the identity theory to ideas about definition associated with Ramsey and Carnap (see, e.g. Lewis 1972). A third element, important for both Lewis and Armstrong, is the claim that (1), if true, is true by conceptual analysis. This element is required by Lewis and Armstrong for much the same reason it was required by Smart, viz. to respond to Black’s objection.

How plausible is this argument and its conclusion? One objection that was historically important is the so-called ‘multiple realisation’ objection. According to this objection, which was promoted by Putnam (1968) and Fodor (1974), mental properties cannot be identical to physical properties, for physically very different creatures (e.g. Martians) could all instantiate the same mental property. This objection received a classic treatment from the point of view of the identity theory in Lewis (1983a). A different objection targets the claim that (1) is true by conceptual analysis, and so is necessarily true. This part of the Lewis-Armstrong argument for the identity theory is extremely controversial. Some, influenced by traditional dualist thinking, argue that no such conceptual analysis is possible in this case. Others, influenced by Ryle, continue to suppose that mental and non-mental terms do different jobs and so cannot be conceptually equivalent. Still others, influenced by Kripke, say that there are no conceptual connections here at all. These issues, and issues like them, are still very much a part of contemporary philosophy, and not simply of philosophy of mind. This goes to show how deep the Australian contribution to the identity theory runs.

Induction, The Problem of

John Fox

Introduction

Statements merely about what we have experienced do not entail generalisations, or predictions; yet in everyday life and in science accepting some of these is indispensable. The classical problem of induction was that of justifying such acceptings on empiricist standards. But for any positivistic philosophy, which valorises science yet insists on rigorously logical arguments from strictly empirical premises, the problem of justifying such acceptings is acute; indeed, insoluble. In the mid twentieth century, when such philosophy was dominant, this was widely accepted as an awkward but demonstrated fact of life. Salmon reported it as a consensus. The Humean condition, Quine said, was the human condition.

An obvious move might be to extend logic beyond deduction; so Carnap proposed an analytic inductive logic. But this could not do the trick. For analytic inferences are non-ampliative, and so do not yield conclusions about the future, the general or the unobserved, even with probability, from premises merely about the past, the particular or the observed.

Now any demonstration fails unless its premises and form of inference are all more secure than the negation of its conclusion; for otherwise it is at least as reasonable to reject what is not so secure as it is to accept the conclusion. And some general beliefs, or beliefs about the future, are so intuitively reasonable that they count as relatively hard facts, refuting theories of rationality that, if true, would rule them out.

Even if the classical problem is recognised as insoluble, and the standards are relaxed, the problem of justifying such beliefs (and such modified standards) remains, and deserves the title ‘problem of induction’. But this problem transforms into the much more general one of providing an adequate epistemology. So I will not deal with it in this sketch.

Many intelligent philosophers (e.g. Popper, Goodman, Feigl) who saw the classical problem as insoluble devised different problems, closely enough related to Hume’s to earn the label ‘problem of induction’, but which possibly could be solved. I will consider three Australian philosophers (Ellis, Clendinnen and Stove) who arguably made significant contributions to solving problems of this kind, explaining briefly in each case what particular problems they were addressing, barely touching on relevant parts of the international scene. I will also briefly mention some work that has helped clarify or reconceptualise the problems or helped settle some questions on their fringes.

Vindication?

Feigl (1950) drew a contrast between validation and vindication. It relies on a controverted contrast between the epistemic and the merely pragmatic. To validate a belief is epistemic and evidential: it is to show that it is true, or objectively probable, in the light of evidence. (Validating an inference can be characterised analogously.) Validating induction he saw as the insoluble problem. For his alternative problem he coined the term ‘vindicating induction’. I shall use it more generally for any alternative problem of this kind: to vindicate a belief (or inference) is to show, or provide good reason to accept, the reasonableness of accepting (or making) it, in some way that nevertheless does not provide epistemic or evidential support for it; i.e. which provides no validation. A paradigm of ‘vindication’, of belief in God, is Pascal’s ‘wager’. Reichenbach (1938) and Salmon (1955, 1961) would develop vindications of induction. In Australia, Ellis and Clendinnen did so in rather different ways.

Ellis (1965) accepted the validation/vindication distinction and undertook to provide only the latter. He criticised the straight rule of induction, which Reichenbach and Salmon had ‘vindicated’, on many scores: for example, because the desirable features chosen for it (e.g. yielding only mutually consistent predictions) are absent from the scientific methods we sensibly do use, and because in many contexts we would not consider it reasonable to follow (if a carefully examined, homogeneous and symmetrical coin came up heads in 550 of 1000 tosses, we would not extrapolate to a long-run relative frequency of 55%). Ellis argued that no rules determine, on the basis of known facts about particulars, what unique theory they best support. So inductive rules (understood as determinative rules for making probability judgments solely on the basis of such facts) are not enough.

Ellis also argued that scientific theories have a legitimate role in what probability judgments we should make. For without such theoretical understanding, we would have only syntax to guide us in our use of inductive rules. But, he argued (generalising Goodman’s ‘grue’ cases and the curve-fitting problem), it is mathematically demonstrable that if syntax were our only guide, any inductive rule could be used to justify any of an infinity of possible projections of any sequence of events. If it were rational to reject our theoretical understanding of a situation solely on the basis of conjecture about the future, all such understanding could rationally be rejected out of hand. So such rejection is irrational. Theoretical involvement is necessary for rational non-demonstrative argument. So, he concluded, our projections of the future should preserve our theoretical understanding of reality unless or until we are forced by experience to change it.
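Ellis’ point generalises familiar underdetermination cases; a minimal illustrative sketch (hypothetical, not Ellis’ own example) shows two rules that agree on all observed data yet project the sequence differently:

```python
# A grue-style illustration: two rules that fit every observation made so far
# but diverge on the next, unobserved case. All names here are hypothetical.
observed = [0, 1, 2, 3, 4]  # the sequence observed so far: s_n = n

def rule_straight(n):
    """Project the obvious pattern: s_n = n for all n."""
    return n

def rule_bent(n):
    """Agree with rule_straight on everything observed, then diverge."""
    return n if n <= 4 else 99

# Both rules fit the data perfectly...
print(all(rule_straight(n) == s for n, s in enumerate(observed)))  # True
print(all(rule_bent(n) == s for n, s in enumerate(observed)))      # True
# ...yet they make rival projections; syntax alone cannot choose between them.
print(rule_straight(5), rule_bent(5))  # 5 99
```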

In effect Ellis defended principles of theoretical and conceptual conservatism. His picture of scientific practice was like that propounded as normal in Kuhn’s The Structure of Scientific Revolutions (1962), and he was vindicating the rationality of scientific practice as described by an account that was widely being condemned as irrationalist.

Arguably, despite what he thought at the time, Ellis’ was not a vindication in Feigl’s sense, but a contribution to a less empiricist theory of rationality. Later, Ellis (1998) offered as a new and more comprehensive solution something which was explicitly not a mere vindication. For it was based on epistemic values, by contrast with mere instrumental utility. But I am not here discussing alternative epistemology.

If validation is impossible, even pragmatic vindications face a severe problem. To calculate the expected utility of some course of action one needs estimates both of the desirability of the outcome and of the probability of the action having that outcome. Without the latter, the ‘validation component’, the vindication does not get off the ground.

Would-be vindicators have on the whole been sophisticated philosophers, and they have taken this point. Reichenbach and Salmon, for instance, claimed not the utility of their variants of induction, but rather their conditional utility: that if any predictive rule had utility of a certain sort, their variants would.

John Clendinnen (1966) developed this somewhat complex strategy. Here’s a sketch of the argument. By an ‘objective’ method he meant one that ensured that those who followed it agreed; by ‘concordant’, he meant not giving rise to contradictory conclusions. He argued that induction was the only objective method of prediction, that objective methods are the only ones we could have good reason to believe were concordant, and that being concordant was a necessary condition for success. So, he argued, we should prefer objective methods, and so induction. Vindication!

An important exchange between Frank Jackson (1970a, 1970b) and Clendinnen (1970) advanced the discussion. Jackson supposed the rationality of induction, but argued against the particular vindication. He argued, for example, that concordance was a necessary condition not only for ensuring success but also for ensuring failure, so that it was not a necessary condition for ensuring success rather than failure, which was what we desired; that it can be more rational to follow a method that merely happened to be fairly successful than an unsuccessful one that had a feature necessary for being shown capable of reliable success; and that it can be more rational to follow a method that has that feature (though we can’t show it) and is in fact fairly successful than one which we can show has that feature but is not.

In reply, Clendinnen rejigged his argument. His basic claim, in the end, was that if the world were orderly enough, induction would work; but if it were not, there would be no rational way of predicting. Jackson did not disagree. But he pointed out that if these were both true, then induction would only be rational if it would work. So it could only be shown to be rational by showing something that, with Clendinnen’s claims, implied that it would work.

Surely a chess commentator would put a double exclamation mark by this. For Jackson’s point shows that even if induction can be vindicated Clendinnen-style, it is only to the extent that it is validated. But the point of vindicating rather than validating induction is to show that induction is rational even though we can’t show that it will work.

I think that points like this last one were a main reason for the decline in popularity of the vindicators’ strategy.

Clendinnen (1982) provided a variant vindication, arguing that to trust any alternative methods to induction was irrational, as all had the character of arbitrary guesses; and that a notion of simplicity, which ruled out Goodman-type anomalies, had to be understood as part of the specification of induction, and ensured that induction lacked this arbitrariness.

David Stove

Probably the most notable Australian philosopher of induction was David Stove. A great deal of his work was devoted to destroying inductive scepticism.

He took five tacks. First, he tried to refute inductive scepticism, or closely related positions, using the calculus of probabilities, interpreted as a calculus of strength of arguments; he did this mainly in Stove (1973). Second, in his witty polemic (Stove 1982), he tried to destroy the credit and repute of those he took to be the currently most influential inductive sceptics. Third, he argued that deductively valid arguments do not have various features they are commonly taken to have which provide desirable contrasts with inductive arguments (e.g. that all arguments of the same form preserve truth). Fourth, he bluffly asserted (or pointed out) the undeniable reasonableness of various inductive arguments, and their lack of need of further justification. Finally, he tried to provide something further (reason not the need), by supporting or improving a method fathered by D. C. Williams and grandfathered by Laplace. These last three tacks feature in Stove (1986). Only the first and the fifth need discussing here.

The first depends on the theorem that when P(p) and P(q) are both ≠ 0, P(p/q)/P(p) = P(q/p)/P(q). So evidence alters the probability of an hypothesis in the same ratio as the hypothesis alters the probability of the evidence. Actually, Stove never pointed out quite this; he focussed on the special case where the initial probability of the evidence is less than 1 and the hypothesis entails the evidence, so that the hypothesis raises the probability of the evidence, and the evidence in turn raises the probability of the hypothesis.
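The theorem is a two-line consequence of the definition of conditional probability (a standard derivation, writing $P(p \mid q)$ for the text’s $P(p/q)$):

\[
\frac{P(p \mid q)}{P(p)} \;=\; \frac{P(p \wedge q)}{P(p)\,P(q)} \;=\; \frac{P(q \mid p)}{P(q)}, \qquad P(p), P(q) \neq 0.
\]

In Stove’s special case, if $h$ entails $e$ then $P(e \mid h) = 1$, so when $P(e) < 1$ we have $P(h \mid e)/P(h) = P(e \mid h)/P(e) > 1$: the evidence raises the probability of the hypothesis.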

So, he argued, there are non-deductive arguments that raise the probability of their conclusions. This refutes both what he calls ‘deductivism’ and what late in his book he calls ‘Hume’s inductive scepticism’. All the variants of such refutations by Stove or his students (e.g. Gemes 1983, 1989) took standard probability axioms as self-evident, interpreted as a calculus of the strength of inferences.

Though he thought of this interpretation as Carnapian, I think it was original. Fox (1975) argued against it, from a Stovean assumption: that tautologies provide no grounds for inferring contingent claims.

Earlier in the book Stove had carefully argued that ‘it is not about all inductive inferences, but only predictive-inductive ones, that there exists an argument in Hume which is explicit and clear, and which ends in a sceptical conclusion’ (1973: 29). The example offered of such an inference was ‘This is a flame, and all of the many flames observed in the past have been hot, so this is hot’. Many reviewers (Blackburn 1973, Hinckfuss 1974, Fox 1975) pointed out that Stove’s argument does not refute scepticism about these. Their point, roughly, is that Stove’s argument shows that if you have won the first three legs of a quadrella your chances of winning the quadrella have gone up, but not that your chances of winning the final leg have.
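The reviewers’ point can be put numerically; here is a minimal sketch with hypothetical figures, assuming independent legs:

```python
# Illustrative numbers only: independent legs, each with an assumed 0.4 chance.
# Winning the first three legs raises the probability of winning the quadrella,
# but leaves the probability of winning the final leg exactly where it was.
p = 0.4

p_quadrella_before = p ** 4   # win all four legs, before any racing: 0.0256
p_quadrella_after = p         # having won three legs, only leg 4 remains: 0.4
p_leg4_before = p             # 0.4
p_leg4_after = p              # still 0.4: conditioning adds nothing here

print(f"P(quadrella): {p_quadrella_before} -> {p_quadrella_after}")  # goes up
print(f"P(final leg): {p_leg4_before} -> {p_leg4_after}")            # unchanged
```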

Later Fox (1999) claimed to refute an even weaker version of deductivism, by showing that certain invalid arguments cannot consistently be deemed unsound. He called an argument ‘sound’ just if, whenever its premises are true, it is reasonable to accept its conclusion. Proof examples are trivial: e.g. any instance of ‘It is reasonable to believe that p; therefore, p’. He called such arguments ‘epistemic syllogisms’. Though this proof was trivial, he claimed it was of some interest in that epistemic syllogisms were required to complete, for example, Newton’s otherwise deductive derivation of the law of universal gravitation. This was partly in reply to some of the argument in Alan Musgrave (1993: esp. 170–5). Musgrave had deviated from Popper to the extent of agreeing that tentative acceptance even of laws and theories could be rational, but argued that the good reasoning involved in displaying such rationality could be construed as involving only deductively valid inferences. Newton’s argument for gravitation was a paradigm of such.

Fox’s conclusion was overstated. For the inconsistency is shown only on the assumption that some judgments of reasonableness are fallible, and a deductivist can consistently deny this. In refuting deductivism given fallibilism, all that is shown inconsistent is deductivist fallibilism.

Fox also tried to defuse some apparent oppositions between partisans of induction and of deduction, by arguing that precisely if there are sound inductive forms of argument, science can be given an adequate rational reconstruction using only noninductive forms of argument; but also that precisely if scientific argument can be so analysed, there must be sound inductive forms of argument.

Stove’s later positive work on induction was a much more sophisticated attempt to use probability theory. Whether or not it was an improvement on the much neglected Williams (1947), it was important in bringing the latter back to general attention.

Stove interpreted Bernoulli’s ‘law of large numbers’ as involving two different interpretations of the probability calculus: one to do with proportions or frequencies of attributes in populations, the other his logical interpretation, to do with strength of arguments. The law so read stated the strength of certain population-to-sample (direct) inferences. Laplace had tried to derive a similar theorem about sample-to-population (inverse) inferences, but the derivation had been widely criticised for relying on a dubious ‘principle of indifference’. Williams had tried to rehabilitate a broadly Laplacean approach.

Williams had in effect appealed first to the intuitiveness of the following claim about paradigm population-to-sample inferences: that the strength of (e.g.) the argument ‘90% of As are Bs; this is an A / this is a B’ is 0.9. His crucial ingenious move was then to take as his As, samples, and as his Bs, samples matching the population in some relevant respect and to some desired degree of accuracy; so that if the respect were ‘being C’ and the desired degree ‘within 3%’, a sample of which 54.5% were C would match a population in which 57% were C. It can be shown that even with samples of fairly small sizes—3000, say—and considerable margins of error, the vast majority of samples match the population. So, simply applying the statistical syllogism to these values of A and B, one gets in effect strong sample-to-population inferences. (Double exclamation marks.)
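The combinatorial claim at the heart of this move is easy to check by simulation; here is a minimal sketch using the figures in the text (a population 57% of which are C, samples of 3000, matching within 3%):

```python
import random

random.seed(0)  # reproducible illustration

def match_fraction(pop_p=0.57, n=3000, margin=0.03, trials=2000):
    """Estimate the proportion of size-n samples whose frequency of Cs
    falls within `margin` of the population frequency pop_p."""
    hits = 0
    for _ in range(trials):
        k = sum(random.random() < pop_p for _ in range(n))
        if abs(k / n - pop_p) <= margin:
            hits += 1
    return hits / trials

# The vast majority of samples match the population, as Williams claimed:
print(match_fraction())  # ~0.999
```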

At first thoroughly persuaded, Stove came to see flaws in Williams’ argument and set himself to remedy them; his two readings of probability remedied some. Other problems had to do mainly with the generality of its conclusion. Thus, concerned solely with rebutting inductive sceptics, and not with developing a theory of sound inductive inference, Stove focussed on a particular example and did not try to defend any general principle.

Patrick Maher (1996) provided a Bayesian critique of Williams-Stove. Adequate discussion of their work is still, I hope, to come. But even if successful, their work does not tackle the classical problem of induction. For they start by assuming as obviously acceptable the soundness of certain non-deductive inferences: statistical syllogisms. They claim that these entail such consequences as the validation of some generalising or sample-to-population inductive inferences. This claim of entailment is not incompatible even with Humean empiricism.

Frank Jackson, Robert Pargetter and John Bigelow

Various unpublished La Trobe papers had some influence in samizdat. Parts of a 1970 conference paper by Fox are summarised above in the introduction and in the argument that all vindication requires some validation.

A claim was (and still is) standardly made: that while merely adding new information as premises never alters the validity of deductive arguments, it can alter the soundness of inductive arguments. In an unpublished 1972 paper, in the course of clarifying the role of the principle of total evidence, Frank Jackson undermined this standard claim. The point was roughly this.

Sometimes we reasonably argue inductively, from facts A to conclusion B. Two assumptions are required for this inference to be reasonable. The first is the special case of some principle of total evidence, that when our total relevant evidence is A it is reasonable to infer B. The second is that A is our total relevant evidence as to whether B. Given both these and A, we can in fact deduce that it is reasonable for us to conclude that B. Further information can indeed lead us rationally to reject this conclusion; but it does so not by merely adding a premise, but by falsifying the second assumption above. The original argument is still valid; but with an essential premise no longer true, it no longer counts as a good argument for its conclusion.

Robert Pargetter and John Bigelow acknowledged Jackson’s influence in their 1997 paper, ‘The Validation of Induction’. This did not address that topic in Feigl’s or my sense; rather their claim, like Musgrave’s, was that even in sound inductions only valid arguments were involved. Their important argument generalised Jackson’s.

Take any good inductive argument with premises P and conclusion C. It is only good if it is reasonable, in those conditions and given its premises, to accept C. Call some suitable spelling out of the conditions ‘M’, and consider the argument with premises M and P, and the conclusion C*: ‘It is reasonable to accept C’.

Pargetter and Bigelow’s key assumption was that truths of the theory of rationality were like truths of mathematics; in particular, that claims like ‘Given M and P, C*’ are, if true, necessary. Not epistemically necessary (they were not infallibilist about epistemology); but still, true necessarily (in all worlds). So they analyse any good inductive argument from P to C as enthymematic for one from M and P to C*, and this in turn as having a suppressed major premise which is a necessary truth; so that all good inductive arguments are deductively valid!

Tim Oakley (1998) argued, against Pargetter and Bigelow, that there were irreducibly sound inductive arguments which shared a property of cogency that a deductivist analysis did not capture. In reply, Bigelow and Pargetter (1998) argued against this, but somewhat modified their original conclusion by taking on board the (then unpublished) argument of Fox (1999), that though in such arguments C* is reached by deductively valid means, C itself is not, for reasonableness does not entail truth; so that even on their construal, the inductive argument to its original conclusion is not valid, requiring also an epistemic syllogism.

The Rationality of Induction as Contingent on the Way the World Is?

Some (e.g. Kornblith 1993) had argued that induction works because of the way nature is. This takes the success of induction for granted and proposes an explanation of it, and so does not answer a Humean sceptic. Howard Sankey (1997) argued rather that induction is justified because of the way nature is; and so that a cluster of metaphysical doctrines about the existence of natural kinds provides a solution of Hume’s problem.

When our predicting that unobserved As are Bs is usually correct, what makes it so might be that As are by nature the kind of thing that is essentially B; though it might (in general, or just for different values of ‘A’ and ‘B’) be merely that nearly all As happen to be Bs. Any of a variety of ways the world may be would ensure our being usually right. But what does this have to do with rationality?

Stove (1986: ch. 1) had devastatingly mocked the enterprise of justifying induction on the basis of assumptions about nature. His mockery rested on the intuition that the ways things are are contingent, but that inductive support is a matter of pure logic, and as such, necessary.

But some theory of rationality may consistently present what manners of inferring are rational as contingent on the way the world is; e.g. on what makes such manners as a rule likely to lead to truths. Bigelow and Pargetter suggested that in a theory of pure rationality that comprises only necessary truths, some such truths are of the form: ‘In circumstances C, it is rational to trust method M’. Sankey can be construed, in their terms, as suggesting that we want a theory of what methods it is actually rational to trust, that induction is such a method M, and that such a theory will be contingent on C (e.g. an abundance of natural kinds) and therefore will not categorically pertain to the theory of pure rationality.

So such a strategy cannot be ruled out. For it to yield a solution rather than a sketch of one, I suggest, it requires a defensible theory of pure rationality. Sankey’s sketch postulates inference to the best explanation as rational. He hints that the success of science suggests, as a best explanation, a special fit between the mind and the world, a somewhat neo-Aristotelian conclusion; given such a fit, he argues that induction is likely to be successful.

Sankey’s strategy, therefore, is to assume as intuitively sound a particular non-deductive kind of inference—in this case, to the best explanation—and thence to argue that some straight inductive inferences, usually considered much more problematic, are also sound. Interestingly, the Laplace-Williams-Stove strategy does this too, choosing instead as more basic the statistical syllogism.

Bayesianism

Bayesianism has in some ways continued, in modified form, Carnap’s project of inductive logic; but it has taken on board almost all of Popper’s critique of inductivism (see Fox 2000). Probably most recent Australian work on induction has been done in a more or less Bayesian framework. I don’t know of any that has notably advanced the framework, but there is one development I find of much interest. It was provoked largely by challenges Lakatos and Feyerabend provided in the 1970s. Lakatos had argued that philosophies of science should be evaluated largely by their adequacy to the history of science, and that all but his own fell foul of this test; Feyerabend had argued that all accounts of rational methodology that also deemed much recognised great science rational could be rebutted by showing that their paradigm examples did not fit their principles. Both were scathing about the inadequacy of all inductivisms.

Jon Dorling (1979) rose to these challenges by arguing that the Bayesian analysis of the history of science explained vastly more as rational than any Popperian or Lakatosian account could, and in particular it could provide a general solution of the Duhem problem—that of explaining when, in the face of unexpected results, it was rational to reject a core theory, and when to reject instead some auxiliary hypothesis. He illustrated this with a case in which Bayesian moves overwhelmingly dictated retaining the theory and abandoning the auxiliary hypothesis. His paper was intended to function as a Kuhnian paradigm, and to provide the basic techniques for solving many other historical conundra. It has not had the attention it deserves, but it has inspired a few impressive pieces of research and analysis.
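A toy calculation conveys the shape of a Dorling-style Bayesian treatment of the Duhem problem; the priors and likelihoods below are hypothetical illustrative numbers, not Dorling’s own case study:

```python
# A minimal Bayesian sketch of a Dorling-style Duhem analysis.
# T = core theory, A = auxiliary hypothesis, e = the anomalous observation.
# All numbers are hypothetical and chosen only for illustration.
pT, pA = 0.9, 0.6  # priors for T and A, assumed independent

likelihood = {            # P(e | T?, A?)
    (True, True): 0.001,  # e is nearly impossible if both T and A hold
    (True, False): 0.5,   # if A fails, e is unsurprising even given T
    (False, True): 0.05,
    (False, False): 0.5,
}

def prior(t, a):
    return (pT if t else 1 - pT) * (pA if a else 1 - pA)

p_e = sum(prior(t, a) * likelihood[(t, a)]
          for t in (True, False) for a in (True, False))
post_T = sum(prior(True, a) * likelihood[(True, a)] for a in (True, False)) / p_e
post_A = sum(prior(t, True) * likelihood[(t, True)] for t in (True, False)) / p_e

print(f"P(T|e) = {post_T:.3f}")  # ~0.887: the core theory is retained
print(f"P(A|e) = {post_A:.3f}")  # ~0.017: the auxiliary hypothesis is abandoned
```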

In Australia Neil Thomason (1994) in particular contributed to this program, arguing, for example, that the very arguments of Galileo that Feyerabend had cited as showing that he flouted all proposed canons of rationality could be plausibly and naturally construed as straightforwardly Bayesian reasonings.

Maureen Christie (2000) presented a case history of the discovery of the cause of the hole in the ozone layer. The participants, under Popperian influence, thought of the role of the spectacular observations as refuting a dominant theory, but no: it was only via the support they gave to a rival theory, previously considered improbable, that the dominant view was discredited. This ‘ultra-inductivist’ acceptance of the chlorine theory also fits a Bayesian picture like a glove. The acceptance of statistical thermodynamics on the basis of Einstein’s analysis of Perrin’s experiments on Brownian motion—which Feyerabend had misdescribed as a Popperian falsification of classical thermodynamics—was a comparable case; as indeed was taking the phases of Venus as discrediting Ptolemaic models on the basis of the support they gave to Copernicanism.
