
A Companion to Philosophy in Australia and New Zealand

C


Canberra Plan

Daniel Nolan

The expression ‘Canberra Plan’ has two connected uses. The primary use is to pick out a particular kind of philosophical analysis. This form of philosophical analysis is a close relative of the so-called ‘Ramsey-Carnap-Lewis method of defining theoretical terms’; some would even use the two expressions as synonyms.

The less common use is to pick out a cluster of views associated with staff and students in the Philosophy Program of the Research School of Social Sciences at the Australian National University, particularly during the 1990s. As well as including a commitment to the Canberra Plan in the first sense, the Canberra Plan in this second sense included various metaphysical commitments, like physicalism and four-dimensionalism in the philosophy of time, and commitments in the philosophy of language, like the use of two-dimensional semantics. The remainder of this entry will concentrate on the first, narrower sense of ‘Canberra Plan’.

The origin of the expression ‘Canberra Plan’ was in drafts of O’Leary-Hawthorne and Price (1996), where it was used as an uncomplimentary label for the method of philosophical analysis described above. As O’Leary-Hawthorne and Price say, ‘Canberra’s detractors often charge that as a planned city, and a government town, it lacks the rich diversity of “real” cities. Our thought was that in missing the functional diversity of ordinary linguistic usage, the Canberra Plan makes the same kind of mistake about language’ (O’Leary-Hawthorne and Price 1996: 291, n23). The label has stuck: see, for example, Lewis (2004: 76 and 104, n3), who talks about the ‘Canberra Plan’ for causation, referring to the theories proposed in Tooley (1987) and Menzies (1996). A number of ‘Canberra Planners’ have since reclaimed the expression and would use it to describe their own projects.

In many ways the centrepiece of the Canberra Plan is the Ramsey-Carnap-Lewis approach to theoretical terms. Those interested in the technical details should consult Lewis (1970) and Lewis (1972): only an informal characterisation will be given here. The core of the method is to treat the targets of philosophical analysis as implicitly defined by a defining theory. The defining theory is then manipulated to give us a ‘role’ expressed in relatively neutral terms: for example, applying the Ramsey-Carnap-Lewis approach to folk psychology is supposed to provide, at least in principle, a specification of what beliefs, desires, etc. are entirely in terms of their typical causal role in being produced by sensation and causing bodily movements. Once we have a role (e.g. for beliefs and desires), we look to our theory of the world to tell us what it is that plays that role: what the best deservers are. According to Lewis (1970), the best deservers to be the referents of expressions like ‘belief’ and ‘desire’ are certain brain states, since these are the things with the right sorts of causal relationships to sensation on the one hand, and behaviour on the other.
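The formal shape of the method can be sketched as follows (a schematic simplification, not Lewis’ official notation). Let T(t) be the defining theory with the target term t made explicit, the rest of the theory being couched in old, well-understood vocabulary. Then:

T(t) (the defining theory, or postulate)

∃x T(x) (the Ramsey sentence: something occupies the t-role)

∃x T(x) → T(t) (the Carnap sentence: if anything occupies the role, t does)

t = ιx T(x) (the explicit definition: t is the occupant, or best occupant, of the role)

On Lewis’ approach the Ramsey and Carnap sentences separate the factual content of the theory from its definitional content, and the explicit definition is what licenses identity claims once we discover what actually occupies the role.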

Sometimes when we use a defining theory of X to tell us that X is whatever satisfies such-and-such conditions, we end up with a role that nothing satisfies perfectly. For example, Jackson and Pettit (1995) suggest we use something like folk morality to yield a ‘role’ played by moral goodness, fairness, justice, right action, and the rest. But given the vagaries of ordinary moral opinion, it is unlikely that anything will have all of the features attributed to, e.g. moral goodness. The best deserver need not satisfy all of the conditions specified by a role: it is only important that it satisfy enough, and satisfy more than its rivals.

So far, then, we have a proposal for producing the definitions of problematic expressions and a way of establishing identities between theoretical posits (e.g. mental states and brain states). Two other things need to be supplied before such a process can be carried out. We need to be able to come up with a suitable defining theory in the first place, and we need to know what there is in the world to serve as potential ‘best deservers’, or role-fillers, for the second stage of the process. Canberra Plan analyses typically have distinctive approaches to these two questions as well.

When dealing with theoretical terms in the sciences, it might be reasonable to expect that we can find a canonical theory, or group of canonical theories, to use as our basis. But there is not usually a ready-made theory to use when we want an analysis of free will, or right action, or truth, or properties. A procedure Canberra Planners often engage in at this stage is called ‘collecting the platitudes’. The platitudes concerning a certain topic are significant truths about that topic that are implicitly believed by most, or all, competent speakers. This technical use of ‘platitudes’ is to be distinguished from the ordinary meaning of that word—there is no requirement that the Canberra Plan platitudes should be immediately obvious or apparently uninteresting. When the Canberra Plan is applied to the philosophy of mind, the aim is to articulate folk psychology: by analogy, the aim when gathering the platitudes about other topics is often to articulate ordinary or ‘pre-theoretic’ commitments about the topic.

There is much less discussion in the current literature of how to determine what potential role-fillers there are than there is about how to determine what the roles are. Many of the central Canberra Planners, like David Lewis, Philip Pettit and (these days) Frank Jackson, are naturalists and physicalists, and so in many cases Canberra Planners will assume that the sought-for deservers will be physical objects or states or properties of some sort. Physicalism is not necessary to employ Canberra Plan methods, of course. But philosophers without a settled fundamental ontology face a significant question at this stage: where should we look to discover what potential ‘deservers’ there are to fill the roles specified by the first stage of the process?

So far I have been describing the Canberra Plan method as something that collects all the relevant platitudes, constructs a role that only a few things might meet, and locates a best deserver for an eventual identity claim. But sometimes it can be useful to carry out only part of this program. For example, one could construct enough of the role, and be opinionated enough about what the range of potential deservers is, to answer some questions of philosophical interest, even if no specific theoretical identity claim could be established. In fact, this partial use of the plan was what happened in Lewis (1970) and Lewis (1972): Lewis did not purport to actually provide a complete folk psychological theory to be Ramsified, nor did he say exactly which brain states were identical with each belief or desire. Instead, Lewis provided only a few platitudes, and drew the general conclusion that beliefs and desires must each be some brain state or other.

Philosophical analysis on the Canberra Plan is often associated with defending the respectability of a priori philosophical knowledge, and with armchair methods in philosophy more generally. (One of the seminal papers for the Canberra Plan was Frank Jackson’s 1994 paper ‘Armchair Metaphysics’.) This method would at least arguably give a priori results if the assembled platitudes implicitly defined the theoretical term in question. If it were analytic that X was whatever best played such-and-such a role, then the philosophical analysis produced would be a priori, at least given the usual assumption that analytic truths are a priori. This approach to the a priori is associated with Rudolf Carnap (see Carnap 1963: 958–63).

The key papers inspiring the Canberra Plan are Lewis (1970) and Lewis (1972), along with the development of analytic functionalism about the mind in Lewis (1966) and Armstrong (1968). Later papers where Lewis discusses his method of philosophical analysis also influenced Canberra Planners (see Nolan 2005: 213–27 for a survey of Lewis’ method). As well as those papers, there are a number of works that stand as prime examples of applications of the ‘Canberra Plan’. Jackson (1998b) is probably the most central, a book-length defence of an approach to conceptual analysis in the Canberra Plan tradition. Jackson, Oppy and Smith (1994) is the paper that sparked the original ‘Canberra Plan’ label in O’Leary-Hawthorne and Price (1996). Tooley (1987) and Smith (1994) are both books that carry out Canberra-Plan-style analyses. Braddon-Mitchell and Nola (forthcoming) is a book of essays discussing the Canberra Plan by both its supporters and its critics.

 

(Work on this article was supported by a Philip Leverhulme Prize from the Leverhulme Trust.)

Canterbury, University of

Derek Browne

Few traces remain of the early history of philosophy at Canterbury. Courses in ‘mental science’ were taught from 1900, but there is no evidence that philosophy played anything other than a minor role in the life of the university for the next few decades. The future of philosophy at Canterbury even became a matter of public debate in 1936, when it was proposed to fill the newly created chair in philosophy with a psychologist. Public protests followed. Letters appeared in the local newspaper, arguing that the institution could not be counted a university until the chair was occupied by a philosopher. In 1937, I. L. G. Sutherland became professor of philosophy, even though he was a social psychologist. He remained in that position until 1952.

The profile of philosophy in the university was dramatically elevated with the arrival of the young Karl Popper in 1937. Popper brought modern European sophistication to the philosophy program, and intellectual provocation to the whole academic environment of an isolated, provincial university. He completed The Poverty of Historicism and wrote The Open Society and Its Enemies while at Canterbury, books he later described as his ‘war effort’ (Popper 1976: 115). Effort it must have been, for his teaching load was ‘desperately heavy’ (he had sole responsibility for teaching philosophy) and the university authorities were unhelpful to the point of hostility: ‘I was told that I should be well advised not to publish anything while in New Zealand, and that any time spent on research was a theft from the working time as a lecturer for which I was being paid’ (Popper 1976: 119). Popper made a lasting contribution to the whole university sector in New Zealand. He was instrumental in the promotion of a culture of research, and was a leading figure in the movement that transformed the University of New Zealand (as it then was) into a respectable research institution (Hacohen 2000: 499). He left New Zealand at the end of 1945.

A. N. Prior took over the philosophy program in 1947, as sole teacher of philosophy in what was now the Department of Psychology and Philosophy. An assistant lecturer was appointed in 1952, and Prior became the first real philosopher to occupy the chair of philosophy. He initiated a divorce between philosophy and psychology, and set about creating the modern philosophy curriculum at Canterbury. By 1951, a complete program in philosophy to Master’s level was available for the first time. And yet, like Popper before him, Prior rose above the daunting teaching load to produce a stream of publications of lasting significance. Logic and the Basis of Ethics was published in 1949, followed by major creative work in logic, including the invention of tense logic and fundamental contributions to the development of possible worlds semantics for propositional modal logic. Prior was publishing up to ten journal articles a year at this time. He left the university in 1958 to take up a chair at Manchester.

Michael Shorter had arrived from Oxford as a lecturer in philosophy in 1954, and he took over the chair in 1959. Under his leadership, Oxford styles of philosophy held sway in the department. The program flourished under Shorter, and by the time he returned to Oxford in 1970, the department had five full-time philosophers on board.

R. H. (Bob) Stoothoff took over the chair in 1970, and the department entered into a long period of stability. David Novitz joined the department at this time, and soon made an impact on the general intellectual life of the university, as well as making important contributions to philosophy of art internationally. The most significant publication of this era was the English translation in three volumes of the philosophical writings of Descartes, a project initiated by Bob Stoothoff, and carried out with Dugald Murdoch (also in the department at Canterbury at that time) and John Cottingham. For a time, Descartes was the most visible philosopher in the department.

Graham Oddie took over the chair on Stoothoff’s retirement in 1994, but his stay was brief, and Graham Macdonald and Cynthia Macdonald were appointed joint professors in 1998. With Jack Copeland, the Macdonalds continued the tradition of distinction in research at Canterbury. But their tenure too was brief, with Cynthia Macdonald leaving Canterbury for a chair in Belfast in 2005.

In recent times, under the influence of Copeland, the department has gained a reputation for work on Alan Turing, artificial intelligence, and theory of computation.

Philosophy at the University of Canterbury has been greatly enriched in recent years by a succession of Erskine Visitors, including Daniel Dennett, Fred Dretske, Ruth Millikan, Simon Blackburn, William G. Lycan, Stephen Stich, and a host of other prominent philosophers.

 

(Novitz [2008] was of great value in the preparation of this article.)

Causation

Brad Weslake

Philosophical work on causation in Australasia has been extraordinarily rich and diverse, and in a brief survey much important work must remain unmentioned. Here I provide a selective overview, designed to highlight particularly influential work and to indicate the diversity of the contributions made by Australasian philosophers.

Regularity Theories

I begin with the regularity theory of causation defended by J. L. Mackie (1965, 1974). According to Mackie, c is a cause of e just in case c is an insufficient but nonredundant part of an unnecessary but sufficient condition for the occurrence of e. This is often abbreviated as the claim that c must be an INUS condition for e (a term suggested by David Stove). This is a regularity theory because sufficiency is analysed in terms of causal laws, understood as a species of universal generalisation.
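Schematically (a simplified sketch of Mackie’s formulation, not his exact notation): c is an INUS condition for e just in case there are further factors A and Z such that, as a matter of causal law,

(c ∧ A) ∨ Z is necessary and sufficient for e

where c ∧ A is sufficient but unnecessary for e (the alternative Z would also do), and c is an insufficient but non-redundant part of that conjunction (c ∧ A suffices for e, but A alone does not). Mackie’s own example: a short circuit is an INUS condition for a house fire, since together with the presence of flammable material and the absence of a sprinkler it suffices for the fire, though fires can also be produced in quite different ways.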

Mackie’s account has a number of problems. First, the account assumes, contrary to empirical evidence, the truth of determinism. Second, as Mackie himself recognised, it says nothing about the direction of causation, since under the assumption of determinism if c is an INUS condition for e then e is also an INUS condition for c. Third, for related reasons also appreciated by Mackie, the account has problems discriminating between events that are correlated because related as cause and effect, and events that are correlated because related as effects of a common cause. Finally, the account raises a puzzle concerning our knowledge of causal relations. Surely we are not in general in a position to know any of the conditions sufficient for a given event; but then how do we know that a given event forms part of such a condition?

Difference-Making Theories

There is a natural move to make in response to these sorts of difficulties with regularity theories. Instead of analysing causation in terms of complicated regularity-involving conditions, analyse it in terms of the idea that causes make a difference to their effects. There are two main ways this basic idea has been pursued: (i) in terms of the idea that causes raise the probability of their effects; (ii) in terms of the idea that effects counterfactually depend on their causes. Both ideas have been pursued in great detail in the literature, but they face a number of problems.
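In their simplest forms (rough first statements only, which the literature refines considerably), the two ideas can be put as follows, for distinct actual events c and e:

P(e | c) > P(e | ¬c) (probability raising: c raises the chance of e)

¬O(c) □→ ¬O(e) (counterfactual dependence: had c not occurred, e would not have occurred)

where □→ is the counterfactual conditional and O(c) is the proposition that c occurs. On Lewis’ counterfactual account, causation is then the ancestral of this dependence relation: c causes e when there is a chain of counterfactual dependences running from c to e. This appeal to chains matters for the pre-emption cases discussed below.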

First, there is the problem of pre-emption, involving cases in which there is a non-active backup cause, the existence of which can show, depending on the case, that (a) the effect does not counterfactually depend on the actual cause, or (b) the probability of the effect is lowered by the actual cause, or (c) the probability of the effect is raised by the non-active backup cause.

Michael McDermott (1995, 2002) examines the pre-emption problem in the context of the counterfactual analysis of causation, and proposes a version of the counterfactual analysis that builds on the basic idea of the Mackie account. According to McDermott, a direct cause is a part of a minimal sufficient condition for an effect, as with Mackie. But a sufficient condition is analysed not in terms of causal laws but rather counterfactually, as a condition such that the effect would have occurred even if any other actual events had not occurred. Causation is then defined in terms of causal processes, which are in turn defined in terms of chains of direct causation.

Peter Menzies (1996), on the other hand, forcefully pressed the pre-emption problem for probabilistic theories of causation. The solution Menzies adopts is that the causal relation is to be theoretically identified as ‘the intrinsic relation that typically holds between two distinct events when one increases the chance of the other’ (p. 101). So on this view the difference-making relation is not definitive, but rather a defeasible marker, of the presence of the causal relation. One problem with this view is that it seems to rule out cases of causation by double prevention, in which causation seems both to depend on extrinsic facts and to occur independently of the existence of a physical relation between cause and effect.

A different problem concerning difference-making theories has been pressed by Huw Price (1992, 1996). Price argues that neither counterfactual nor probabilistic accounts of causation can ground the direction of causation, since the time-symmetric nature of the fundamental physical laws means that difference-making is symmetric in microphysics. Price argues that the best account of causation is instead an agency theory, according to which c causes e just in case bringing about c would be an effective means for an agent to bring about e. A similar account had earlier been defended by Douglas Gasking (see Oakley and O’Neill 1996). This account is defended against standard objections by Menzies and Price (1993). In subsequent work, both Menzies and Price have argued that causation is not purely objective, Price (2007) on the grounds that it is constitutively connected with the time-asymmetric perspective of agents, and Menzies (2004, 2007a) on the grounds that the most plausible counterfactual theories of causation require a contextually determined set of background conditions and default states against which difference-making counterfactuals are to be evaluated.

Process Theories

Appealing to a notion of causal process, as both McDermott and Menzies do, has been a popular way of responding to the pre-emption problem. Phil Dowe (2000) has elevated this idea into an analysis in defending a process theory of causation, on which causal relations between events are derivative on causal processes. Process theories of causation require a distinction between pseudo-processes and genuine processes, and an account of causal relevance. Dowe prefers accounts of these that rely on the notion of a process involving a conserved quantity. Process theories also require a distinction between causes and effects, and it is an issue with these theories that the resources required to make this distinction cannot be found among the phenomena with which they are centrally concerned. Dowe, for instance, prefers an account in terms of probabilistic relations that are only contingently satisfied by actual causal processes. There is also the question of how to reconcile the theory with our ordinary causal judgements, which seem to be justifiably made in the absence of evidence that would support any claim concerning conserved quantities. Finally, one of the central problems with any process theory is how to account for prevention, and causation by absence. Dowe (2001) claims that apparent causation by absence really involves a different relation, quasi-causation, defined in terms of counterfactuals. Among other problems, the question arises why, since quasi-causation plays the same role as causation in practical inference and explanation, it doesn’t count as real causation. If it should so count, then Dowe has at best stated an interesting empirical fact about certain cases of causation, rather than an analysis of causation in general.

Non-Reductionist Theories

One of the intuitions behind process theories is that causation is an intrinsic relation. This intuition has been defended in a different form by D. M. Armstrong (1999). According to Armstrong, the concept of causation is a primitive. Causation is, however, to be empirically identified with the instantiation of a universal necessitation relation between states of affairs. Armstrong has also, famously, argued that causation may be perceived. Like Dowe, Armstrong has a prima facie problem handling cases of prevention and causation by absence, since he denies the existence of negative states of affairs. In response, he endorses Dowe’s appeal to quasi-causation.

Causation as Explanation

Finally, Michael Strevens (2004, 2007, 2009) has defended an interesting inversion of the standard view of the relationship between causation and explanation. The standard view of those who defend causal theories of explanation is that we first have a complex difference-making theory of causation, and then a simple theory of explanation according to which explanation involves citing causes. According to Strevens, causation is a relationship between basic physical events that may be analysed either in terms of a Dowe-style process view or a difference-making view restricted to maximally fine-grained events. Explanation, on the other hand, involves abstracting away from these causal details in various ways, in order to identify difference-makers. The abstracting procedure is similar to Mackie’s criteria for identifying causes (Strevens 2004). Strevens goes on to claim that we can understand causal claims of the form c causes e as elliptical for claims of the form c causally explains e. Moreover, Strevens claims that this account solves traditional problems for difference-making accounts, such as pre-emption (Strevens 2007). If so, then perhaps Mackie was on the right track after all.

Centre for Applied Philosophy and Public Ethics

C. A. J. Coady

The Centre for Applied Philosophy and Public Ethics (CAPPE) is an Australian Research Council (ARC) Special Research Centre (SRC) founded in 2000. It was originally a joint venture between the University of Melbourne and Charles Sturt University, but later (in 2003) incorporated the Australian National University as a partner. Its specific SRC funding from the ARC expired at the end of 2008, but the three universities have agreed to continue the Centre in a federated form on a renewable three-year basis.

CAPPE is Australia’s major unit for applied philosophy and one of the largest in the world. It arose out of a small but flourishing Centre for Philosophy and Public Issues at the University of Melbourne and a fledgling Centre at Charles Sturt University (CSU) with a similar orientation. The Directors of the two Centres, Tony Coady at Melbourne and Seumas Miller at CSU, decided to make a joint application for a Special Research Centre when Coady’s efforts to secure the University of Melbourne’s support for a single bid based at that university were unsuccessful. (Applications to the ARC for SRCs could only be considered if nominated by the home universities, with a maximum of six nominations allowed, but Melbourne, in its wisdom, decided to put forward no applications from the Humanities, and so a joint application had to be submitted through CSU, with Miller as the designated Director. It was CSU’s sole application.)

In its first decade CAPPE has operated within six broad program areas, the focus and description of which have changed slightly over that time. Currently they are: Criminal Justice Ethics; Business and Professional Ethics; Ethical Issues in Biotechnology; IT and Nanotechnology: Ethics of Emergent Technology; Political Violence and State Sovereignty; Justice and the Human Good. All programs have achieved remarkable success in terms of research output and all have been successful in attracting research grant income. Authored books, co-edited books, journal articles and book chapters have been produced in abundance and with both academic and general impact: these include, to the end of 2008, 422 publications in refereed academic journals, 50 major review articles, 405 chapters in academic books, 62 authored academic books and 76 edited academic books. CAPPE has attracted many of the best-known applied philosophers in the world to its three campuses either as temporary visitors or as members of staff: full, adjunct or part-time. Its visitors include Howard Adelman, David Archard, Richard Arneson, Bernard Gert, Frances Kamm, Elizabeth Kiss, Arthur Kuflik, Loren Lomasky, David Rodin, John Tasioulas, and Michael Walzer. The staff associated with CAPPE in that period (many of whom continue with it) include Andrew Alexandra, Tom Campbell, Tony Coady, James Griffin, Marilyn Friedman, Jeanette Kennett, John Kleinig, Neil Levy, Larry May, Thomas Pogge, Igor Primoratz, Doris Schroeder, Peter Singer, Chin Liew Ten, Janna Thompson, and Suzanne Uniacke. Beyond research publications, CAPPE has been involved in numerous ethics consultations of various kinds, and has also been active in engaging the public in its work through publishing in non-academic journals and giving talks, interviews and so on for radio, television and online media.

Since the ‘reforms’ to higher education in Australia in the late 1980s, begun by the Labor Party’s federal education minister, John Dawkins, and continued by subsequent governments, the survival of university teaching and research is now predominantly dependent on how individuals and units can raise money by their own efforts. Beyond student fees, the strain of raising money from industry, donors, research bodies and the public sector has become an enduring preoccupation of academics and university decision-makers. Applied philosophers have better prospects for such funding than their ‘pure’ colleagues, but even so philosophy of any sort is bound to be low on the priorities of corporate sponsors. Inevitably, funding of initiatives like CAPPE, once direct ARC support finished, is a tenuous affair, though it must be said that CAPPE has had impressive successes in attracting ‘outside money’. Universities expect long-established departments or schools to find outside money to support their work, especially in research, so new ventures are under even greater pressure to attract such finance. All three universities have agreed to continue a level of funding that is welcome but minimal, and at the time of writing there have been staff reductions, especially at Melbourne. ARC Linkage grants and other collaborative efforts with non-academic bodies provide one of the key prospects for funding salaries and administrative support, but they have their problems. A major one is that heavy dependence upon important but basically service activities (such as researching integrity systems for a police force or drawing up a code of ethics for a professional group) inevitably makes inroads upon the task of more central philosophical research and creativity that must remain the core business of CAPPE and should inform its service functions. How CAPPE copes with this into the future will be a major challenge.

Charles Sturt University

John Weckert

Charles Sturt University (CSU) was established in 1990 from an amalgamation of the Riverina-Murray Institute of Higher Education (RMIHE) located at Wagga Wagga and Albury, and Mitchell College of Advanced Education at Bathurst (MCAE). There was some philosophy taught at RMIHE, including business ethics and computer ethics as part of professional courses in business and information technology respectively, but it was with the appointment of Seumas Miller as professor of social philosophy in 1994 that the discipline began to develop rapidly. New philosophy appointments were made in the following years, a philosophy major was established in the B.A. degree and there was considerable expansion of offerings in applied ethics, most notably in police ethics.

In late 1999 CSU philosophy received its second significant boost with the awarding of an Australian Research Council Special Research Centre for Applied Philosophy and Public Ethics (CAPPE). To date, CAPPE is one of only two Special Research Centres to be funded in the humanities. The new Centre was a joint project between CSU and the University of Melbourne, with CSU being the host institution and the director, Professor Miller, being a CSU employee. In 2004 the Australian National University (ANU) also became part of CAPPE, the CSU node of CAPPE having been based on the ANU campus since 2001. With the establishment of CAPPE more philosophers, particularly in applied ethics, were employed at CSU, and it quickly gained a significant international reputation in this field. In recent years a number of important appointments have been made jointly by CSU-CAPPE with overseas universities, currently Oxford University, John Jay College of the City University of New York, and Washington University in St. Louis. Recent significant appointments include Professor John Kleinig, the leading international expert on criminal justice ethics, and Professor Larry May, one of the foremost international researchers in global justice. As of 2008, CSU employs eighteen philosophers full-time or part-time, seven at professorial level and all at level B or above. Most of the appointments are, or include, half-time research positions with CSU-CAPPE.

A notable feature of philosophy at CSU has been the emphasis on applied ethics and the number of consultancies and Australian Research Council Linkage Grants gained by CSU philosophers.

The postgraduate program is growing, both in the coursework Masters area and in the number of Ph.D. students. Currently six Ph.D. students are enrolled and three have graduated in the last few years.

Classical Logic

Greg Restall

Philosophical logic in Australasia is much more famous for innovation in modal logic and other areas of non-classical logic than for work in core, traditional classical logic. For accounts of those themes in research in Australasian philosophy, the reader is referred to the entries on modal logic, non-classical logic and relevant logic. There is some work in philosophical logic that remains to be covered, and that is the subject of this article. It is fitting that in a region most famous for non-classical logic, the entry on classical logic should focus on work that is not non-classical.

Classical logic here is understood as traditional two-valued propositional logic and its extensions with quantifiers, as introduced by Frege, Russell and Whitehead, and which became dominant in logic teaching and research throughout the world from the middle of the twentieth century to this day. Classical propositional logic can be taught in many ways, with truth-tables, or with a proof technique such as ‘natural deduction’, and in most philosophy departments throughout Australia and New Zealand logic teaching forms a part of the first or second year program for philosophy students, whether as a compulsory unit or as one of a few options. The teaching of classical logic in Australasia has been a distinctive feature of the philosophy curriculum there, and so it is with this topic that we will start. From there, we will look at two prominent issues in research in philosophical logic in Australasia that also count as not non-classical logic.

Logic Teaching in Australasia

Logic teaching throughout Australia and New Zealand has played an important part in the activities of researchers in philosophy departments. For a significant number, it has not been an adjunct to research, but a core activity. From the Masters Program in Logic established by Malcolm Rennie, Len Goddard and Richard Routley (later Richard Sylvan) at the University of New England in the 1960s, to new textbooks replacing the teaching of Aristotelian logic with classical logic, by Charles Hamblin at the University of New South Wales, and Malcolm Rennie and Roderic Girle at the University of Queensland, logic teaching modernised significantly through the 1960s and ’70s (see Hamblin 1967, and Rennie and Girle 1973). Girle’s work, in particular, saw logic become a part of the high school curriculum in Queensland, and through his work, the Australian Logic Teachers’ Journal was founded in 1976, and lasted through ten years of publication—a vital resource for logic teachers in secondary and tertiary education throughout Australia and beyond.

Girle’s pedagogy was radical: he taught logic using Raymond Smullyan’s tableaux (tree) technique, a proof system which is mechanical enough for students of many levels to be able to master, yet also has pleasing formal properties that make technical results in logic (soundness and completeness, decidability, etc.) straightforward to explain (Girle 2002). This technique has seen broad adoption throughout the region, and Girle has made use of it in teaching classical logic to many generations of students, at the University of Queensland and now the University of Auckland.
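To give the flavour of the technique (a standard textbook example, not one drawn from Girle’s own texts): to show that p → (q → p) is a tautology, a tableau begins with its negation and applies decomposition rules until every branch closes:

1. ¬(p → (q → p)) (assumption, for refutation)

2. p (from 1)

3. ¬(q → p) (from 1)

4. q (from 3)

5. ¬p (from 3)

× (lines 2 and 5 close the branch)

Since the only branch closes, the negation is unsatisfiable and the original formula is valid. Because each rule simply breaks a formula into its components, students can apply the method mechanically, and the metatheoretic results about it are unusually easy to motivate.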

Introducing logic by way of tableaux has moved beyond Australia, to be adopted in many centres around the world, and the technique has been extended far beyond classical logic to be the centrepiece of Graham Priest’s widely used Introduction to Non-Classical Logic (2001) as well.

Classical Logic and Language

Charles Hamblin, mentioned above, was a logician trained at the University of Melbourne and the London School of Economics; he returned to Australia to become professor of philosophy at the University of New South Wales, where he worked until his death in 1985. His research in logic was instrumental in the development of research in computer science in Australia: in the 1950s he developed a programming language (based on ‘Polish’ notation, familiar from work in logic in this period) for the third computer available in Australia. He also worked on the logic of imperatives, the categorisation of fallacies, and on formal treatments of the rules of dialogue (Hamblin 1971), where the formal rules of classical logic play just one part in a larger system of rules for asking questions, giving answers, etc. This work has been taken up by other Australasians, such as Jim Mackenzie and Rod Girle. Hamblin, furthermore, provided a close analysis of classical logic itself: for example, his under-appreciated paper on a fragment of classical logic (Hamblin 1973) presages more recent work on tractable languages, and translations between natural and formal languages.

Type Theory

Another strand of Australasian work in classical logic is found in a tradition of research on type theory. Type theory, which dates back at least to Russell and Whitehead’s Principia Mathematica, goes beyond classical predicate logic by adding higher domains of quantification: not only can we talk about all things (the domain at level 1), but also all collections of things (the domain at level 2) and collections of collections of things (the domain at level 3), etc. A hierarchy of this straightforward kind is a ‘simple’ type theory. Russell himself thought that one might need to complicate the picture: perhaps talk of higher categories of things facilitates the description of more things lower down in the hierarchy, and if we think of these stages as stages of construction, then perhaps we need to keep track of this. Perhaps the hierarchy has to be ‘ramified’, and when we talk of all of the many different collections of things, we need to be aware of whether we need to specify in advance which collections we mean before we can construct things based on them: a definition or construction satisfying this kind of restriction is said to be predicative. Allen Hazen (of the University of Melbourne) has done a great deal of research on predicative logic (Hazen 1983) and Russell’s ramified type theory (Hazen 2004).
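A standard illustration of what is at stake (a textbook example, not one specific to Hazen’s papers) is the Frege–Russell definition of natural number:

N(x) := ∀F [(F(0) ∧ ∀y (F(y) → F(y+1))) → F(x)]

This defines the property N by quantifying over all properties F, a range that includes N itself, and so the definition is impredicative. A ramified theory assigns N a level above that of the properties quantified over, and a predicative construction must respect that stratification.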

Work in type theory has not been of merely historical or purely theoretical interest within formal logic itself. Classical type theory has been applied to other areas of logic, most prominently in Australasian work on the logic of adverbs.

Adverbial Modification: With Types and Without

Malcolm Rennie, also mentioned above, wrote a short monograph on applications of type theory in natural languages (Rennie 1974). One of those applications is in the logic of adverbs. We may say that Sam sliced a bagel, and when we do this, we use the name ‘Sam’ and the predicates ‘bagel’ (to distinguish those things that are bagels and those things that are not) and ‘sliced’ (to distinguish those pairs of things where the first sliced the second from those where the first didn’t slice the second). To say that Sam sliced a bagel is to say that there is some thing that is both a bagel, and is such that Sam sliced it. We can also say that Sam carefully sliced the bagel. Here ‘carefully’ is not a predicate in the same way: Sam was not ‘carefully’ (though he was perhaps careful), and neither was the bagel. ‘Carefully’ in that sentence modifies the predicate ‘sliced’. It is one thing to slice, and another to carefully slice. Rennie’s work on type theory classified the logic of different kinds of adverbs. Merely finding a place in a hierarchy of types is not the end of the story: different kinds of adverbs combine in different kinds of ways. Gunsynd is a champion and a miler and a racehorse: it follows, for example, that he is a champion racehorse, but it does not follow that he is a champion miler. (Perhaps he is a champion over another distance.) In Rennie’s work these kinds of differences are classified and an account is given of how we are to understand them.
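The type-theoretic treatment can be made explicit (a schematic rendering, not Rennie’s own notation). If a one-place predicate has type e → t (a function from individuals to truth values), then a predicate modifier like ‘carefully’ has the higher type (e → t) → (e → t):

sliced a bagel : e → t

carefully : (e → t) → (e → t)

carefully (sliced a bagel) : e → t

Classifying adverbs then amounts to specifying which inference patterns a modifier of this type supports: for ‘carefully’, from carefully(F)(x) we may infer F(x), while for a modifier like ‘allegedly’ the corresponding inference fails.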

This type-theoretical account of adverbs is taken up in a larger setting in the work of Maxwell J. Cresswell, chief among many others (see, e.g. Cresswell 1985a). In this expanded context of work in logic and linguistics, the resulting typed logic is richer than Rennie’s original setting, as modal and contextual features play a vital role. Not only are there types for objects at level 0, and truth values, and constructions out of them at different levels, but also for possible worlds and other indices of evaluation such as speakers and times. In this work, type theory meets modal logic to form a mainstream tradition in formal semantics in the work of Montague, Cresswell, Partee and others.

This approach to the logic of adverbs and other predicate modifiers was not the only one pursued in Australasian logic in the second half of the twentieth century. Barry Taylor’s work took things in another direction, in which the towering conceptual structure of a never-ending hierarchy of types is traded in for a slight increase in ontology (Taylor 1985). Instead of thinking of the predicate modifier ‘carefully’ as a predicate of a higher type, we can think of it as an everyday predicate describing items in the world in the same manner as do the predicates ‘bagel’ or ‘sliced’. The trick is to admit that the items are different. What is careful is not the bagel, or perhaps not even Sam, but the event of the slicing. Taylor argues that one should follow the work of Donald Davidson, and hold that when we say that Sam carefully sliced a bagel, we say that there is an event which is a slicing of a bagel by Sam, and which is careful. In this way, the adverb becomes a predicate describing an item, where the item is now a concrete event. Taylor’s work extends Davidson’s logic of adverbs by giving a rich account of the structure of events as a species of the wider genus of states of affairs. The result is a picture of adverbial modification that allows one to avoid a hierarchy of types and stay within the realm of first-order classical logic, at the modest cost of an ontology of events and states of affairs.
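On this Davidsonian analysis (a standard schematic rendering), ‘Sam carefully sliced a bagel’ becomes an ordinary first-order sentence quantifying over events:

∃e ∃x (Slicing(e) ∧ Agent(e, Sam) ∧ Bagel(x) ∧ Object(e, x) ∧ Careful(e))

The inference from ‘Sam carefully sliced a bagel’ to ‘Sam sliced a bagel’ is then simply a matter of dropping a conjunct, with no higher-type apparatus required.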

Australasian work on predicate modifiers in a rich logical setting has not ended with the work of Rennie, Cresswell and Taylor. Recent approaches to the topic by Lloyd Humberstone (of Monash University) have shown that there are many insights remaining to be mined in this area (Humberstone 2008).

Into the Future

There is no doubt that classical logic will play an important role in philosophy teaching and research in Australasia into the future. It is hard to say what shape that might take. One hint of where this may go is in some recent work of the author (Restall 2005), which presents a new defence of classical logic, connecting it to other themes in the norms and pragmatics of assertion and denial.

Clinical Ethics

David Neil

Broadly speaking, ‘clinical ethics’ refers to ethical issues arising in relation to the medical treatment of patients. Clinical practice is an area where ethical and legal theory are closely connected and the main topics one would find in a textbook of medical law parallel the central concerns of the literature on clinical ethics: consent to medical procedures, the duty to provide information (particularly information salient to consent), risk, privacy and confidentiality. The most difficult and contentious areas of both clinical ethics and medical law are at the boundaries of life and the boundaries of autonomy. At the beginning of human life we confront the issues of abortion, child destruction (the destruction of a foetus capable of being born alive) and the withholding of treatment for severely ill or disabled newborns. Ethical or legal problems at the end of life concern the justification for withdrawal of treatment, euthanasia and the concept of brain death. The key questions here are: under what circumstances and by what means may death be hastened and what should be the criteria for a legal determination of death? Issues at the boundaries of autonomy concern the ethical and legal status of proxy consent for non-autonomous patients, the justification of paternalism, and the moral and legal force of advance directives.

Distinct ethical issues arise in virtually every area of clinical practice and consequently a wide range of philosophical theory finds practical application in the domain of clinical ethics. For instance, debates around the killing of embryos and foetuses, the concept of brain death and the status of severely brain damaged patients are necessarily concerned with metaphysical problems: personal identity, defining the beginning and ending of a distinct human life, and the relationship of the person to the body. Theories of distributive justice bear on judgments about resource allocation and decisions about which treatments will be made available to which patients. The notion of patient autonomy, which has acquired a central place in clinical ethics, must ultimately appeal to a philosophical account of the meaning and value of autonomy. New kinds of medical information, such as genetic information, raise novel problems for theories of privacy and property. And, in general, substantive normative claims in clinical ethics are typically grounded in a normative ethical theory, such as utilitarianism. One approach to normative theorising that is indigenous to medical ethics is known as ‘principlism’ or ‘the four principles approach’, developed by Beauchamp and Childress (2008). The four principles approach requires that clinical dilemmas be assessed in terms of the principles of respect for autonomy, beneficence, non-maleficence and justice, and that a chosen course of action should aim to ‘balance’ respect for those principles. This approach to ethical analysis, however, does not provide any substantive method for resolving conflicts between those principles in concrete cases and has been criticised as reducing to a form of ethical intuitionism.

Historically, clinicians’ duties and patients’ rights are founded in the common law. However, medicine is increasingly a focus of legislative attention and one sign of the growing level of social concern with issues of clinical ethics is the amount of new Commonwealth and State statutory law regulating clinical practice. Recent changes in medical law include:

  • legislation on advance treatment directives and the powers of patient-appointed surrogate decision-makers: Medical Treatment Act 1988 (Vic); Guardianship Act 1987 (NSW); Consent to Medical and Palliative Care Act 1995 (SA); Powers of Attorney Act 1998 (Qld); Medical Treatment (Health Directions) Act 2006 (ACT); Natural Death Act 1988 (NT); The Acts Amendment (Advance Health Care Planning) Bill 2006 (WA)
  • legislation regulating assisted reproduction: Infertility (Medical Procedures) Act 1984 (Vic); Reproductive Technology Act 1987 (SA)
  • legislation controlling cloning: Prohibition of Human Cloning for Reproduction Act 2002 (Cwlth); Regulation of Human Embryo Research Amendment Act 2006 (Cwlth)
  • legislation requiring medical practitioners to give patients sufficient information for informed choice in accepting or refusing treatment: The Health Services Act 1988 (Vic) section 9(e) (other jurisdictions have enacted similar provisions)
  • legislation regarding privacy in health care: In 2001 provisions were added to the Privacy Act 1988 (Cwlth) setting out 10 ‘National Privacy Principles’ which apply to health service providers
  • legislation limiting civil liability in negligence claims: Wrongs Act 1958 (Vic); Civil Liability Act 2002 (NSW); Civil Liability Act 2003 (Qld); Civil Liability Act 1936 (SA); Civil Liability Act 2002 (WA); Civil Law (Wrongs) Act 2002 (ACT); Civil Liability Act 2002 (Tas); Personal Injuries (Liabilities and Damages) Act 2003 (NT).

In addition to legislative means, the development and enforcement of standards in clinical ethics have involved the publication of a number of guidelines, statements of patients’ rights and professional codes of ethics. For instance, the National Health and Medical Research Council published the General Guidelines for Medical Practitioners on Providing Information to Patients (2004) and the Australian Commission on Safety and Quality in Health Care has developed the Australian Charter of Healthcare Rights (endorsed by Australian Health Ministers in July 2008). Although not law themselves, such documents may be referred to by the courts in interpreting the common law. Similarly, Medical Practitioners Boards may refer to codes of ethics in disciplinary procedures against clinicians charged with breaching their ethical duties.

The General Guidelines for Medical Practitioners on Providing Information to Patients were developed on the basis of a joint inquiry by the Australian, New South Wales and Victorian Law Reform Commissions and a resulting report in 1989, entitled Informed Decision Making About Medical Procedures. Empirical studies conducted by the Victorian Law Reform Commission in 1986–87 revealed that doctors were giving patients less information than patients wanted. The report concluded that there existed a pervasive attitude among doctors that it was in patients’ best interests for doctors to decide what information should be given and what treatments their patients should undergo (NHMRC 2004). The struggle to change the historically paternalist culture of medicine and to make respect for patient autonomy a primary duty of clinicians has been at the heart of the development of both clinical ethics and medical law in Australia.

This trend towards both specification and enforcement of ethical standards in clinical practice can be attributed to several factors, including: technological advances in medicine (which have both raised public expectations of medical treatment and generated debate around controversial procedures); greater general awareness of patients’ rights; and the growth of the disciplines of bioethics and medical law. Clinical ethics is a major subdivision of the discipline of bioethics and its importance for Australasian philosophy is most clearly evidenced in the appearance of a number of institutional centres specialising in research and teaching in clinical ethics. The first Australian research centre in bioethics was the Monash Centre for Human Bioethics, which was established by Peter Singer and Helga Kuhse in 1980 (Oakley 2006). Centres currently engaged in clinical ethics research are listed under ‘Institutional Centres’ at the end of this entry.

Australian academic journals that publish research in clinical ethics include: Monash Bioethics Review, Journal of Bioethical Inquiry, and Medical Journal of Australia.

The subject matter of clinical ethics overlaps with other areas of applied ethics. In particular, clinical ethics overlaps substantially with human research ethics, not least because most medical research subjects are also patients. This is important because developments in research ethics have exercised a considerable influence on the development of clinical ethics as a discipline. In Australia, as in most Western countries, the establishment of formal mechanisms for the ethical oversight of medical research began in the 1960s; these mechanisms were originally based on the Declaration of Helsinki, adopted by the World Medical Association in 1964. The peak body for medical ethics in Australia is the Australian Health Ethics Committee (AHEC). It was established by the National Health and Medical Research Council Act 1992 and its primary functions are to advise the National Health and Medical Research Council (NHMRC) on ethical issues relating to health and to develop guidelines for human research. The NHMRC Act specifies the composition of AHEC, whose twelve members must include a person with expertise in philosophy and a person with expertise in medical ethics.

In Australia, proposed projects involving human research must first be reviewed and authorised by a Human Research Ethics Committee (HREC), which assesses experimental protocols against the requirements of the National Statement on Ethical Conduct in Research involving Humans (NHMRC 2007). Clinical ethics, by contrast, is concerned with situations arising in the course of medical treatment, which are often unanticipated. Yet many of the issues that are central to research ethics, such as informed consent, are equally central for clinical ethics, and the conceptual and institutional developments in research ethics over the last forty years have had a considerable effect on how the same issues are approached in clinical ethics. For instance, current standards in informed consent procedures for patients undergoing treatment in hospitals owe a great deal to the scrutiny that informed consent has received in the evaluation of research proposals, and to the evolution of research codes of practice.

While human research is now subject to a legally enforced national system of ethical evaluation, there are currently no standardised ethical review procedures for clinical decision-making. However, recent years have seen the establishment of clinical ethics committees in some hospitals, whose role is to provide advice and support to doctors, patients and their families faced with difficult medical dilemmas. Examples of the kind of dilemmas that clinical ethics committees address include withdrawal and withholding of treatment, refusal of treatment, advance directives, high-risk treatments, contested resource allocation and late-term abortions. They may also have a policy formation role and an ethics education role within their institution. The structure, functions and procedures of clinical ethics committees tend to be developed internally in hospitals where such committees are set up, and thus differ from institution to institution. Hospitals are not required to form clinical ethics committees and, although accurate current figures are not available, a study in the early 1990s found that around 16% of hospitals had a clinical ethics committee (McNeill 2001). It has been argued that clinical ethics consultation has a valuable role to play in hospitals, and even when ethics consultation does not produce consensus it serves an important function in communicating what is or is not accepted practice, in improving transparency and in avoiding unacceptable decisions (Gill et al. 2004). The New South Wales Department of Health has created a clinical ethics advisory panel whose role is to advise NSW Health on clinical ethics issues and to provide guidance and support for hospital clinical ethics committees. It remains to be seen whether the goal of developing an institutional framework and national guidelines for clinical ethics consultation will be adopted more broadly in the Australian health care system in the future.

Legislation

The Acts Amendment (Advance Health Care Planning) Bill 2006 (WA)

Civil Law (Wrongs) Act 2002 (ACT)

Civil Liability Act 1936 (SA)

Civil Liability Act 2002 (NSW)

Civil Liability Act 2002 (Tas)

Civil Liability Act 2002 (WA)

Civil Liability Act 2003 (Qld)

Consent to Medical and Palliative Care Act 1995 (SA)

Guardianship Act 1987 (NSW)

The Health Services Act 1988 (Vic)

Infertility (Medical Procedures) Act 1984 (Vic)

Medical Treatment Act 1988 (Vic)

Medical Treatment (Health Directions) Act 2006 (ACT)

National Health and Medical Research Council Act 1992 (Cwlth)

Natural Death Act 1988 (NT)

Personal Injuries (Liabilities and Damages) Act 2003 (NT)

Powers of Attorney Act 1998 (Qld)

Privacy Act 1988 (Cwlth)

Prohibition of Human Cloning for Reproduction Act 2002 (Cwlth)

Regulation of Human Embryo Research Amendment Act 2006 (Cwlth)

Reproductive Technology Act 1987 (SA)

Wrongs Act 1958 (Vic)

Institutional Centres

Cognitive Science

Peter Slezak

Judged by the usual institutional criteria, cognitive science has established itself internationally as a self-conscious scientific research field with academic departments, conferences, journals, societies and textbooks. The Journal of Cognitive Science was first published in 1976, and the U.S. Cognitive Science Society with its annual conference was inaugurated shortly afterwards in 1979. In Australia, the first Centre for Cognitive Science and graduate degree program were established in 1987 at the University of New South Wales, followed by degree programs at the University of Western Australia, Flinders University, University of Queensland, Monash University and La Trobe University. The Macquarie University Centre for Cognitive Science (MACCS) was established in 2000 and the Australian National University (ANU) Centre for Consciousness (located within the Philosophy Program in the ANU Research School of Social Sciences) was set up in 2004.

The inaugural conference of the Australasian Society for Cognitive Science was held at the University of New South Wales in 1990 and, although the formal Society lapsed a year later, the conference series has continued on a regular biennial basis. (University of Melbourne 1993, University of Queensland 1995, University of Newcastle 1999, University of Melbourne 2000, University of Western Australia 2002, University of New South Wales 2003, University of Adelaide 2007, Macquarie University 2009.) Papers from these conferences inaugurated the series, Perspectives on Cognitive Science (Slezak, Caelli and Clark 1995; Wiles and Dartnall 1999), with subsequent volumes oriented towards Australasian philosophy in cognitive science (Clapin, Staines and Slezak 2004; Hetherington 2006).

Pylyshyn’s (1984) landmark Computation and Cognition envisaged cognitive science becoming a scientific domain like biology or geology, based on a proprietary vocabulary and autonomous explanatory principles. The unifying principle would take cognition to be literally a type of computation. In the wake of these developments, there was a widespread shift in philosophy from conceptual analysis to a form of theorising that is not clearly distinct from scientific inquiry. Patricia Churchland (1986: ix) expressed her early impatience with ‘most mainstream philosophy’ and her turn towards a promising ‘new wave in philosophical method’ that ‘began in earnest to reverse the antiscientific bias typical of “linguistic analysis”’. Dennett (1978), too, reflected on this shift from ‘modest illuminations and confusion-cures’ to seeing philosophy of mind as ‘a branch of philosophy of science, that dominates the best work in the field today’, taking an interest in the theories and data of relevant disciplines such as psychology, the neurosciences, artificial intelligence and linguistics.

Identifying Australian philosophical contributions to cognitive science in this vein confronts two difficulties. First, the question of who is to count as an Australian philosopher is more or less arbitrary and irrelevant to the content of his or her work. Among the founders of Australian Materialism, both U. T. Place and J. J. C. Smart were of British origin. Place lectured at the University of Adelaide for only four years from 1951 to 1954, where his brain now resides in a jar—perhaps sufficient credentials for qualifying as an Australian philosopher. Many Australians have gained higher degrees at universities overseas, expatriate Australian philosophers may be found in academic positions around the world, and many non-Australians occupy positions in philosophy at Australian universities.

A second difficulty concerns the very conception of philosophy within cognitive science. To be sure, as Fodor (1998) has noted, some philosophers still tend towards aprioristic or ‘ordinary language’ views that accord philosophy a primacy, even in defining scientific inquiry. For example, Woodfield (1982: ix) suggests that ‘the whole subject is built upon a realisation that philosophers can contribute more by investigating discourse about mental states than by investigating mental states themselves’. Despite the shift in concerns with the so-called ‘cognitive revolution’, it would be invidious to make a strict distinction between traditional philosophy of mind and cognitive science. For example, D. M. Armstrong’s (1968) seminal contribution to the materialist conception of mind antedates cognitive science but is undoubtedly a precursor and foundation of these developments in philosophy.

More recently, David Chalmers’ (1996) influential work has ranged across AI, meaning and modality, though his work on the ‘Hard Problem’ of consciousness is less concerned with theories or empirical work in cognitive science and his method of conceivability may be found in Descartes’ argument for dualism. Other Australian philosophy of mind has also been more or less independent of research in empirical disciplines of cognitive science (e.g. Stoljar 2001, 2005, 2006; Ismael 2007). Jackson’s (1982) famous paper on epiphenomenal qualia and the Mary ‘knowledge argument’ deals with venerable philosophical worries (see Stoljar and Nagasawa 2004). Qualia, folk theories and meaning have also been discussed by Braddon-Mitchell (2003), and even the notorious Gettier Problem may be discerned at the heart of questions concerning mental representation and externalist conceptions of meaning (Hetherington 2007).

Empirical research in cognitive science often rehearses traditional philosophical puzzles in a new guise, leading Fodor (1994) to quip that ‘cognitive science is where philosophy goes when it dies’. Notable among persistent philosophical problems has been the ‘Imagery Debate’ (Kosslyn 1994; Pylyshyn 2003), described by Block (1981) as among the ‘hottest topics in cognitive science’, even though it was familiar to Descartes. ‘Crucial experiments’ designed to refute the ‘pictorial’ account (Slezak 1992, 1995) in favour of what has been pejoratively termed Pylyshyn’s ‘philosophical’ theory have not resolved the debate.

The history of science and philosophy must be included within the scope of our topic, as Sutton (1998) demonstrates in his Philosophy and Memory Traces, significantly subtitled Descartes to Connectionism. Sutton’s work has extended across historical antecedents of cognitive science. (See also Gaukroger, Schuster and Sutton eds 2002; and Slezak 2006.)

The formalisms of Chomsky’s generative linguistics have given rise to persistent philosophical controversy about the enterprise and especially about the ‘psychological reality’ of grammars (D’Agostino 1986; Devitt and Sterelny 1987, 1989; Stone and Davies 2002; Devitt 2006; Slezak 1981, 2009). Chomsky’s theories also gave new impetus to the traditional philosophical debate concerning ‘innate ideas’ and the ‘blank slate’. Chomsky’s central argument in favour of a Universal Grammar cites the ‘poverty of the stimulus’—the claim that we know so much based on so little evidence—but remains widely disputed, notably by Cowie (1997, 1998).

The representational theory of mind has been discussed by Sterelny (1990), Clapin (2002), Slezak (2002) and Godfrey-Smith, who has also written on folk psychology (2004a, 2005a), evolution of cognition (1996, 2005b) and functionalism (2009b).

Classical, symbolic approaches to computing, Alan Turing and artificial intelligence have been discussed by Copeland (1993, 2004); Gödelian arguments against artificial intelligence by Slezak (1982), and computation and creativity by Dartnall (2002).

Connectionist models have been discussed by Davies (1991) and van Gelder (1990, 1992), and neurocomputational models of cognition and consciousness by O’Brien and Opie (1999, 2002, 2006). Computational views of cognition taken to include both classical symbolic and connectionist models have been challenged by dynamical systems, conceived as having numerically describable states that evolve over time rather than representational states governed by rules (van Gelder 1995; Port and van Gelder 1995).

Australasian philosophy oriented towards experiments and theories in cognitive science includes work on delusions and neuropsychology (Bayne 2008; Bayne and Fernandez 2008; Davies et al. 2001, 2003); on functions, content and biological approaches to mental representation (Neander 1991a, 1995a, 2006); on emotions (Griffiths 1997; Griffiths and Scarantino 2008); and on developmental and evolutionary psychology (Griffiths 2007, 2008; Sterelny 2003; Sterelny and Fitness eds 2003). Other empirical topics include memory (Sutton 2004, 2007, 2008a, 2008b), distributed cognition (Sutton 2006) and dreaming (Sutton 2009), mental states and representation (Khlentzos 2004, 2007), and mental causation, folk theory and experience (Menzies 2003, 2007b, 2008).

Conditionals

Stephen Barker

As Callimachus wrote, ‘Even the crows on the roofs caw about the nature of conditionals’. Antipodeans have been particularly crow-like in their intense cawing about if, producing thereby some quality philosophy. Linguistically, conditionals are expressed, paradigmatically, by if-sentences. Not every if-sentence is deemed to express a conditional. For example, Austin’s biscuit conditionals might be denied such status despite their name: if you want some, there are biscuits in the sideboard offers no logical relation between antecedent and consequent. Some conditionals are expressed without if: No bomb no war or Were she to go, there would be a battle. The study of conditionals is an interplay between purely semantic and syntactic considerations.

We can discern families of conditionals: singular as opposed to general, and, within the class of singular, indicatives as opposed to counterfactuals. People usually have in mind declaratives: non-declaratives, such as if-imperatives or if-interrogatives, are not often considered. General if-sentences are occasionally considered. Few attempts are made to offer unified treatments across these categories, let alone a unified treatment of if. But this tendency to the piecemeal is not restricted to southern shores. My review respects the boundaries in its examination of antipodean meditations upon the curiosities of if.

Indicative/Counterfactual/Subjunctive Taxonomy

A debate about conditionals in which the cawing reached fever pitch was waged, largely in the 1990s, over taxonomy. Let’s begin there. Consider the three if-sentences:

  1. If Oswald didn’t shoot Kennedy, someone else did.
  2. If Oswald had not shot Kennedy, someone else would have.
  3. If Oswald does not shoot Kennedy, someone else will.

The original idea (from the non-Antipodean Adams 1975) is that (1) is in a different semantic class from (2): the first is an indicative, the second a counterfactual. The first is true, given that someone shot Kennedy; but (2), on a view that rejects a conspiracy about Oswald’s actions, is false. The indicative is assertable merely on the belief that someone shot Kennedy, if this belief is in no way dependent on the belief that Oswald was the assassin. What then of (3)? Tradition places (3) with (1), appealing to the supposed sameness of indicative mood. Vic Dudman in a series of papers (including 1984, 1991, 1991a, 1994, 2000) places (3) with (2), disputing the whole idea of indicative mood, and offering an alternative analysis of tense and time in conditionals. (Gibbard’s (1970) work needs recognition here.) Dudman’s work goes beyond issues of classification, offering novel insights into matters of deep structure. Those who have joined the fray, in both direct and indirect ways, are Frank Jackson (1987), Bennett (1988, 1995), and Barker (1996). See also Ellis (1984), building on Ellis (1979), and Jackson (1984a) for a separate debate about taxonomy.

Indicatives

Deep concern about the material implication analysis of indicatives has spawned an array of theories about indicatives. J. L. Mackie (1973) proposes a conditional assertion theory: the view that an indicative if P, Q involves an assertion of Q in the scope of a supposition of P. Perhaps the best-known work by an Australian is Jackson’s Conditionals (1987). This work mainly discusses indicatives—it embraces David Lewis’ (1973) possible worlds treatment of subjunctives. Jackson holds that the material implication analysis of conditionals is an adequate treatment of truth-conditions for indicatives. But the solution for him is to add more, not to take away. For Jackson, if P, Q is true if and only if (P ⊃ Q). But indicative if P, Q carries a further meaning: a conventional implicature about the speaker’s subjective probability state, which boils down to the conditional probability of Q given P—Pr(Q/P)—being high. That enables Jackson to explain what he takes to be the data about assertability of indicatives—Adams’ Thesis:

 

Adams’ Thesis: if P, Q is assertable to the degree that Pr(Q/P) is high.

 

Jackson thinks that a Gricean conversational implicature analysis cannot explain the data. This is disputed by Barker (1997). That Adams’ Thesis captures the data is disputed by Dudman (1992).

Looming large in the discussion of indicatives are the triviality results of Lewis (1976). These assume Adams’ Thesis. Triviality arises under the assumption that:

 

Stalnaker’s Thesis: Pr(if P, Q) = Pr(Q/P)

 

Given Adams and Stalnaker we get a contradiction. These results have been developed by Hájek (1989). Jackson’s approach avoids the problem since he denies Stalnaker’s Thesis: the probability of if P, Q is that of (P ⊃ Q).
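
To see the shape of the triviality argument itself (a compressed reconstruction, not Lewis’ own presentation, which is more careful about the class of probability functions involved): suppose Stalnaker’s Thesis holds not just for Pr but for every function obtained from Pr by conditionalisation, and that Pr(P & Q) and Pr(P & ¬Q) are both positive. Expanding by the law of total probability:

Pr(if P, Q) = Pr(if P, Q / Q) × Pr(Q) + Pr(if P, Q / ¬Q) × Pr(¬Q)
= Pr(Q / P & Q) × Pr(Q) + Pr(Q / P & ¬Q) × Pr(¬Q)
= 1 × Pr(Q) + 0 × Pr(¬Q) = Pr(Q).

Since Stalnaker’s Thesis also says that Pr(if P, Q) = Pr(Q/P), it follows that Pr(Q/P) = Pr(Q) for arbitrary P and Q: every pair of propositions comes out probabilistically independent, which only trivial probability functions allow.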

Barker (1995) offers a theory of conditional assertion, but a non-standard account that extends to non-declaratives, which is developed further in (2004). McDermott (1996) offers a conditional assertion theory of indicatives, related to ideas developed by Belnap.

Nolan (2003) returns to a possible worlds defence of indicatives. Weatherson (2001) uses possible worlds with a two-dimensional modal logic to explain both indicatives and counterfactuals. He changes his mind in (2009), offering indexical relativism to explain open indicatives.

Australia has taken conditionals into the realms of alternative logics: see Routley (1982) and Priest (1987). Ellis (1979) uses the dynamics of rational belief-change to explain both indicatives and subjunctives.

Counterfactuals and Subjunctives

The analysis of counterfactuals—often treated as subjunctives—is dominated by the conception laid down by almost antipodean Lewis (1973, 1978) and by Stalnaker (1968). This is the idea that a counterfactual is true if and only if the nearest P-worlds are Q-worlds. Nolan (2005) gives a good introduction to the framework and the broader context of Lewis’ philosophy. This is opposed to the metalinguistic approach according to which a counterfactual is true if and only if P and cotenable premises entail Q or probably Q, and so on.
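
In one standard formulation of the Lewis semantics (an informal restatement rather than Lewis’ exact clause), writing P □→ Q for ‘if it were that P, it would be that Q’:

P □→ Q is true at a world w iff either (i) no possible world makes P true, or (ii) some world at which P and Q both hold is closer to w than any world at which P holds and Q fails.

Clause (i) makes counterfactuals with impossible antecedents vacuously true; clause (ii) is framed so as not to assume that there is a set of closest P-worlds (the so-called ‘Limit Assumption’).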

Most work by Australasians buys into Lewis’ proposal—some reject it. Those who accept it are still keen to modify it. Jackson (1977a) argues that overall similarity cannot be right: similarity must be restricted temporally. So does Bennett (1984). McDermott (1999) presents an interesting idea about worlds and middle knowledge. The framework assumes primitive access points. Huw Price (1996) argues that the asymmetry is perspectival, and is intimately connected to the temporal orientation of agency.

Dudman (1994), the self-styled maverick of conditionals, has much to say against the possible worlds approach, doubting the analysis of antecedents and consequents upon which it is based—see Barker (1996) and Dudman (1996) for a response. Braddon-Mitchell (2001) invokes a slight modification of Lewis’ similarity semantics in terms of lossy laws to deal with miracle-semantics. Barker (forthcoming) argues that the whole possible worlds approach is a mistake.

These modifications relate to an ongoing issue about counterfactuals in indeterministic contexts. The problem of so-called Morgenbesser betting counterfactuals—assertions of If I had bet on heads, I would have won, where an indeterministic coin has landed heads—is examined by Bennett (1984, 2004), Barker (1998, 1999) and Pavel Tichý (1976a).

Barker (1999) offers a metalinguistic approach that deals with probabilistic counterfactuals, arguing that possible worlds approaches cannot deal with these. Hájek (unpublished) argues that most would-counterfactuals are false, whether in indeterministic or deterministic contexts, due to the conflict between would-counterfactuals and might-not-counterfactuals.

Other Kinds of ‘If’

General ‘if’ is remarkably neglected. See Barker (1997, 2004). A theory of only if, and the constraints it places on theories of if, is given in Barker (1993). Even if is examined in Bennett (1982, 2004), Hazen and Slote (1989), Jackson (1987), and Barker (1994). A theory of non-declarative ifs and so-called ‘biscuit conditionals’ (as dubbed by Austin) is given in Barker (1995).

Conditionals and Broader Issues

There are broader philosophical issues that are crowed about in relation to conditionals, in particular counterfactuals in relation to causation, disposition, law and time. Martin (1994) attacks the conditional analysis of dispositions. Price (1996) addresses the issue of time and counterfactual dependence. Handfield addresses counterfactuals under the assumption of determinism (2001), and counterlegals and dispositional monism (2004). Chalmers and Hájek (2007) offer a seeming paradox arising from Moorean paradoxicality and the belief revision model of conditionals. An in-depth discussion of conditionals in relation to probability theory, game theory and decision theory is given by Hájek (2002).

Consciousness

Jon Opie

Understanding consciousness and its place in the natural world is one of the principal targets of contemporary philosophy of mind. Australian philosophers made seminal contributions to this project during the twentieth century which continue to shape the way philosophers and scientists think about the conceptual, metaphysical and empirical aspects of the problem. After some scene setting, I will discuss the main players and their work in the context of broader developments in the philosophy of mind.

Towards the end of the nineteenth century, scientific psychology set itself the task of systematically exploring the mind, understood as the conscious activity that accompanies perception and thought. Labs in Germany and the U.S. began the tedious work of determining the structure of experience via the reports of trained subjects operating under carefully controlled stimulus conditions. The hope was that the phenomena revealed by this means might eventually be correlated with activity in the central nervous system.

Many philosophers considered this project misguided. The logical positivists, who insisted that a statement is only meaningful if one can specify observable conditions that would render it true or false, rejected the view that psychological predicates such as ‘pain’ have any subjective content. A statement like ‘Paul has a toothache’ is merely an abbreviation for a list of physical events (such as Paul weeping, Paul’s blood pressure rising, etc.) which collectively exhaust the meaning of the statement (Hempel 1980).

Ryle (1949) and Wittgenstein (1953) regarded the so-called ‘mind-body problem’ as the result of a misuse of ordinary language. According to Ryle, it is a ‘category mistake’ (1949: 16) to treat the mind as part of the body, because psychological and physical language follow different rules. The former provides a mentalistic shorthand for characterising behavioural dispositions, but does not pick out the internal causes of behaviour (unlike physiology).

It was in this climate that U. T. Place and J. J. C. Smart, both working at the University of Adelaide, first proposed their pioneering idea that conscious states and processes are none other than states and processes of the brain: the so-called ‘identity theory’. In philosophy circles this was widely regarded as an outlandish proposal. One English philosopher is said to have reacted: ‘A touch of the sun, I suppose’ (reported in Armstrong 1993: xiii). Place, who first proposed the theory, thought the dispositional analysis of mental concepts such as ‘believing’ and ‘intending’ was sound, but claimed that there is ‘an intractable residue of concepts clustering around the notions of consciousness, experience, sensation and mental imagery, where some kind of inner process story is unavoidable’ (1956: 44). He emphasised that the identity theory is not an analysis of statements about sensations into statements about the brain, but a defeasible scientific hypothesis (ibid.: 45).

Smart, initially a skeptic (see his 2008), soon came to the theory’s defence. Following Place, he compared the identity of sensations and brain processes to the relationship between lightning and electrical discharges. The latter is not a matter of definition; one can understand statements about lightning without any knowledge of electricity. Nor is it a matter of lightning and electrical discharges being contemporaneous and co-spatial (as when two gases are mixed). Rather, it is an ‘identity in the strict sense’ (Smart 1959b: 145). Lightning is nothing more than an electrical discharge. Likewise, the identity theory asserts that sensations are not merely correlated with brain processes, as psychologists had supposed; they are one and the same thing.

To the objection that we attribute different kinds of properties to experiences and brain processes—sensations are private, brain processes are public; sensations can be intense or unpleasant, brain processes cannot—Smart responded that our linguistic conventions are not fixed, but will undoubtedly change with future science. We may one day be able to state objective physiological criteria for the application of expressions such as ‘Smith experiences a strong sweet taste’, and will have no qualms about describing experience in physical terms (ibid.: 152–3).

To the objection that ‘raw feels’ such as the yellow of a lemon, or the sweetness of sugar, are irreducibly mental, Smart offered an account of colour, taste, etc., as ‘powers … to evoke certain kinds of discriminatory responses in human beings’ (ibid.: 149). Such powers belong to the objects we perceive, not to our sensations. Thus, in describing sugar as ‘sweet’ we are not referring to a non-physical quality of a sensation, but to a power of sugar to produce certain effects in us. After-images are a problem here, since they have no object. However, Smart noted that a report such as ‘I see a yellow after-image’ can easily be expressed in a topic-neutral way (i.e. in terms that are neutral between materialism and dualism), for example: ‘Something is going on with me that is like what goes on when I look at a lemon’ (ibid.: 148–50).

An important player in these early developments was C. B. Martin, who was at the University of Adelaide between 1954 and 1966. Although Martin did not publish a great deal at the time, his influence is frequently acknowledged by Place and Smart (see Place 1989 for an account of Martin’s contribution to ‘Is Consciousness a Brain Process?’, and Martin 2007 for an overview of his distinctive approach to dispositions, emergence, and mind).

D. M. Armstrong (1968, 1977) and Brian Medlin (1967) were part of a second wave of Australian identity theorists. They extended the theory by offering a causal analysis of mental states as ‘[states] of the person apt for bringing about a certain sort of behaviour’ or ‘apt for being brought about by a certain sort of stimulus’ (Armstrong 1968: 82). A desire for food, for example, is a state of a person that typically produces food-seeking and food-consuming behaviour; a belief that it is raining is a state that is typically caused by rainfall, and so on. Both Armstrong and Medlin argued that the states which play these roles in us are states of the brain. Their account, known as ‘central-state materialism’ (Feigl 1967), differs from the Place/Smart theory in that it identifies all mental states, not just conscious ones, with brain states.

Consciousness appears in two guises in Armstrong’s work: as introspective awareness, and as the qualities of sensations and mental images. Armstrong regards introspection as analogous to perception, but whereas perception informs us about objects in the physical environment, introspection is a kind of inner sense whereby we acquire information about our own mental states. It produces these special states of awareness via some kind of self-scanning process in the brain (1968: 323–38). As remarked above, perception presents us with the problem of qualia, Locke’s secondary qualities. Like Smart, Armstrong argued that these apparently qualitative features of experience do not belong to conscious states at all, which are ‘transparent’ (1993: xxii). What our perceptual states reveal, when they don’t deceive us, is certain complex micro-physical properties of external objects (1968: 270–90).

The causal analysis of mind, independently worked out by David Lewis (1966), contributed to the development of functionalism (Putnam 1967). Functionalism identifies a mental state with a causal role: the kinds of stimuli that produce it, the kinds of behaviour it produces, and the way it interacts with other mental states. One of the advertised strengths of functionalism is that it imposes very few limits on the nature of the physical states that can play such roles, thus allowing for mentality in organisms, or even machines, that are physically very different from us. Armstrong originally held that a given type of mental state is identical to a corresponding type of brain state (1993: xv). This ‘type-type’ theory is vulnerable to the possibility of mental states that are realised by more than one kind of brain state. Functionalism only insists that a mental state should be realised by some physical state or other—mental states are multiply realisable.

Despite its advantages, functionalism has come in for some serious flak. Frank Jackson, a self-confessed ‘qualia freak’ (1982: 127), is among a number of philosophers who have expressed dissatisfaction with physicalism in both its type-type and functionalist forms. The problem, as Jackson sees it, is that no amount of physical information about the structure, function, or causal history of brain states can capture the phenomenal qualities of experience. Consequently, physicalism must be false. In support of this claim Jackson asks us to imagine a neuroscientist, Mary, who has spent her whole life in a black and white room, her only access to the outside world provided by a black and white monitor. By assumption, Mary knows everything there is to know about the neurophysiology of colour vision, despite never having seen a coloured object. What will happen when she exits her room? Jackson takes it to be obvious that Mary will learn something new about visual experience, and thus that physicalism leaves something out (ibid.: 130). This ‘knowledge argument’ has generated a great deal of critical reaction (see, e.g. Churchland 1985, Lewis 1988). For his part, Jackson no longer buys the conclusion of the argument, on the grounds that it is self-defeating: if Mary learns something new, then qualia are non-physical; but if qualia are non-physical then they are causally inert, and can’t influence our beliefs; so Mary’s new qualia can’t possibly lead her or us to conclude that qualia are non-physical (Braddon-Mitchell and Jackson 1996: 134).

David Chalmers is another Australian philosopher who takes issue with earlier treatments of qualia. Chalmers divides the mystery of consciousness into an ‘easy problem’ and a ‘hard problem’. The easy problem is to explain how the brain processes stimuli, integrates information, and reports on our internal states. The hard problem is the further question: ‘Why is all this processing accompanied by an experienced inner life?’ (1996: xii). Chalmers believes that the hard problem goes beyond what can be explained in terms of the structural and dynamical properties of physical processes, because ‘the existence of my conscious experience is not logically entailed by my functional organisation’ (ibid.: 97). We can see this, he claims, by recognising the conceptual possibility of phenomenal zombies: creatures that are physically and functionally identical to us, but which lack experience. Although phenomenal zombies may not exist in our world, ‘the mere intelligibility of the notion is enough to establish the conclusion’ (ibid.: 96). Chalmers advocates what he calls ‘naturalistic dualism’ according to which conscious experience has phenomenal (or proto-phenomenal) properties that are not entailed by physical properties, but which may be lawfully related to those properties (ibid.: 124–9).

Jackson and Chalmers offer modal arguments for their anti-physicalist positions: they argue from certain possibilities—the existence of phenomenal zombies; that an omniscient scientist might know all the physical facts, yet learn something new via experience—to conclusions about experience. Daniel Stoljar has recently developed a general response to this style of argument. He defends what he calls ‘the ignorance hypothesis’ (2006: 6), the claim that we are ignorant of a type of non-experiential fact that is relevant to the nature of experience. Such ignorance undermines our capacity to imagine the scenarios described by Chalmers and Jackson (ibid.: 67–86). One simply can’t imagine a phenomenal zombie, for example, if one is not in possession of all the relevant facts. One can imagine an organism that lacks experience even though it is identical to us in all the non-experiential respects we know about. But this is no zombie, because our imagining has perforce omitted some of the physical facts (those of which we are ignorant). Although Stoljar doesn’t offer a positive account of such facts, he argues for the plausibility of the ignorance hypothesis on the basis of historical precedent and general observations about our epistemic situation (ibid.: 87–141).

O’Brien and Opie (1999), swimming against the functionalist tide, defend a connectionist approach to consciousness. Their connectionist vehicle theory identifies conscious states with stable patterns of firing in the brain’s many neural networks. Connectionists argue that such firing patterns are a crucial class of representing vehicles in the brain, whose interactions are the causal basis of cognition (Rumelhart, McClelland and the PDP Research Group 1987). O’Brien and Opie’s account thus does justice to both the representational role of consciousness and its causal impact on behaviour. It is an identity theory in the original sense, because it identifies phenomenal consciousness with a particular type of neural activity. The theory explains consciousness not in terms of what the brain’s representing vehicles do, as a functionalist would, but in terms of what they are (1999: 138). The prospects for a vehicle theory of consciousness depend on the ability of disciplines such as cognitive neuroscience and psychophysics to establish a detailed mapping between the rich, multi-layered structure of consciousness, and the equally rich, multi-level organisation of neural activity, as envisaged by early experimental psychologists.

 

(Thanks to Greg O’Hair and Philip Gerrans for a number of helpful comments and suggestions.)

Consciousness, Metaphysics of

Ole Koksvik

Drinking a glass of cold water on a hot day feels a certain way. It is hard not to wonder why it feels that way, and indeed why it feels any way at all. In an influential and evocative way of speaking, a being is conscious just in case there is something it is like to be it (Nagel 1974). Similarly, a mental state is conscious just in case there is something it is like to be in that state. When there is something it is like to be a being, we say that that being has phenomenal experience, and when there is something it is like to be in a certain mental state, we say that that state has phenomenal properties, or a phenomenology. (This notion of consciousness contrasts with access consciousness [Block 1997], with which we shall not be concerned here.)

There are many interesting philosophical questions about phenomenal consciousness. For example: What is the relation between a creature’s being conscious and a mental state’s being conscious (Van Gulick 2009: section 2)? How is the overall phenomenology of a conscious being related to the more specific phenomenal experiences that being has (Bayne and Chalmers 2003)? What is the relationship between (phenomenally) conscious mental states and mental states that represent the world as being a certain way (Siewert 1998: ch. 8; Horgan and Tienson 2002; Chalmers 2004b; Crane 2001: ch. 3; Pitt 2004)? Do all conscious mental states have content, and if so, of what kind (Siegel 2008)? Do we stand in a special relation to our own conscious states, a relation, e.g. of special authority, incorrigibility or infallibility (Macdonald 1995; Gertler 2008; Williamson 2000: ch. 4)?

Australasian philosophers have made important contributions to the philosophical understanding of consciousness in many areas, but there is insufficient room here to discuss them all in appropriate detail. Accordingly, a narrower focus is adopted: this entry shall be exclusively concerned with the philosophical treatment of certain metaphysical questions about consciousness, in the analytic tradition. (Even with this narrower focus, there are inevitably many regrettable omissions and simplifications.) What kind of thing is consciousness, and how does it fit in with the rest of the world? Australasian philosophers’ attempts to answer these questions have been enduringly influential.

The metaphysical questions about consciousness are questions to which one’s initial puzzlement about consciousness can quickly lead. They are also questions which have become absolutely central to the philosophy of mind, both in Australasia and worldwide. Why is that? An important part of the explanation is that apparent progress toward showing how mental states such as beliefs and desires fit into the rest of the world often seems incapable of being generalised to also show how consciousness fits in. Consciousness has thus come to occupy much the same role as was previously occupied by a more general concept of mind: it stubbornly resists explanation. One way to view the positions discussed below is as attempts to address this unsatisfactory situation.

The Identity Theory

The identity theory of the mind was developed at the University of Adelaide by U. T. Place, an English psychologist who was a lecturer there from 1951 to 1954. Place was strongly influenced by discussions with J. J. C. Smart (on whom more below) and C. B. Martin. Martin was an emergentist, not a materialist, but despite differences in views, his influence on the philosophers who interacted with him, in Adelaide (1954–66), Sydney (1966–71) and elsewhere, is widely acknowledged.

Place (1956) argues that a reasonable scientific hypothesis is that the ‘intractable residue’ of conscious experience is identical with processes in the brain. While the metaphysical independence of (kinds of) entities can often be inferred from the logical independence of statements about them, this is not always so, and conscious states and brain processes constitute one of the exceptions. (On this inference, see also Putnam 2002b: 73–4.) In general, commonsense observations and scientific observations should be taken to be observations of the very same phenomenon whenever the latter, together with relevant theory, provide ‘an immediate explanation’ of the former (Place 1956: 48). That is precisely what Place expects to see as our understanding of the brain advances: patterns emerging in the study of brain processes will eventually allow us to explain all our introspective observations.

J. J. C. (‘Jack’) Smart, born in Cambridge and educated at the Universities of Glasgow and Oxford, held a chair at the University of Adelaide from 1950 to 1972. Originally a behaviourist, he was convinced by Place (and also influenced by Feigl 1958) to adopt the identity theory. Smart (1959) argues that we must either understand conscious mental states as ‘nomological danglers’ (the term is due to Feigl) or identify them with brain processes. We have, he says, good reasons to reject nomological danglers but no good reason to reject the identification, so the identification should be accepted. (Smart spends the best part of the paper replying to objections, including objections from considerations to do with ordinary language; with respect to the latter Martin, who was staunchly opposed to ordinary language philosophy, may have been influential.) Nomological danglers are epiphenomena, caused but not themselves causally efficacious. Smart’s objection to such entities seems to be based, first, on a denial of the very possibility of entities connected to the rest of the causal machinery of the world in this way, and second on the view that if such entities did exist, the laws relating them to the rest of the world would be strange, because they would relate microphysical objects with macroscopic phenomena.

The latter thought is presumably motivated by ‘unity of nature’ considerations: we find that the laws that do the real explanatory work elsewhere relate small entities to other small entities, so that is what we should expect where the mind is concerned as well. Whether this adds much independent weight to an already monistic outlook is perhaps doubtful: if, for independent reasons, consciousness is regarded as truly unique, the appearance of unusual laws relating it to other parts of the world should not be surprising.

Another aspect of Smart’s article is worth noting, because it may partially explain why it became so influential, even though it was largely concerned with defending a claim already made by one of his colleagues. U. T. Place had called the sense of identity he employed ‘the “is” of composition’ and had introduced it by means of analogies with cases such as someone’s table being an old packing case and someone’s hat being a bundle of straw (Place 1956: 45). This seems to leave at least some room for a distinction between the experience and the brain process: if four legs plus a tabletop compose a table, the result is usually taken to be six, and not five, distinct objects in total. In contrast, Smart insists that sensations and brain processes are strictly identical (Smart 1959b: 145).

The identity theory is often understood as claiming that when I experience pain, a certain (type of) brain process just is my experience. But other animals do, and extraterrestrial beings may, have brains that differ substantially from ours: they may not have brain processes of the same kind as mine (on a specification of kinds of brain processes narrow enough to yield sufficient variation in conscious states). Yet this does not seem to be a good reason to conclude that they do not feel pain, and it is compatible with having strong reasons to think that they do. If causing tissue damage to alien life forms causes them to retract the damaged body part, if they appear to be strongly opposed to having their tissue damaged and strongly motivated to bring about the cessation of the damaging process and to ensure that it is not repeated, etc., then it seems that we would have very good reason to think that they feel pain, and any knowledge of the aliens’ innards would not affect that. (Note that there is no reliance here on the claim that a certain repertoire of behaviour is all there is to pain.) Thus pain seems likely to be multiply realisable: it can be instantiated in brains quite different from ours, and perhaps even in systems we would not recognise as brains.

Functionalism

Functionalism is, roughly, the view that mental states are states whose identity or character is exhausted by the causal relations they stand in to (sensory) inputs, (behavioural) outputs and other mental states (Putnam 2002b: esp. 76; Levin 2009; Lycan 1994; Block 1994; on the relationship between functionalism and the identity theory, see Jackson 1998a: section 2 and Smart 2008: section 5).

Varieties of functionalism differ with respect to the information considered relevant for the individuation of mental states. On some views, the relevant information is available to everyone (at least all competent adults) in virtue of shared beliefs about how beliefs, desires, etc., respond to stimuli, interact with each other and result in behaviour. On other views, the pertinent information is that which has resulted or will result from scientific psychology (see Block 1994: 325). Australasian philosophy is associated especially with the former type of view, versions of which are defended by, e.g. D. M. Armstrong, David Lewis, David Braddon-Mitchell and Frank Jackson. Here I concentrate on the first two.

‘The concept of a mental state’, Armstrong argues, ‘is the concept of something that is, characteristically, the cause of certain effects and the effect of certain causes’, and this is all there is to our mental state concepts (1981: 21). Characteristic causes of mental states are other mental states as well as events and objects in a person’s environment; a characteristic effect is behaviour.

As is common with philosophical theories, functionalism is often put forward in sketch form: we are told in rough outline how the theory will analyse mental concepts and asked to trust (or to share the intuition) that the details can be filled in. One of Armstrong’s very significant contributions in A Materialist Theory of the Mind (1968) was his attempt to provide an analysis of a range of important mental concepts in considerable detail (see also his 1973 and 1981).

As one might expect, beliefs and desires play a central role. ‘[P]erception’, for example, ‘is nothing but the acquiring of true or false beliefs concerning the current state of the organism’s body and environment’ (1968: 209). A challenge for Armstrong’s position is that we are sometimes subject to known illusions, and in those cases we do not (generally) believe what we see. To account for this, Armstrong states that perception is either the acquisition of beliefs, or the acquisition of degrees of belief (credences) which are ‘held in check’ by stronger credences (1968: 221), or acquisitions of dispositions to believe (1968: 222–3). (See George Pitcher 1971: esp. 91–3 for a very similar account; Jackson 1977b: ch. 2 argues that these manoeuvres fail to salvage the theory.)

The key to seeing how the causal theory he advocates can encompass conscious experience is, Armstrong argues, to recognise that experience is transparent. (Armstrong is an early advocate of this thesis, which has received much attention of late.) We are not aware of properties of our experiences; what we are aware of are only the properties of the objects of those experiences. For example, in perception, the redness associated with certain perceptual experiences is to be understood as the redness of the perceived object (1981: 27–9). And this allows us, Armstrong thinks, to capture conscious mental states in the causal story: just as with all other mental states, a conscious state is that which has certain characteristic causes, like red objects in the environment (where ‘red’ is cashed out in terms of physical properties, such as surface reflectance profiles).

A similar account was developed independently (and published slightly earlier) by David Lewis. Lewis was American but had a strong association with Australia, due in large part to his close friendship with Smart. He visited Australia more than twenty times from 1971—when Smart had organised for him to give the Gavin David Young Lectures at the University of Adelaide—to 2001, and is now considered an ‘honorary Australian’, at least by Australian philosophers (see Weatherson 2009; Nolan 2005). ‘The definitive characteristic of any (sort of) experience as such’, Lewis argued, ‘is its causal role, its syndrome of most typical causes and effects … which belong by analytic necessity to experiences’ (1966: 17). One of Lewis’ distinctive contributions is his development of a general method for defining theoretical terms, which he then applied to mental state terms to yield an argument for the identification of mental states, including conscious states, with brain processes (see his 1970 for the development, his 1972 for a less technical presentation, and his 1995 for more details; the argument for the identification is in his 1966).

Start with a theory formulated by a long sentence (formed, perhaps, by conjoining the sentences that express the theory), ‘the postulate of T’: T[t1 … tn]. Replace each of the n terms which occur in that sentence with a new variable, x1 to xn, and then existentially quantify the result. This is the Ramsey sentence for the theory. It is silent on how many sets of entities stand to each other in the relations postulated by the theory; it claims only that at least one such set exists. Lewis argues, however, that theoretical terms are best understood as uniquely referring, or else as not referring at all. The introduction of a theory should be understood as claiming that there is exactly one set of entities which satisfy the theory, and so the modified Ramsey sentence, which states that the theory has a unique realisation, is what is of real interest: ∃!x T[x] (or, equivalently, ∃y∀x(T[x] ≡ x = y)).

This much follows, according to Lewis, from our concepts along with conventions for the introduction of new concepts (1972: 254, 1970: 439–40). Lewis argues that if empirical science discovers what actually uniquely realises the theory T, we are compelled, as a matter of logic, to identify the referents of our concepts with these realisers. So if, as he believes, a specification of conscious mental states in functional terms can be extracted from folk psychology, and if, as he thinks highly likely, physical science eventually isolates the unique realisers (or near enough realisers) of those specifications as neural states, then the identification of conscious experience with neural states is forced upon us (Lewis 1966).
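
A toy illustration may help (the pain example is a stock one in this literature, not an example from Lewis’ own text). Suppose folk psychology said only: tissue damage causes pain, and pain causes wincing. Then:

Postulate of T: tissue damage causes pain, and pain causes wincing.
Ramsey sentence: ∃x (tissue damage causes x, and x causes wincing).
Modified Ramsey sentence: ∃!x (tissue damage causes x, and x causes wincing).

If neuroscience were then to discover that C-fibre firing, and it alone, is caused by tissue damage and causes wincing, the identification of pain with C-fibre firing would, on Lewis’ view, follow as a matter of logic.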

Functionalism can be seen as a response both to objections to the identity theory and to objections to behaviourism. The multiple realisability of mental states, which presented a difficulty for the identity theory, seems to be permitted on functionalist theories. For example, pain is the occupant of a certain functional role in the mental organisational structure (or, perhaps, the second-order property of having that role occupied; see Lewis 1995: 419–21). Provided that a functional characterisation can be given which is general enough to encompass all creatures who plausibly feel pain, the unpalatable implication that creatures with brains that differ from ours do not experience pain appears to be avoided. (An implication may be that there is ambiguity in the concept of pain; see Lewis 1983a: esp. 128. However, for an argument which purports to show that functionalism is also vulnerable to the objection from multiple realisability, see Block 1994: 330.)

For at least simple versions of behaviourism, a difficulty is that no single behavioural disposition is associated with a belief: which behaviour a belief will bring about depends on which other beliefs the person has, as well as on the person’s desires (Geach 1957: 8). But according to functionalism, the array of causal relations which individuates a mental state includes relations with sensory input, behavioural output and other mental states. So even if some creatures suppress all pain behaviour (Putnam 2002a) and even if there are perfect actors, who imitate having an experience perfectly, functionalism would not force us to the mistaken conclusion that the creatures lack painful experiences or that the actor has them.

Functionalism seems to retain from behaviourism and the identity theory the virtue of offering a possible way of integrating the mind with the physical world (while also being compatible with their non-integration; see Block 1994: 326, 330): ‘if the concepts of the various sorts of mental state are concepts of that which is … apt for causing certain effects and apt for being the effects of certain causes, then it would be a quite unpuzzling thing if mental states should turn out to be physical states of the brain’ (Armstrong 1981: 21). One might again ask, however, whether that promise really extends to consciousness. Are conscious states individuated exhaustively by their functional roles?

Dualism

Powerful arguments to the effect that they are not are presented by Australian dualists. In his seminal article ‘Epiphenomenal Qualia’ (1982), Frank Jackson argues that facts about phenomenal experience are left out of all explanations restricted only to physical facts, even functional explanations. Through two thought-experiments Jackson presents his ‘knowledge argument’ for the view that physical information must leave something out. In one of them (for the other, see 1982: 130 and 1986) Jackson asks us to imagine that we encounter a person, Fred, who is capable of making a colour discrimination we cannot. For Fred, the things we classify as red fall into two groups, red1 and red2, as different from each other as yellow and blue are to us. Jackson argues that no amount of physical information, including functional information, will tell us what it is like to have the colour experience Fred has when he sees a colour that he, unlike us, can discriminate from the others. Therefore, there is more to know than what is encoded in physical information (1982: 128–30).

The knowledge argument had appeared, although not by that name, some fifty-five years earlier. John William Dunne argued that there is ‘a characteristic of red of which … all seeing people are very strongly aware’, such that a blind person who has been told all the physical facts still ‘would have not the faintest shadow of an idea that [seeing people] experience anything of the kind’ (1934: 15). However, Jackson’s vivid presentation of the argument generated a flurry of activity (see, e.g. Ludlow, Nagasawa and Stoljar 2004), and the fact that he himself no longer endorses the knowledge argument has not stopped the paper from remaining one of the most discussed and influential papers in recent philosophy of mind.

Another influential argument for dualism, often said to have revived the debate over dualism, is David Chalmers’ ‘zombie argument’ (1996: 94–9). Chalmers argues that it is logically possible that there be something—a phenomenal zombie—which replicates my physical makeup and (therefore) my functional organisation down to the most minute detail but which has no phenomenal experience. He claims that the possibility of a zombie twin shows that the facts about my functional organisation and physical makeup do not entail the facts about phenomenal experience. There is more to know than what the physical sciences can tell us. (For predecessors to the zombie argument, see Chalmers 1996: ch. 3, n.1. Chalmers in fact discusses five different arguments, which he regards as all pulling in the same direction.)
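
Schematically (a reconstruction, not Chalmers’ own notation): let P be the conjunction of all physical and functional truths about me, and Q some truth about my phenomenal experience. On one standard regimentation, physicalism requires that the physical truths necessitate the phenomenal truths: □(P ⊃ Q). The zombie scenario, if genuinely possible, gives ◇(P & ¬Q), which is just the negation of □(P ⊃ Q); the two claims cannot both be true. The steps from the conceivability of zombies to their genuine possibility, and from logical to metaphysical possibility, are where much of the subsequent debate has focussed, as the next paragraph notes.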

These arguments purport to show that what one can learn from one set of facts is not all there is to know. One might think that the metaphysical question about consciousness, whether or not consciousness itself is physical, is not immediately settled even if the knowledge and zombie arguments are successful. In particular, a popular thought has been that it might be metaphysically necessary that consciousness is a brain process even if it is conceptually or logically possible that consciousness is something else. Both Jackson and Chalmers, however, take their arguments to have the strong metaphysical consequence that conscious experience is not identical to physical or functional states or processes (Chalmers 1996: ch. 4, esp. 131–40, and Jackson 1986: 291).

An important predecessor to both these arguments is found in the work of Keith Campbell, who succeeded D. M. Armstrong as Challis Professor of Philosophy at the University of Sydney in 1991. Behaviourist and causal (functional) analyses, Campbell argues, leave out ‘the very thing which matters most about [conscious states]. Pains hurt: indeed that is their most salient feature’ (1984: 71–2), and, in general, these theories cannot capture what it is like to be an experiencing subject (104). Campbell also briefly discusses an ‘imitation man’, a being who, like Chalmers’ zombie, lacks phenomenal experience, but is functionally similar (in Chalmers’s case, there is functional identity) to experiencing subjects (100).

The views on the metaphysics of consciousness discussed in this entry all have contemporary defenders, and are all controversial. A view that has not been discussed is that there really are no phenomenal experiences at all (a view often associated with Dennett 1988, 1991). Whether that view ought to be taken seriously is a question not discussed here. For those who remain convinced that there are conscious experiences, however, the continuing debate over their nature is profoundly influenced by the contributions of Australasian philosophers.

There are many other important Australasian contributions to research on consciousness (here again I apologise for inevitable omissions). Responses to Jackson’s knowledge argument have been put forth by John Bigelow and Robert Pargetter (1990, 2006), Lewis (1990b), Cynthia Macdonald (2004), Philip Pettit (2004b), Denis Robinson (1993), and Jackson himself (2003, 2004a). The thesis that the zombie argument and similar arguments misleadingly seem plausible to us because we lack knowledge of some physical truth has been given detailed defence (Stoljar 2006), and the view that the zombie intuition guides meaning while being metaphysically idle has been explored (Braddon-Mitchell 2003). Phenomenal concepts, the content of phenomenal and intentional states, and our knowledge of that content have been investigated (Chalmers 2004b, 2005, 2003; MacDonald 1995). The lessons to be learned about consciousness from cognitive science are a focus of Australasian research (Hohwy and Frith 2004a, 2004b; Hohwy 2007; O’Brien and Opie 1999, 2000; Shea and Bayne forthcoming), as is the nature and phenomenology of perception (Bayne 2009; Fish 2008, 2009; Schellenberg 2010, forthcoming). Finally, the Centre for Consciousness at the Australian National University continues to be a hub of research and discussion of the many intriguing philosophical questions about consciousness.

 

(I am grateful to John Bigelow, David Chalmers, Daniel Stoljar, Weng Hong Tang and the editors for very helpful comments and discussion. I am, of course, solely responsible for the remaining shortcomings.)

Consequentialism

Simon Keller

Consequentialism says that morality is all about making the world better, or producing good consequences, or bringing about good states of affairs. Alongside deontology and virtue ethics, consequentialism is one of the three major approaches to normative moral theory. The most recognisable and historically significant version of consequentialism is classical utilitarianism, which says that the morally right act is always the one that produces the greatest balance of happiness over unhappiness, with the happiness of all individuals counting equally.

There are various ways in which a moral theory may depart from classical utilitarianism while still claiming to be consequentialist. It may agree that right acts are those that bring about good states of affairs, but not that better states of affairs are simply those containing greater quantities of happiness. It may concentrate not on the actual consequences of an act, but rather on the act’s expected or foreseeable consequences, or on the consequences of adopting the rule or policy under which the act falls. Or, it may concern itself not with acts, primarily, but rather with character traits, motives, persons, or something else, assessing these entities by assessing their consequences.

The more attention is paid to theories that differ from classical utilitarianism in such respects, the harder it is to draw a sharp division between consequentialist and non-consequentialist theory. Still, there is a distinctive consequentialist temperament, inherited from classical utilitarianism. A consequentialist begins with a view about how the world ought to be, and evaluates entities (whether acts, rules or something else) by asking how well they do at getting us there.

Australasia is more closely associated with consequentialism than with any other moral theory, and these days consequentialism is more closely associated with Australasia than with any other part of the world. Two of the most prominent recent proponents of classical utilitarianism, J. J. C. Smart and Peter Singer, are Australians, and Australasian philosophy has been at the forefront of efforts to develop consequentialism in its non-utilitarian forms—and also of efforts to refute it.

Classical utilitarianism has considerable appeal within the context of the respect for science and reductionist inclinations found among many Australasian philosophers. The theory is simple and elegant, its implications for particular cases are relatively clear, and it explains a wide variety of moral phenomena in terms of a single core value: the maximisation of happiness.

The theory also has straightforwardly moral attractions, which are the focus of the positive arguments in Smart’s seminal defence of utilitarianism (1973). Utilitarianism says that everybody, for the purposes of morality, counts equally, and it captures the credible thought that the whole point of morality is to make individuals better off. Also, as Smart is quick to point out, utilitarianism avoids the rule-worship or fetishism apparent in many of its competitors. Non-utilitarian moral theories are likely to imply that, sometimes, morality requires us to turn down opportunities to reduce misery or make individuals happier. But does a theory not look like a fixation, the utilitarian can ask, when it tells us not to put the best interests of individuals first?

Smart’s essay also faces up to utilitarianism’s central theoretical chore: the effort to stave off counterexamples. If it is a virtue of utilitarianism that its implications for particular cases are clear, its major weakness is that those implications are often counter-intuitive. To mention some of the many famous examples: utilitarianism implies that you should kill an innocent person for her organs, if you can use the organs to save the lives of several others; it implies that a pacifist ought to take a job manufacturing weapons, if by doing the job badly she can prevent others from doing it well; and it implies that it is a good thing to take pleasure in the suffering of others, if they are going to suffer anyway. The underlying problem for utilitarianism is that it grants no fundamental importance to certain moral concepts—respect, rights, virtue, integrity, autonomy—that play significant roles in ordinary moral thinking.

Smart gives a clear and honest expression of the standard utilitarian strategy for responding to such cases. First, utilitarians deconstruct the examples, trying to show that they are remote and unlikely, and that the utilitarian verdicts are hence not quite so unpalatable. Second, utilitarians downplay the importance of moral intuitions, suggesting that the opinions of the ‘man on the street’ should not be accepted as authoritative ethical data. Third, utilitarians style themselves as reformers, setting out to challenge and improve upon commonsense morality, not simply to reify it. Even the most committed utilitarians will admit, however, that their anti-utilitarian intuitions remain extremely difficult to shake.

Australasian consequentialists after Smart do not tend to focus directly on defending the utilitarian criterion of right action. Instead, they tend to pursue either the project of applying the broadly utilitarian perspective to questions in practical ethics, or the project of developing versions of non-utilitarian consequentialism.

The outstanding representative of the first project is Peter Singer. Without relying on utilitarian premises, Singer offers strong arguments for his claims that we are morally obliged actively to prevent significant harm, not just to refrain from causing it (Singer 1972); that there is no morally significant difference between the suffering of humans and the suffering of animals (Singer 1975); and that the value of a human life consists not in its sacredness or sanctity, but in its quality (Singer 1994a). These are all distinctive implications of utilitarianism, and the doctrine is strengthened, and certain objections withstood, when they are shown to be plausible in their own right.

When it comes to the second project—building non-utilitarian forms of consequentialism—much of the work carried out by Australasian philosophers is motivated by the worry that consequentialism, in its traditional forms, is too demanding. Consequentialism requires us to sacrifice anything of ours that can be used more efficiently elsewhere; we should not spend money on holidays, clothes or theatre tickets, for example, if that money could do more good in the hands of those worse off than ourselves. A version of this worry that has received particular attention among Australasian philosophers is the ‘nearest and dearest’ objection. Consequentialism, says the objection, asks us to treat everyone equally, and hence tells us that we are not permitted to favour other people in the ways that genuine love and friendship demand. How can I be a true friend, the objection asks, if I value all others only for their contributions to the general happiness, and if I am prepared to abandon my friends whenever I notice that I can produce more happiness elsewhere? (The most influential statement of this objection is Stocker 1976, written while Stocker was working in Australia, and one of the most sophisticated is presented by the Australians Dean Cocking and Justin Oakley (1995).)

The New Zealander Tim Mulgan argues that consequentialism can deal with the demandingness objection, but only if it offers different consequentialist accounts of different parts of morality. Mulgan (2001) presents ‘Combined Consequentialism’, which applies simple act consequentialism to the ‘realm of necessity’—the realm within which we deal with the needs of everyone—but a form of rule consequentialism to the ‘realm of reciprocity’—the realm within which we deal with the goals of those within our delimited moral community. (Mulgan’s treatment of the realm of reciprocity has some similarities to the view developed by the Australian Liam Murphy (2000).) Mulgan’s latest work extends his framework to cover questions about individuals in yet another moral realm: those yet to be born (Mulgan 2006).

Consideration of the ‘nearest and dearest’ objection has inspired several other suggested modifications to classical utilitarianism. Much of the relevant work has been done at the Australian National University. Robert E. Goodin (1995) suggests that we restrict the scope of utilitarianism, applying it to public life but not to personal relationships. Philip Pettit (1997b) shows how a consequentialist can be a pluralist about value, seeking to advance not only individual happiness, but also such values as peace, truth-telling, respect for property, wisdom and—presumably—love and friendship. Pettit’s view remains a form of consequentialism, because it instructs us to promote these values, even at the cost of failing to instantiate them in our own lives; if going to war will lead to long-term peace, you should go to war, and if betraying a friend will produce more and better friendships elsewhere, you should betray the friend.

Pettit also points out that there can be consequentialist reasons to value certain kinds of non-consequentialist thinking. A world in which we think as friends is probably better, Pettit says, than one in which we always think as consequentialists. Frank Jackson (1991) pursues a similar thought, saying that often the way to achieve a collective goal is for each member of the collective to focus on one part of the goal. Perhaps, then, the most effective strategy for improving the lives of everyone is for each of us to concentrate upon those we know and love best. If these suggestions hold, then we may find reasons to approve of the non-consequentialist psychology of friendship, in ourselves or in others, from a consequentialist perspective.

Michael Smith (2009) argues that the ‘nearest and dearest’ objection still stands, and presses for a further theoretical move. He suggests that we adopt a relativistic account of value, so that different individuals have different states of affairs at which to aim. Smith’s approach judges each act on its consequences, but it abandons what has often been seen as consequentialism’s defining characteristic: the telling of a single, objective story about the goodness of states of affairs, standing prior to any consideration of particular agents and circumstances.

Australasian consequentialism begins with classical utilitarianism, but the further it develops, the less prominent its concern with the utilitarian theory of right action becomes. It has two major ongoing contributions to offer. First, there is the development of distinctive and fruitful approaches to questions in practical ethics, influenced by a broadly utilitarian perspective. Second, there is the development of moral theories—call them ‘consequentialist’ or not—that stand as viable alternatives to deontology and virtue ethics. Classical utilitarianism may ultimately be impossible to defend, but its insights continue to yield important philosophical work, both practical and theoretical.

Cresswell, Maxwell J.

Edwin Mares

Maxwell John Cresswell was born on 19 November 1939 in Wellington, New Zealand. He went to Wellington College before attending Victoria College of the University of New Zealand, where he obtained a B.A. (1960) and an M.A. (1961). He then went to the University of Manchester on a Commonwealth scholarship, where he studied for a Ph.D. under A. N. Prior. He got his Ph.D. in 1963 and returned to Wellington to a lectureship at his alma mater, which had by then become Victoria University of Wellington. He was given a personal chair in 1974. He taught at Victoria until his retirement in 2000. He has taught part-time since at Texas A&M University and at the University of Auckland.

Cresswell is an extremely prolific writer. As of the writing of this article, Cresswell has written ten books (two of these are collections of his articles), and 128 articles or book chapters. His main fields of research are logic, philosophical semantics, and the history of philosophy (especially Greek philosophy, Locke, and Bradley). Here I will concentrate on Cresswell’s contributions to philosophical logic.

Modal Logic

Cresswell has made many important contributions to modal logic, but his greatest achievement is that, together with George Hughes, he wrote An Introduction to Modal Logic (1968). This book was the first modern introduction to modal logic. It is distinguished from its predecessors by including a lengthy treatment of possible world semantics and completeness proofs for the standard systems. It was extremely successful, and was by far the most widely taught and read textbook in the field for about two decades. Hughes and Cresswell published some supplementary material in their 1984 book, A Companion to Modal Logic, and produced an updated version in 1996: A New Introduction to Modal Logic.

Although much of Cresswell’s work in logic has been in producing semantics for natural languages (see below), the basis for all of his logic has been possible worlds semantics. He has defended the use of possible worlds in many places, and produced a constructive metaphysics of worlds in ‘The World is Everything that is the Case’ (1972).

Lambda-Categorial Grammar

One of Cresswell’s key contributions to semantics is his λ-categorial grammar (see Cresswell 1973). The project of λ-categorial grammar is to provide a simplification of Montague grammar. The idea behind Montague grammar and λ-categorial grammar is to provide a way of understanding natural languages so that the semantic interpretation of a sentence can be read directly from its syntax. The syntactic structures of λ-categorial grammar, so to speak, wear their semantics on their sleeves. They can act, then, as intermediaries between the surface grammatical sentences of natural languages like English and their semantic interpretations.

Consider, for example, the following sentence:

 

Bess barks.

 

In a hybrid of English and standard logical notation we represent this as

 

Barks(Bess)

 

This formal representation changes the word order of the original. But with λ-categorial grammar, we can capture the word order of the original sentence:

 

< Bess, < λ, x, < Barks, x >>>.

 

The Greek letter λ is a variable-binding operator. It abstracts from a sentence. If we take the open sentence < Barks, x >, we can produce an abstract < λ, x, < Barks, x >>. This abstract is a predicate that is true or false of individuals. The ‘category’ of a first-order predicate like < λ, x, < Barks, x >> is < 0, 1 >. This means that it represents a function that takes individuals (entities of category 1) to propositions (entities of category 0). Names and quantifier expressions on Cresswell’s view are of the category < 0, < 0, 1 >>. This means that they represent functions that take properties to propositions. In < Bess, < λ, x, < Barks, x >>>, Bess represents a function that takes a property to the proposition that a particular individual (i.e. Bess) has that property. Taking names to be higher-order functions of this sort allows us to retain a natural word order in sentences of the λ-categorial language.
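To make the category machinery concrete, here is a minimal Python sketch (not Cresswell’s own formalism; the individuals and the extension of ‘barks’ are invented). Propositions are modelled as truth values at a single fixed world, predicates of category < 0, 1 > as functions from individuals to propositions, and names of category < 0, < 0, 1 >> as functions from properties to propositions:

```python
# A minimal sketch, assuming a single fixed world and an invented domain.

def barks(x):
    """A category <0,1> entity: a function from individuals to propositions
    (here, propositions are just truth values at the fixed world)."""
    return x in {"Bess", "Rex"}          # toy extension: the things that bark

def make_name(individual):
    """Names get category <0,<0,1>>: functions from properties to propositions."""
    return lambda prop: prop(individual)

bess = make_name("Bess")

# < Bess, < lambda, x, < Barks, x >>> -- the name applies to the predicate,
# which preserves the surface word order 'Bess barks'.
print(bess(barks))   # True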

The λ-categorial representation of the sentence is its ‘deep structure’ or ‘logical form’. We can read off the semantic interpretation from a logical form in a way that we often cannot from a sentence in the surface grammar of English. Consider

 

Everyone loves someone.

 

We can assign two different logical forms to this sentence:

 

(1) < everyone, < λ, x, < someone, < λ, y, < loves, x, y >>>>>

(2) < someone, < λ, y, < everyone, < λ, x, < loves, x, y >>>>>

 

The difference between these forms can be understood in terms of the order in which they are evaluated (Cresswell 1973: 82).

To interpret form (1), we first look at the open sentence < loves, x, y >. The interpretation of this is a function from worlds to sets of ordered pairs of individuals ((i, j) is in the set at w if and only if i loves j at w). We then look at the open sentence < someone, < λ, y, < loves, x, y >>>. This sentence is true at a world w of an individual i if and only if there is some j such that (i, j) is in the interpretation of < loves, x, y > at w. And so < everyone, < λ, x, < someone, < λ, y, < loves, x, y >>>>> is true at w if and only if everyone at w is an i such that there is some j such that (i, j) is in the interpretation of < loves, x, y > at w.

To interpret form (2), we again first interpret < loves, x, y >. Then we look at < everyone, < λ, x, < loves, x, y >>>, which holds of an individual j at w if and only if everyone i in w is such that (i, j) is in the interpretation of < loves, x, y > at w. Hence < someone, < λ, y, < everyone, < λ, x, < loves, x, y >>>>> is true at w if and only if there is some j in the interpretation of < everyone, < λ, x, < loves, x, y >>> at w.

Thus we have a way of disambiguating sentences of English, while retaining a natural word order. Note that the word order in the logical forms is not always exactly the same as the word order in the surface grammar. Disambiguation forces these differences on us. But the word order is still rather natural. If one is asked to disambiguate ‘everyone loves someone’ in English, one will probably point out the difference between ‘for each person there is some person that he or she loves’ and ‘there is a single person whom everyone loves’. These two statements are very close in word order to the two forms (1) and (2) given above.

The step-by-step method of interpretation that we went through also illustrates how λ-categorial grammar gives us a compositional semantics for natural languages.
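To illustrate this compositional method, here is a small Python sketch (a toy model with an invented domain and loves-relation, evaluated at a single world) that computes the two readings:

```python
# A toy compositional evaluation of the two logical forms of
# 'Everyone loves someone'; the domain and relation are invented.

domain = {"a", "b", "c"}
loves_at_w = {("a", "b"), ("b", "b"), ("c", "a")}   # i loves j at world w

def loves(x, y):
    return (x, y) in loves_at_w

# Form (1): < everyone, < lambda, x, < someone, < lambda, y, < loves, x, y >>>>>
form1 = all(any(loves(i, j) for j in domain) for i in domain)

# Form (2): < someone, < lambda, y, < everyone, < lambda, x, < loves, x, y >>>>>
form2 = any(all(loves(i, j) for i in domain) for j in domain)

print(form1)  # True: each of a, b, c loves someone or other
print(form2)  # False: no single person is loved by everyone
```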

Hyperintensions

In the mid 1970s, Cresswell distinguished between intensional and hyperintensional contexts (Cresswell 1975). The modal operator □ (‘it is necessary that’) is an intensional operator. Suppose that the formula □α is true at a possible world w. Suppose also that the equivalence α ≡ β is true at w. We cannot thereby infer that □β is also true. We cannot substitute equivalent formulae within modal contexts—such contexts are intensional. But if we know that □(α ≡ β), then (if our logic of modality is a standard (or ‘regular’) logic) we can infer that □β. Consider on the other hand a belief context—‘Jeremy believes that p ∨ ¬p’. Now, p ∨ ¬p is necessarily equivalent to every other tautology, but we cannot infer that Jeremy believes every tautology. Thus belief contexts are more opaque than modal contexts. Hence Cresswell calls such contexts ‘hyperintensional’.
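The substitution behaviour described above can be checked in a toy possible-worlds model. The following Python sketch is illustrative only: the two worlds and the valuation are invented, and necessity is modelled as truth at all worlds:

```python
# A small Kripke-style sketch of why intensional contexts block
# substitution of merely materially equivalent formulae.

worlds = {"w1", "w2"}
val = {
    "alpha": {"w1", "w2"},   # alpha is true at every world
    "beta":  {"w1"},         # beta is true only at w1
}

def true_at(w, p):
    return w in val[p]

def box(p):
    """'It is necessary that p': true iff p holds at all worlds."""
    return all(true_at(w, p) for w in worlds)

# At w1, alpha and beta are materially equivalent...
print(true_at("w1", "alpha") == true_at("w1", "beta"))   # True
# ...and box(alpha) holds, yet box(beta) fails: substitution is blocked.
print(box("alpha"), box("beta"))                          # True False
```

Note that □(α ≡ β) would also fail in this model, since the equivalence breaks down at w2; that is why the stronger premise does license the inference.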

The intension of a statement is a set of possible worlds: the worlds in which that statement is true. A hyperintension is a structured entity of sorts. In λ-categorial grammar, the following is a representation of ‘Jeremy believes that p ∨ ¬p’:

 

< Jeremy, < λ, x, < believes, x, < that, < ∨, p, < ¬, p >>>>>>

 

The intension of < ∨, p, < ¬, p >> is the entire set of possible worlds. But the hyperintension of this sentence is a structured entity, which includes the intensions of disjunction, p, and negation within it. This structured entity is a structured meaning (see Cresswell 1985b). Different tautologies represent different structured meanings, even though all of them have the same intension (the entire set of worlds). In this framework, belief is in effect taken to be a relation between a person and a hyperintension. The role of ‘that’ in the above representation is not really as an operator on the sentence ‘p or not p’ but rather on the parts of the sentence (1985b: 30).

One interesting feature of Cresswell’s view is that the structure of the meaning that is to be taken to be the object of belief is not determined just by the surface grammar of the sentence used to report the belief. Rather, the exact structure may vary from case to case (see Cresswell and von Stechow 1982, and Cresswell 1985b).
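The underlying contrast between intensions and hyperintensions can be sketched as follows. This is a rough illustration under simplifying assumptions: a finite stock of three worlds, intensions modelled as sets of worlds, and hyperintensions as nested tuples:

```python
# A rough sketch of the intension/hyperintension contrast; the worlds
# and atomic intensions are invented for the example.

worlds = frozenset({"w1", "w2", "w3"})

def intension_not(p):
    return worlds - p

def intension_or(p, q):
    return p | q

P = frozenset({"w1"})
Q = frozenset({"w2", "w3"})

# Two tautologies: 'p or not p' and 'q or not q'.
taut1 = intension_or(P, intension_not(P))
taut2 = intension_or(Q, intension_not(Q))
print(taut1 == taut2 == worlds)   # True: identical intensions

# Their hyperintensions are structured entities built from the parts,
# roughly < or, p, < not, p >>, and these differ:
hyper1 = ("or", "p", ("not", "p"))
hyper2 = ("or", "q", ("not", "q"))
print(hyper1 == hyper2)           # False: belief can distinguish them
```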

Indexical Semantics

Cresswell’s indexical semantics generalises the notion of an index that has become familiar from modal and tense logic. In modal logic, we do not merely claim that a statement is true or false, but rather that it is true or false at a particular world. Similarly, in tense logic, a sentence is said to be true or false relative to a particular time. In some cases, a world or time needs to be supplied in order to interpret a sentence. Consider, for example, ‘it is now 3 o’clock’. This sentence cannot be interpreted unless the time at which it is uttered is known. Similarly, ‘actually, there are no unicorns’ cannot be interpreted unless it is known at which world it is uttered. The world and time of an utterance are called ‘indices’.

Consider the following statement (from Partee 1989; taken from Cresswell 1995: 5):

 

The enemy is well-supplied.

 

A ‘point of view’ is needed to interpret this sentence. Seen from the English point of view at Waterloo, the enemy is the French. Seen from the French point of view, the enemy is the English and Prussians. The word ‘enemy’ is what Cresswell calls a ‘relational noun’. It represents a two-place relation (as in, e.g. ‘France is the enemy of England’), but it is used in this sentence as if it were a general noun. Its relational nature, however, indicates that there is a position in the sentence (rather like a free variable) that needs to be satisfied by some entity, and this entity must be supplied by context. Cresswell (1995) provides a formalisation, using λ-categorial grammar, of sentences that include relational nouns, and a theory of how contexts supply such entities.
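One way to picture the proposal (a simplified sketch, not Cresswell’s formalisation, and with invented data) is to treat the point of view as an extra argument that saturates the hidden place of the relational noun:

```python
# The enemy relation and supply facts below are invented for illustration.

enemy_of = {"England": "France", "France": "England"}   # a two-place relation
well_supplied = {"France"}

def the_enemy_is_well_supplied(point_of_view):
    """Interpret the sentence relative to a contextually supplied index."""
    return enemy_of[point_of_view] in well_supplied

print(the_enemy_is_well_supplied("England"))   # True: the French are supplied
print(the_enemy_is_well_supplied("France"))    # False
```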

Critical Philosophy Journal

Paul Crittenden

The announcement of the journal Critical Philosophy in 1983 began with the bold promise to foster philosophical work that was at once contemporary and historically informed:

 

The new journal will attempt to bring the history of philosophical thought to bear on contemporary social and cultural issues, and to encourage critical reflection, from a historically informed perspective, on the styles and forms of contemporary philosophical activity. (Quoted from Critical Philosophy, vol. I, no. 1, editorial introduction, p. 3)

 

The first issue appeared in 1984, and the journal ceased publication in 1988 after just four volumes. The journal was initiated by Genevieve Lloyd and Kim Lycos from the Australian National University, and Paul Crittenden (editor) and Stephen Gaukroger (Review Editor) from the Department of General Philosophy at the University of Sydney. Marion Tapper, University of Melbourne, joined the editorial committee in 1986. Each of the first two volumes contained two issues, consisting typically of four articles, one or two review articles, a dozen book reviews, and numerous informative booknotes. Volumes 3 and 4 consisted of double issues on nominated topics.

In the first two years, articles shaped by historical themes and perspectives had a prominent place: Genevieve Lloyd, ‘History of Philosophy and the Critique of Reason’; Patricia Springborg, ‘Marx, Democracy and the Ancient Polis’; György Markus, ‘Interpretations of, and Interpretation in Philosophy’; Andrzej Walicki, ‘On Writing Intellectual History: Leszek Kolakowski and the Warsaw School of the History of Ideas’; Paul Redding, ‘History and Hermeneutics: The ‘Ontological’ Critique of Historical Consciousness’; Richard Sylvan, ‘Science and Science: Relocating Stove and Modern Irrationalists’; Paul Thom, ‘Analytical Philosophy and the History of Logic’; Richard Campbell, ‘Doing Philosophy Historically’; and Marilyn Strathern, ‘John Locke’s Servant and the Hausboi from Hagen’. Articles on phenomenology, existentialism, and recent French philosophy (Foucault, Deleuze, Lacan) formed another significant group.

Volume 3, Philosophical Papers on Nuclear Armaments (1986), was published with the support of the Australasian Association of Philosophy. Motions condemning the production of nuclear weapons had been passed at the Annual General Meeting in 1983, and again in 1984, and an AAP-sponsored conference on ‘Philosophical Problems of Nuclear Armaments’ was held in Brisbane in 1985. The Critical Philosophy volume consists of a number of the Brisbane papers along with others submitted directly to the journal. With papers by Rodney Allen, Graham Nerlich, Brian Scarlett, Jim Thornton, Jocelyn Dunphy, Bill Ginnane, Graham Priest, Janna Thompson, Tony Coady, Brian Ellis, Brian Medlin, and Richard Sylvan, the volume stands as a valuable collection of essays on a topic that remains contemporary.

Volume 4, on Philosophy and Literature, includes articles by Bernard Harrison (on Muriel Spark and the Book of Job), Charles Pigden (on Dostoievski), Kevin Hart (on Heidegger and the essence of poetry), David Novitz (on the view that philosophy is no more than a variety of literature), Ray Walters (on Proust), Wal Suchting (on Jean Améry), Jindra Tichý (on Plato versus Sophocles), and Kim Lycos (‘Hecuba’s Newly-learned Melody: Nussbaum on Philosophy learning from Euripides’).

In the first issue in 1984, the editor noted that ‘the launching of a new journal is particularly hazardous in the current economic climate’. Critical Philosophy published a fair series of articles and reviews and had its share of supporters and critics; but funds were lacking for the journal to continue beyond 1988. Its appearance for several years might be seen as a sign of the times, and as a worthwhile, if minor, venture in philosophy in Australia.

Critical Thinking

Sam Butchart

The modern concept of ‘critical thinking’ as a goal of education derives from the work of the American pragmatist philosopher and educational theorist, John Dewey (Dewey 1933). The central idea is that educators should teach students how to think well for themselves, rather than simply teaching ‘facts and figures’. Students should be able to critically assess claims, beliefs, policies and arguments that they encounter, not just in academic contexts but in everyday life, the workplace and social and political contexts. The idea assumes that there are some general purpose, more or less context independent thinking skills and dispositions that can be taught, or at least encouraged.

The aim of producing ‘critical thinkers’ has been adopted by many curriculum authorities and universities in Australasia. There is an explicit focus on the teaching of thinking skills in the school curriculum frameworks of New Zealand, Victoria, Tasmania, South Australia, Western Australia and the Northern Territory. At the tertiary level, a majority of universities now include critical thinking in their lists of graduate attributes—the essential skills they aim to instil in their students.

Critical Thinking and Informal Logic

A necessary component of the ability to think critically about a claim or policy is the ability to assess the evidence or reasons which might count for or against it. For that reason, the ability to analyse, construct and evaluate arguments is often considered to be a core component of critical thinking (though not the only component; critical thinking also clearly requires certain dispositional traits or ‘habits of mind’—to actively seek out evidence for and against one’s own views, for example).

The goal of teaching critical thinking therefore overlaps to a significant extent with the goals of the informal logic movement. ‘Informal logic’ is a term used to denote both a skill and an academic discipline. The skill is the ability to construct, analyse and evaluate real, natural language arguments—arguments as they are found in books, articles, newspapers, opinion pieces, essays, political speeches, public debates and so on. The academic discipline studies the theory of informal argument and how the skill can be taught and improved.

Informal logic began to gain ground in the 1970s and arose from a perception that the standard introductory undergraduate course in symbolic logic is of very limited use as a practical tool for evaluating real arguments. Students themselves began to demand courses that were more relevant and applicable to the assessment of arguments concerning the political and social issues of the day. New courses and textbooks began to appear, emphasising ‘real-life’ arguments and attempting to provide new tools for their analysis and evaluation. Early examples of the new style of textbook are Kahane’s Logic and Contemporary Rhetoric (1971), Johnson and Blair’s Logical Self-Defense (1977) and Michael Scriven’s Reasoning (1976).

These developments were rapidly taken up in Australian and New Zealand universities; indeed, in some respects they were anticipated there. J. L. Mackie, for example, was teaching a course in informal logic at the University of Sydney in the early 1960s, making use of the method of argument diagrams described below. As of this writing, undergraduate courses in critical thinking are taught in approximately 70% of philosophy departments in Australian universities and in every philosophy department in New Zealand. The teaching of elementary formal logic has not been abandoned—it is still considered by many to be an essential component of a philosophical education—but it is now supplemented by critical thinking courses which are often explicitly aimed at improving everyday, practical reasoning skills.

Although the main figures in the development of informal logic and critical thinking are based in the U.S. and Canada, philosophers in Australasia have made important contributions. In this article I will describe the contributions of three of the most influential: C. L. Hamblin, Michael Scriven, and Tim van Gelder.

C. L. Hamblin

Charles Leonard Hamblin (1922–1985) was professor of philosophy at the University of New South Wales from 1955 until his death in 1985. Apart from being one of Australia’s first computer scientists (inventing, in the 1950s, the push-pop stack and one of the first computer programming languages), Hamblin has been a major influence in the field of informal logic.

His book Fallacies (1970) anticipates many themes that were to emerge in informal logic. A fallacy is a common pattern or type of argument that, though often persuasive, is in fact unsound. Well known examples of fallacies include affirming the consequent, begging the question, argument ad hominem, illegitimate appeal to authority, and so on. The topic was introduced by Aristotle and has been extended throughout the centuries. Hamblin’s was the first ever book-length discussion of the fallacies and contains a scholarly and detailed history of the topic which remains unsurpassed to this day.

The book begins with some trenchant (and highly influential) criticisms of the then standard textbook discussions of fallacies, which are in Hamblin’s view ‘as debased, wornout and dogmatic a treatment as could be imagined—incredibly tradition-bound, yet lacking in logic and in historical sense alike, and almost without connection to anything else in modern logic’ (1970: 12). Many of Hamblin’s critiques have become widely accepted by the informal logic community. Hamblin noted that many of the fallacies are not obviously arguments at all; the fallacy of ‘appeal to force’, for example, or the fallacy of ‘many questions’, illustrated by questions such as ‘Have you stopped beating your wife?’. More significantly, many so-called fallacies are not always bad arguments. Appeals to authority and arguments ad hominem are sometimes quite legitimate. Standard treatments of the fallacies, Hamblin argued, either fail to notice this or provide no guidance on distinguishing the invalid cases from the valid.

Hamblin’s book is not merely critical, however. It also contains a substantial and influential positive component. Hamblin went on to argue that, despite the failings of the standard account, the topic of fallacies nonetheless fills an important gap left open by formal logic. Hamblin argued that what is required is an extended conception of argument, according to which ‘there are various criteria of worth of arguments; that they may conflict, and that arguments may conflict … All this sets the theory of arguments apart from Formal Logic and gives it an additional dimension’ (1970: 231). According to Hamblin, there are aspects of argument appraisal that go beyond the standard set by formal deductive logic. There may, for example, be good arguments both for and against a given conclusion. If so, then the ‘goodness’ of an argument cannot be simply identified with the standard of ‘soundness’—deductive validity combined with the truth of premises—for there cannot be two deductively sound arguments for a conclusion and its opposite. Here Hamblin anticipated a point that has now become widely accepted in the field of informal logic.

If formal deductive logic is not the whole story, what do we need to fill the gap? Hamblin argued for what he called a ‘dialectical’ conception of argument. The essential point is to recognise that arguments typically take place in the context of a dialogue (real or imagined) between two or more people, usually with differing views on the matter in question. This has several implications for informal logic. For example, the idea that in a good argument the premises must be true should be abandoned. One reason is that truth is not sufficient; a person who argues from premises which are not known to be true, but are only true ‘by accident’ has not given a good argument. But requiring premises to be known to be true is too strong. Instead, Hamblin argued that the right criterion is dialectical; a good argument requires premises that are accepted by the parties to the dialogue. These themes and arguments have emerged again and again in the subsequent history of informal logic and critical thinking.

Hamblin further developed his dialectical account of argument, introducing the idea of formal dialectic. He devised a variety of formal ‘dialogue-games’ to model argumentative dialogues. In Hamblin’s games, players take turns to make moves such as making a statement, putting forward an argument, asking a question, or asking the other player to support one of their statements with an argument. Players build up a store of commitments with each move they make. Rules determine what moves are allowed and how the commitment store changes with each move. In recent years, these dialogue games have found applications in linguistics, computer science and artificial intelligence, in areas such as natural language processing and communication protocols for autonomous software agents.
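The flavour of these dialogue games can be conveyed by a minimal sketch. The move repertoire and update rules below are invented simplifications of Hamblin’s systems:

```python
# A minimal sketch of a Hamblin-style dialogue game: two players take
# turns making moves, and each move updates a commitment store.
# The specific moves and rules here are simplified inventions.

commitments = {"A": set(), "B": set()}

def state(player, statement):
    """Making a statement commits the speaker to it."""
    commitments[player].add(statement)

def concede(player, statement):
    """Conceding another player's statement adds it to one's own store."""
    commitments[player].add(statement)

def retract(player, statement):
    """Retraction removes a statement from the speaker's store."""
    commitments[player].discard(statement)

state("A", "taxes should rise")
state("A", "services are underfunded")   # offered in support
concede("B", "services are underfunded")
retract("A", "taxes should rise")

print(commitments["A"])  # {'services are underfunded'}
print(commitments["B"])  # {'services are underfunded'}
```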

Michael Scriven

Michael Scriven (b. 1928) was educated at the University of Melbourne, attaining a B.A. in mathematics in 1948 and an M.A. in philosophy in 1950. While at the University of Melbourne, he (like many others) was influenced in his thinking about formal and informal logic by D. A. T. Gasking, who taught there from 1946 until 1976. Scriven completed a D.Phil. in Oxford in 1956 and joined the philosophy department at the University of California, Berkeley in 1966, where he began teaching courses in Practical Logic and Practical Ethics. From 1982 to 1989 he was professor of education at the University of Western Australia, and he was professor of evaluation at the University of Auckland in New Zealand from 2001 to 2004.

Scriven’s critical thinking textbook Reasoning (1976) is a classic and has served as a model for many of the textbooks that followed. Scriven eschews the use of formal logic entirely and proposes a seven-step procedure for understanding and evaluating real arguments: (1) clarification of meaning; (2) identification of conclusions; (3) portrayal of structure; (4) formulation of unstated assumptions; (5) criticism of premises and inferences; (6) introduction of other relevant arguments, and (7) overall evaluation. With variations, this framework of analysis (steps 1–4) followed by evaluation (steps 5–7) has been reproduced in many textbooks.

There are several innovative features of Reasoning worth mentioning here. First, Scriven’s was one of the first textbooks to make use of argument map diagrams. These diagrams are used to portray the global structure of an argument—the way in which premises, intermediate conclusions and the final conclusion all fit together. In these diagrams, each statement in the argument is written out and numbered, then arrows connecting the statements are drawn to represent the inferential relationships between them.

Scriven was not the first to make use of such diagrams—they appear, for example, in Beardsley (1950) and Toulmin (1958)—but he introduced some useful innovations, which are still often used today. He suggested writing out and numbering each statement of the argument in a separate ‘dictionary’, then using just the numbers in the argument map diagram. Unstated premises (assumptions) are labelled with a letter rather than a number, to distinguish them more clearly from the explicit premises of the argument. Scriven also introduced the idea of incorporating into the argument map statements that count against the conclusion, by labelling the numbers in the argument map with a ‘+’ or ‘-’ sign.
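One way such a map might be represented as data is sketched below (the argument itself is invented): numbered entries form the dictionary, a lettered entry marks an unstated assumption, and signed edges record whether a statement counts for (‘+’) or against (‘-’) its target:

```python
# A rough data-structure sketch of a Scriven-style argument map;
# the example argument is invented.

dictionary = {
    "1": "The city's air quality is worsening",
    "2": "Car traffic has doubled in a decade",
    "a": "Car exhaust is a major source of pollution",  # unstated assumption
    "3": "Congestion charging reduced traffic elsewhere",
    "4": "The city should adopt congestion charging",   # final conclusion
    "5": "Congestion charges burden low-income drivers",
}

edges = [
    ("2", "1", "+"),   # 2 supports 1
    ("a", "1", "+"),   # the assumption supports 1
    ("1", "4", "+"),
    ("3", "4", "+"),
    ("5", "4", "-"),   # 5 counts against the conclusion
]

for source, target, sign in edges:
    print(f"{source} --{sign}--> {target}: {dictionary[source]}")
```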

A second important feature of Scriven’s text is the extensive discussion of the process of formulating unstated premises or assumptions. Scriven adopts three criteria for inclusion of an unstated assumption in the analysis of an argument. The assumption must (1) be strong enough to make the argument sound, (2) be no stronger than it needs to be to make the argument sound, and (3) bear at least some relation to what the arguer would be likely to know or would believe to be true. With regard to (2) and (3) Scriven was one of the first authors to make explicit the role of the principle of charity in the identification of assumptions in arguments.

Since the publication of Reasoning, Scriven has continued to be involved with the theory and practice of critical thinking. He has published several articles on a variety of topics in the field and (with Alec Fisher) a book on methods for evaluating critical thinking skills in students (Fisher and Scriven 1997). Scriven’s interest in critical thinking led to an interest in the more general concept of evaluation—the process of coming to a reasoned conclusion about the merit or worth of something (products, processes, services, government and non-government programs, and so on). He has helped to found the field of Evaluation as a flourishing discipline in its own right, with its own academic journals and professional organisations (Scriven 1991).

Tim van Gelder

Tim van Gelder (b. 1962) was educated at Geelong Grammar, the University of Melbourne (B.A. 1984), where he studied law, mathematics and philosophy, and the University of Pittsburgh (Ph.D. 1989). He taught at the philosophy department of Indiana University until 1993, when he returned to Australia as an Australian Research Council QEII Research Fellow. From 1998 to 2005 he was principal fellow in the department of philosophy at the University of Melbourne, while working primarily for the private firm Austhink, which he co-founded in 2000.

Van Gelder’s early work in philosophy focussed on issues in the foundations of cognitive science. When he turned his attention to critical thinking, he proposed the Quality Practice Hypothesis as a model of how critical thinking skills might be improved (van Gelder et al. 2004, 2005). This hypothesis states that critical thinking skills can only be improved by extensive deliberate practice—a concept based on research in cognitive science on how expertise is acquired in a variety of cognitive domains. Deliberate practice must be motivated (the student should be deliberately practising in order to improve their skills), guided (the student should have access to help about what to do next), scaffolded (in the early stages it should be impossible for the student to make certain kinds of mistake), and graduated (exercises gradually increase in difficulty and complexity). In addition, for practice to be effective, sufficient feedback must be provided.

With this in mind, van Gelder and colleagues at the University of Melbourne developed a critical thinking course based around computer assisted argument mapping exercises. Students are provided with exercises in which they have to create an argument map diagram, like those described above. Van Gelder’s team developed computer software to assist students in the creation of these argument maps. Text can be typed into boxes and edited, supporting premises can be added, deleted or moved around. Evaluations of the premises and inferences can also be incorporated into the diagram.

This argument mapping software (now known as Rationale and commercially available from the Austhink organisation) is used to support extensive deliberate practice in applying argument analysis skills. Students are provided with a sequence of exercises of increasing difficulty in which they have to create a map of an argument. The software itself provides some of the scaffolding and guidance by building certain constraints into the kind of diagram that can be produced (for example, every argument must have one and only one main conclusion) and offering context-sensitive help. Feedback is supplied by tutors and model answers—pre-prepared argument maps to which students can compare their work (van Gelder 2001).

Using this approach to teaching critical thinking, van Gelder and others at the University of Melbourne have achieved impressive results. Over a single-semester, twelve-week course, they have consistently recorded significant improvements in critical thinking, as measured by a standardised multiple-choice test, the California Critical Thinking Skills Test (van Gelder et al. 2004). The software is now used for teaching critical thinking in dozens of universities and hundreds of schools in Australia and world-wide. In 2001, van Gelder was awarded the Australian Museum Eureka Prize for Critical Thinking, in recognition of this work.

Critical Thinking in Schools in Australasia

Moves to incorporate the explicit teaching of critical thinking skills in primary and secondary schools in Australia and New Zealand are fairly recent. Where this has been done, the work of Edward de Bono has been very influential. De Bono’s ‘CoRT’ (‘Cognitive Research Trust’) program of thinking lessons and his ‘Six Thinking Hats’ scheme are widely used (de Bono 1985, 1987).

Many academic philosophers in Australasia have been involved in the teaching of critical thinking in schools through the ‘Philosophy in Schools’ program (Lipman 1987). In this approach, pupils are provided with a stimulus, such as a story, situation, film, television show or newspaper or magazine article. The stimulus is used to introduce a philosophical question (for example: What makes something fair or unfair? If you’re not good at something, does that mean you’re bad?). The teacher then guides pupils through a classroom discussion of the issues, encouraging them to provide reasons for opinions, and to distinguish good reasons from bad ones. The aim is to encourage and model a spirit of intellectual curiosity and fair-mindedness, and to inculcate the idea that opinions can and should be backed up by sound arguments.

Many schools in Australasia use this approach to teaching critical thinking. There are now Associations for Philosophy in Schools in most states in Australia, New Zealand and in Singapore. The Federation of Australasian Philosophy in Schools Associations (FAPSA) is a not-for-profit umbrella organisation, set up to promote philosophy teaching in schools and provide resources and training for teachers. It organises conferences and publishes the journal Critical and Creative Thinking.
