January 20, 2010


It is not hard to see why psychoanalysis should be viewed in terms of cause and meaning. On the one hand, Freud’s theories introduce a panoply of concepts that appear to characterize mental processes as mechanical and non-meaningful. These include Freud’s neurological model of the mind, as outlined in his ‘Project for a Scientific Psychology’; more broadly, his ‘economic’ description of the mental as having properties of force or energy, e.g., as ‘cathecting’ objects; and his account of the mechanism of repression. So it seems that psychoanalytic explanation employs terms logically at variance with those of ordinary, common-sense psychology, where mechanisms do not play a central role. On the other hand, and equally striking, there is the fact that psychoanalysis proceeds through interpretation and engages in a relentless search for meaningful connections in mental life ~ something that even a superficial examination of ‘The Interpretation of Dreams’ or ‘The Psychopathology of Everyday Life’ cannot fail to impress upon one. Psychoanalytic interpretation adduces meaningful connections between disparate and often apparently dissociated mental and behavioural phenomena, directed by the goal of ‘thematic coherence’: of giving mental life the sort of unity that we find in a work of art or cogent narrative. In this respect, psychoanalysis seems to share, and even to radicalize, the most salient feature of ordinary psychology, its insistence on explaining action in terms of reasons.
For ordinary psychology works through contentful characterizations of mental states that make their interconnections rationally intelligible, and psychoanalytic interpretation extends this search for intelligibility and unifying rationality well beyond its ordinary bounds ~ an ideal that seems remote from anything found in the physical sciences.


The application to psychoanalysis of the perspective afforded by the cause-meaning debate can also be seen as a natural consequence of another factor, namely the semi-paradoxical nature of psychoanalysis’ explananda. With respect to all irrational phenomena, something like a paradox arises. Irrationality involves a failure of rational connectedness, and hence of meaningfulness; so, if it is to have an explanation of any kind, relations that are non-meaningful ~ causal relations ~ would seem to be needed. And yet, as observed above, it seems that in offering explanations for irrationality ~ in plugging the ‘gaps’ in consciousness ~ what psychoanalytic explanation does is postulate further, although non-apparent, connections of meaning.

For these two reasons, then ~ the logical heterogeneity of its explanations and the ambiguous status of its explananda ~ it may seem that an examination of the concepts of cause and meaning will provide the key to a philosophical elucidation of psychoanalysis. The possible views of psychoanalytic explanation that may result from such an examination can be arranged along two dimensions. Psychoanalytic explanation may be viewed, after reconstruction, as either causal and non-meaningful, or meaningful and non-causal, or as comprising both meaningful and causal elements in various combinations. On each of these reconstructions, psychoanalytic explanation may then be viewed as either licensed or invalidated, depending on one’s view of the logical nature of psychology.

So, for instance, some philosophical discussion infers that psychoanalytic explanation is void, simply because it is committed to causality in psychology. On another, opposed view, it is the virtue of psychoanalytic explanation that it imputes causal relations, since only causal relations can explain the failures of meaningful psychological connections. On yet another view, it is psychoanalysis’ commitment to meaning that is its great fault: it is held that the stories that psychoanalysis tries to tell do not really, on examination, explain successfully. And so on.

It is fair to say that the debates between these various positions fail to establish anything definite about psychoanalytic explanation. There are two reasons for this. First, there are several different strands in Freud’s writings, each of which may be drawn on, apparently conclusively, in support of each alternative reconstruction. Secondly, preoccupation with a wholly general problem in the philosophy of mind, that of cause and meaning, distracts attention from the distinguishing features of psychoanalytic explanation. At this point, and to prepare the way for a plausible reconstruction of psychoanalytic explanation, it is appropriate to take a step back and take a fresh look at the cause-meaning issue in the philosophy of psychoanalysis.

Suppose, first, that some sort of cause-meaning compatibilism ~ such as that of the American philosopher Donald Davidson (1917-2003) ~ holds for ordinary psychology. On this view, psychological explanation requires some sort of parallelism of causal and meaningful connections, grounded in the idea that psychological properties play causal roles determined by their content. Nothing in psychoanalytic explanation is inconsistent with this picture: after his abandonment of the early ‘Project’, Freud consistently viewed psychology as autonomous relative to neurophysiology, and as congruent with a broadly naturalistic world-view. ‘Naturalism’ is often used interchangeably with ‘physicalism’ and ‘materialism’, though each of these hints at more specific doctrines. Thus, ‘physicalism’ suggests that, among the natural sciences, there is something especially fundamental about physics. And ‘materialism’ has connotations going back to eighteenth- and nineteenth-century views of the world as essentially made of material particles whose behaviour is fundamental for explaining everything else. ‘Naturalism’ with respect to some realm, by contrast, is the view that everything that exists in that realm, and every event that takes place in it, is an empirically accessible feature of the world. Sometimes naturalism is taken to mean that some realm can in principle be understood by appeal to the laws and theories of the natural sciences, but one must be careful, since naturalism does not by itself imply anything about reduction. Historically, ‘natural’ contrasts with ‘supernatural’; but in the context of contemporary philosophy of mind, where debate concerns the possibility of explaining mental phenomena as part of the natural order, it is the non-natural rather than the supernatural that is the contrasting notion.
The naturalist holds that mental phenomena can be so explained, while the opponent of naturalism thinks otherwise, though opposition to naturalism does not commit one to anything supernatural. Nor should one take naturalism with regard to some realm as committing one to any sort of reductive explanation of that realm, whereas there are such commitments in the use of ‘physicalism’ and ‘materialism’.

If psychoanalytic explanation gives the impression that it imputes bare, meaning-free causality, this results from attending to only half the story, and from misunderstanding what psychoanalysis means when it talks of psychological mechanisms. The economic descriptions of mental processes that psychoanalysis provides are never replacements for, but always presuppose, characterizations of mental processes in terms of meaning. Mechanisms, in the psychoanalytic context, are simply processes whose operation cannot be reconstructed as instances of rational functioning (they are what we might prefer to call mental activities, by contrast with actions). Psychoanalytic explanation’s postulation of mechanisms should not therefore be regarded as a regrettable and expungeable incursion of scientism into Freud’s thought, as is often claimed.

Suppose, alternatively, that the Hermeneuticists such as Habermas ~ who, following Dilthey, view psychology as an interpretative practice to which the concepts of the physical sciences are foreign ~ are right in thinking that connections of meaning are misrepresented when described as causal. Again, this does not negate the value of psychoanalytic explanation since, as just argued, psychoanalytic explanation nowhere imputes meaning-free causation. Nothing is lost for psychoanalytic explanation if causation is excised from the psychological picture.

The conclusion must be that psychoanalytic explanation is at bottom indifferent to the general meaning-cause issue. The core of psychoanalysis consists in its tracing of meaningful connections, with no greater or lesser commitment to causality than is involved in ordinary psychology. (This helps to set the stage ~ pending appropriate clinical validation ~ for psychoanalysis to claim as much truth for its explanations as ordinary psychology.) The true key to psychoanalytic explanation is, rather, its attribution of special kinds of mental states, not acknowledged by ordinary psychology, whose relations to one another do not have the form of patterns of inference or practical reasoning.

In the light of this, it is easy to understand why both compatibilists and Hermeneuticists assert that their own view of psychology is uniquely consistent with psychoanalytic explanation. Compatibilists are right to think that, to provide for psychoanalytic explanation, it is necessary to allow mental connections that are unlike the connections of reasons to the actions that they rationalize, or to the beliefs that they support; and that, in outlining such connections, psychoanalytic explanation must outstrip the resources of ordinary psychology, which does attempt to force as much as possible into the mould of practical reasoning. Hermeneuticists, for their part, are right to think that it would be futile to postulate connections that were nominally psychological but not characterized in terms of meaning, and that psychoanalytic explanation does not respond to the ‘paradox’ of irrationality by abandoning the search for meaningful connections.

Compatibilists are, however, wrong to think that non-rational but meaningful connections require the psychological order to be conceived as a causal order. The Hermeneuticists are free to postulate psychological connections that are determined by meaning but not by rationality: it is coherent to suppose that there are connections of meaning that are not bona fide rational connections, without these being causal. Meaningfulness is a broader concept than rationality. (Sometimes this thought has been expressed, though not helpfully, by saying that Freud discovered the existence of ‘neurotic rationality’.) Although an assumption of rationality is doubtless necessary to make sense of behaviour overall, it does not need to be brought into play in making sense of each instance of behaviour. Hermeneuticists, in turn, are wrong to think that the compatibilist view of psychology as causal signals a confusion of meaning with causality, or that it must lead compatibilism to deny that there is any qualitative difference between rational and irrational psychological connections.

Even so, the last two decades have seen extraordinary changes in scientific psychology. ‘Cognitive psychology’, which focuses on higher mental processes like reasoning, decision making, problem solving and language processing, has become perhaps the dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour.

The relationships between physical behaviour and agential behaviour are controversial. On some views, all ‘actions’ are identical to physical changes in the subject’s body; however, some kinds of physical behaviour, such as ‘reflexes’, are uncontroversially not kinds of agential behaviour. On other views, a subject’s action involves some resultant physical change, but is not identical to it.

Both physical and agential behaviour can be understood in the widest sense. Anything a person can do ~ even calculating in his head, for instance ~ could be regarded as agential behaviour. Likewise, any physical change in a person’s body ~ even the firing of a certain neuron, for instance ~ could be regarded as physical behaviour.

Of course, to claim that the mind is ‘nothing over and above’ such-and-such kinds of behaviour, construed as either physical or agential behaviour in the widest sense, is not necessarily to be a behaviourist. The theory that the mind is a series of volitional acts ~ a view close to the idealist position of George Berkeley (1685-1753) ~ and the theory that the mind is a certain configuration of neuronal events, while both controversial, are not forms of behaviourism.

Standing alongside these accounts is the view known as anomalous monism. Monism is the view that there is only one kind of substance underlying all objects, changes and processes. It is generally used in contrast to ‘dualism’, though one can also think of it as denying what might be called ‘pluralism’ ~ a view, often associated with Aristotle, which claims that there are a number of substances. Against the background of modern science, monism is usually understood to be a form of ‘materialism’ or ‘physicalism’: that is, the fundamental properties of matter and energy as described by physics are counted the only properties there are.

The position in the philosophy of mind known as ‘anomalous monism’ has its historical origins in the German philosopher and founder of critical philosophy, Immanuel Kant (1724-1804), but is universally identified with the American philosopher Donald Davidson (1917-2003), who coined the term. Davidson maintained that one can be a monist ~ indeed, a physicalist ~ about the fundamental nature of things and events, while also asserting that there can be no full ‘reduction’ of the mental to the physical. (This is sometimes expressed by saying that there can be an ontological, though not a conceptual, reduction.) Davidson thinks that complete knowledge of the brain and of any related neurophysiological systems that support the mind’s activities would not itself be knowledge of such things as belief, desire, experience and the rest of our mentalistic concepts. This is not because he thinks that the mind is somehow a separate kind of existence: anomalous monism is, after all, monism. Rather, it is because the nature of mental phenomena rules out a priori that there will be law-like regularities connecting mental phenomena and physical events in the brain; and, without such laws, there is no real hope of explaining the mental via the physical structure of the brain.

All in all, one central goal of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies explored in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central concepts involved in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on its crucial concepts. If concepts of the simple (observational) sort were internal physical structures that had, in this sense, an information-carrying function ~ a function they acquired during learning ~ then instances of these structure types would have a content that (like a belief) could be either true or false. An information-carrying structure, however, carries all kinds of information: if, for example, it carries the information that ‘A’, it must also carry the information that ‘A or B’. The process of learning, conceivably, is a process in which a single piece of this information is selected for special treatment, thereby becoming the semantic content ~ the meaning ~ of subsequent tokens of that structure type. Just as we conventionally give artefacts and instruments information-providing functions, thereby making their pointer readings, flashing lights, and so forth representations of the conditions in the world in which we are interested, so learning converts neural states that carry information ~ ‘pointer readings’ in the head, so to speak ~ into structures that have the function of providing some vital piece of the information they carry. When this process occurs in the ordinary course of learning, the functions in question develop naturally. They do not, as do the functions of instruments and artefacts, depend on the intentions, beliefs and attitudes of users. We do not give brain structures these functions.
They get them by themselves, in some natural way, either (in the case of the senses) from their selectional history or (in the case of thought) from individual learning. The result is a network of internal representations that have (in different ways) the power of representation, of experience and belief.

It should be understood that this approach to ‘thought’ and ‘belief’ ~ the approach that conceives of them as forms of internal representation ~ is not a version of ‘functionalism’, at least not if this widely held theory is understood, as it often is, as a theory that identifies mental properties with functional properties: with the way something in fact behaves, with its syndrome of typical causes and effects. An informational model of belief, in order to account for misrepresentation, needs something more than a structure that provides information. It needs something having that as its function: something that is supposed to provide information. As Sober (1985) comments, for an account of the mind we need functionalism with the function ~ the ‘teleological’ ~ put back in it.

Philosophers need not (and typically do not) assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts and explanatory strategies that scientists are using ~ accounts that are more explicit, systematic and philosophically sophisticated than the often rather rough-and-ready accounts offered by the scientists themselves.

Cognitive psychology is in many ways a curious and puzzling science. Many of the theories put forward by cognitive psychologists make use of a family of ‘intentional’ concepts ~ like believing that ‘p’, desiring that ‘q’, and representing ‘r’ ~ that do not appear in the physical or biological sciences, and these intentional concepts play a crucial role in many of the explanations offered by these theories.

In discussions of intentionality, the paradigm cases are usually beliefs, or sometimes beliefs and desires; however, the biologically most basic forms of intentionality are in perception and in intentional action. These also have certain formal features that are not common to beliefs and desires. Consider a case of perceptual experience. Suppose that I see my hand in front of my face. What are the conditions of satisfaction? First, the perceptual experience of the hand in front of my face has as its condition of satisfaction that there is a hand in front of my face. Thus far, the condition of satisfaction is the same as that of the belief that there is a hand in front of my face. But with perceptual experience there is this difference: in order that the intentional content be satisfied, the fact that there is a hand in front of my face must cause the very experience whose intentional content is that there is a hand in front of my face. This has the consequence that perception has a special kind of condition of satisfaction that we might describe as ‘causally self-referential’. The full conditions of satisfaction of the perceptual experience are, first, that there is a hand in front of my face, and second, that the fact that there is a hand in front of my face causes the very experience of whose conditions of satisfaction it forms a part. We can represent this in the following form:

Visual experience (that there is a hand in front of my face,

and that the fact that there is a hand in front of my face

is causing this very experience.)

Furthermore, visual experiences have a kind of conscious immediacy not characteristic of beliefs and desires. A person can literally be said to have beliefs and desires while sound asleep. But one can only have visual experiences of a non-pathological kind when one is fully awake and conscious, because the visual experiences are themselves forms of consciousness.

People’s decisions and actions are explained by appeal to their beliefs and desires. Perceptual processes are said to result in mental states that represent (or sometimes misrepresent) one or another aspect of the cognitive agent’s environment. Other theorists have offered analogous accounts, differing in detail. Perhaps the most crucial idea in all of this is the one about representation. There is perhaps a sense in which what happens at, say, the level of the retina constitutes, as a result of the processes of stimulation, some kind of representation of what produces that stimulation, and thus some kind of representation of the objects of perception. So it may seem, if one attempts to describe the relation between the structure and characteristics of the object of perception and the structure and nature of the retinal processes. One might say that the nature of that relation is such as to provide information about the part of the world perceived, in the sense of ‘information’ presupposed when one says that the rings in the section of a tree’s trunk provide information about its age. This is because there is an appropriate causal relation between the two that makes it impossible for the correlation to be a matter of chance. Subsequent processing can then be thought of as carried out on what is provided in the representations in question.

However, if there are such representations, they are not representations for the perceiver. It is the thought that perception involves representations of that kind that produced the old, and now largely discredited, philosophical theories of perception which suggested that perception is primarily a matter of an apprehension of mental states of some kind, e.g., sense-data, which are representatives of perceptual objects, either by being caused by them or by being in some way constitutive of them. It may also be said that the idea of information so invoked indicates that there is a sense in which the processes of stimulation can be said to have content, but a non-conceptual content, distinct from the content provided by the subsumption of what is perceived under concepts. It must be emphasized, however, that such content is not content for the perceiver. What the information-processing story provides is, at best, a more adequate characterization than was previously available of the causal processes involved. That may be important, but more should not be claimed for it than there is. If, in a given case of perception, one can be said to have an experience as of an object of a certain shape and kind related to another object, it is because there is presupposed in that perception the possession of concepts of objects, and more particularly a concept of space and of how objects occupy space.

Cognitive psychologists do, nonetheless, occasionally say a bit about the nature of intentional concepts and the explanations that exploit them, but their comments are rarely systematic or philosophically illuminating. Thus, it is hardly surprising that many philosophers have seen cognitive psychology as fertile ground for the sort of careful descriptive work that is done in the philosophy of biology and the philosophy of physics. The American philosopher of mind Jerry Fodor’s ‘The Language of Thought’ (1975) was a pioneering study in the genre. Philosophers have since done important and widely discussed work in what might be called the ‘descriptive philosophy of cognitive psychology’.

These philosophical accounts of cognitive theories and the concepts they invoke are generally much more explicit than the accounts provided by psychologists, and they inevitably smooth over some of the rough edges of scientists’ actual practice. But if the account they give of cognitive theories diverges significantly from the theories that psychologists actually produce, then the philosophers have simply got it wrong. There is, however, a very different way in which philosophers have approached cognitive psychology. Rather than merely trying to characterize what cognitive psychology is actually doing, some philosophers try to say what it should and should not be doing. Their goal is not to explicate scientific practice, but to criticize and improve it. The most common target of this critical approach is the use of intentional concepts in cognitive psychology. Intentional notions have been criticized on various grounds: two much-discussed considerations are that they fail to supervene on the physiology of the cognitive agent, and that they cannot be ‘naturalized’.

Perhaps the easiest way to make the point about supervenience is to use a thought experiment of the sort originally proposed by the American philosopher Hilary Putnam (1926- ). Suppose that in some distant corner of the universe there is a planet, Twin Earth, which is very similar to our own planet. On Twin Earth there is a person who is an atom-for-atom replica of me. I live on Earth and believe that I was born in Ontario; asked ‘Were you born in Ontario, Canada?’, I would answer yes, and my twin would respond in just the same way. But his belief is not about my birthplace or my Ontario: his apparent attribution concerns Twin Ontario. So it can turn out that my belief is true while my twin’s corresponding belief is false. What all this is supposed to show is that two people can share all their physiological properties without sharing all their intentional properties. To turn this into a problem for cognitive psychology, two additional premises are needed. The first is that cognitive psychology attempts to explain behaviour by appeal to people’s intentional properties. The second is that psychological explanations should not appeal to properties that fail to supervene on an organism’s physiology. (Variations on this theme can be found in the American philosopher Jerry Fodor (1987).)

The thesis that the mental is supervenient on the physical ~ roughly, the claim that the mental character of a thing is wholly determined by its physical nature ~ has played a key role in the formulation of some influential positions on the ‘mind-body’ problem, in particular versions of non-reductive ‘physicalism’. It has figured in arguments about the mental, and has been used to devise solutions to some central problems about the mind ~ for example, the problem of mental causation.

The idea of supervenience first appeared in ethics: there could be no difference in a moral respect without a difference in some descriptive, or non-moral, respect. Evidently, the idea generalizes so as to apply to any two sets of properties (to secure greater generality it is more convenient to speak of properties than predicates). The American philosopher Donald Davidson (1970) was perhaps the first to introduce supervenience into discussions of the mind-body problem, when he wrote ‘ . . . mental characteristics are in some sense dependent, or supervenient, on physical characteristics. Such supervenience might be taken to mean that there cannot be two events alike in all physical respects but differing in some mental respect, or that an object cannot alter in some mental respect without altering in some physical respect.’ Following the British philosopher George Edward Moore (1873-1958) and the English moral philosopher Richard Mervyn Hare (1919-2002), from whom he avowedly borrowed the idea of supervenience, Davidson went on to assert that supervenience in this sense is consistent with the irreducibility of the supervenient to their ‘subvenient’, or ‘base’, properties: ‘Dependence or supervenience of this kind does not entail reducibility through law or definition . . . ‘

Thus, three ideas have come to be closely associated with supervenience: (1) property covariation (if two things are indiscernible in their base properties, they must be indiscernible in their supervenient properties); (2) dependence (supervenient properties are dependent on, or determined by, their subvenient bases); and (3) non-reducibility (the property covariation and dependence involved in supervenience can obtain even if supervenient properties are not reducible to their base properties).
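The covariation idea in (1) admits of a more formal statement. The following schema is a sketch of one standard way of putting it ~ the usual definition of ‘strong’ supervenience ~ writing A for the supervenient family of properties and B for the base family (the notation is ours, not Davidson’s):

```latex
% Strong supervenience of property family A on property family B:
% necessarily, anything that has an A-property F has some B-property G
% such that, necessarily, whatever has G also has F.
\Box\, \forall x\, \forall F \in A\,
  \bigl( Fx \rightarrow \exists G \in B\,
    ( Gx \wedge \Box\, \forall y\, ( Gy \rightarrow Fy ) ) \bigr)
```

On this reading, (1) follows at once: two things indiscernible in their B-properties must be indiscernible in their A-properties.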

Nonetheless, supervenience of the mental ~ in the form of strong supervenience, or at least global supervenience ~ is arguably a minimum commitment of physicalism. But can we think of the thesis of mind-body supervenience itself as a theory of the mind-body relation ~ that is, as a solution to the mind-body problem?

It would seem that any serious theory addressing the mind-body problem must say something illuminating about the nature of psychophysical dependence, or about why, contrary to common belief, there is no such dependence. Consider the parallel cases: the ethical intuitionist will say that the supervenience, and the dependence, of the moral on the descriptive is a brute fact you discern through moral intuition; and the prescriptivist will attribute the supervenience to some form of consistency requirement on the language of evaluation and prescription. Distinct from both of these is Mereological supervenience, namely the supervenience of the properties of a whole on the properties and relations of its parts. What all this shows is that there is no single type of dependence relation common to all cases of supervenience: supervenience holds in different cases for different reasons, and does not represent a type of dependence that can be put alongside causal dependence, meaning dependence, Mereological dependence, and so forth.

There is, however, a promising strategy for turning the supervenience thesis into a more substantive theory of mind: to explicate mind-body supervenience as a special case of Mereological supervenience ~ that is, the dependence of the properties of a whole on the properties and relations characterizing its proper parts. Mereological dependence does seem to be a special form of dependence that is metaphysically sui generis and highly important. If one takes this approach, one would have to explain psychological properties as macroproperties of a whole organism that covary, in appropriate ways, with its microproperties, i.e., the way its constituent organs, tissues, and so forth are organized and function. This more specific supervenience thesis may be a serious theory of the mind-body relation that can compete with the classic options in the field.

On this topic, as with many topics in philosophy, there is a distinction to be made between (1) certain vague, partially inchoate, pre-theoretic ideas and beliefs about the matter at hand, and (2) certain more precise, more explicit doctrines or theses that are taken to articulate or explicate those pre-theoretic ideas and beliefs. There are various potential ways of precisifying our pre-theoretic conception of a physicalist or materialist account of mentality, and the question of how best to do so is itself a matter for ongoing, dialectical, philosophical inquiry.

The view concerns, in the first instance at least, the question of how we, as ordinary human beings, in fact go about ascribing beliefs to one another. The idea is that we do this on the basis of our knowledge of a common-sense theory of psychology. The theory is not held to consist in a collection of grandmotherly sayings, such as ‘once bitten, twice shy’. Rather, it consists in a body of generalizations relating psychological states to each other, to input from the environment, and to actions. Such generalizations include the following:

(1) (x)(p)(If x fears that p, then x desires that not-p.)

(2) (x)(p)(If x hopes that p and x discovers that p, then x is pleased that p.)

(3) (x)(p)(q)(If x believes that p and x believes that if p, then q, then, barring confusion, distraction and so forth, x believes that q.)

(4) (x)(p)(q)(If x desires that p and x believes that if q, then p, and x is able to bring it about that q, then, barring conflicting desires or preferred strategies, x brings it about that q.)

All of these generalizations should be understood as containing ceteris paribus clauses. (1), for example, applies most of the time, but not invariably. Adventurous types often enjoy the adrenal thrill produced by fear, and this leads them, on occasion, to desire the very state of affairs that frightens them. Analogously with (3): a subject who believes that ‘p’, and believes that if ‘p’, then ‘q’, would typically infer that ‘q’. But certain atypical circumstances may intervene: subjects may become confused or distracted, or they may find the prospect of ‘q’ so awful that they dare not allow themselves to believe it. The ceteris paribus nature of these generalizations is not usually considered to be problematic, since atypical circumstances are, of course, atypical, and the generalizations are applicable most of the time.

We apply this psychological theory to make inferences about people’s beliefs, desires and so forth. If, for example, we know that Julie believes that if she is to be at the airport at four, then she should get a taxi at half past two, and we know that she believes that she is to be at the airport at four, then we will predict, using (3), that Julie will infer that she should get a taxi at half past two.
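Generalization (3) is, in effect, a closure rule: a belief set is expanded by modus ponens over believed conditionals, unless the ceteris paribus clause is triggered. A minimal sketch in Python (my illustration, not anything in the source; the function name and the `confused` flag are invented for the example):

```python
def apply_generalization_3(beliefs, conditionals, confused=False):
    """Expand a belief set by modus ponens over believed conditionals,
    barring confusion, distraction and so forth (the ceteris paribus clause)."""
    if confused:
        return set(beliefs)  # atypical circumstances: no inference is drawn
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in conditionals:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# Julie believes she is to be at the airport at four, and believes that
# if so, she should get a taxi at half past two.
beliefs = {"at the airport at four"}
conditionals = [("at the airport at four", "get a taxi at half past two")]
print(apply_generalization_3(beliefs, conditionals))
```

Running the sketch adds the taxi belief to Julie's belief set, mirroring the prediction made via (3); passing `confused=True` models the escape clause, and no inference is drawn.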

The Theory-Theory, as it is called, is an empirical theory addressing the question of our actual knowledge of beliefs. Taken in its purest form, it addresses both first- and third-person knowledge: we know about our own beliefs and those of others in the same way, by application of the common-sense psychological theory in both cases. However, it is not very plausible to hold that we always ~ or, indeed, usually ~ know our own beliefs by way of theoretical inference. Since it is an empirical theory concerning one of our cognitive abilities, the Theory-Theory is open to psychological scrutiny. Various issues arise concerning the hypothesized common-sense psychological theory: we need to know, for instance, whether it is known consciously or unconsciously. Research has revealed that three-year-old children are reasonably good at inferring the beliefs of others on the basis of actions, and at predicting actions on the basis of beliefs that others are known to possess. However, there is one area in which three-year-olds’ psychological reasoning differs markedly from adults’. Tests of the sort known as ‘False Belief Tests’ reveal largely consistent results. Three-year-old subjects witness a scenario in which a child, Billy, sees his mother place some biscuits in a biscuit tin. Billy then goes out to play and, unseen by him, his mother removes the biscuits from the tin and places them in a jar, which is then hidden in a cupboard. When asked, ‘Where will Billy look for the biscuits?’, the majority of three-year-olds answer that Billy will look in the jar in the cupboard ~ where the biscuits actually are, rather than where Billy saw them being placed. On being asked, ‘Where does Billy think the biscuits are?’, they again tend to answer ‘in the cupboard’, rather than ‘in the tin’. Three-year-olds thus appear to have some difficulty attributing false beliefs to others in cases in which it would be natural for adults to do so.
It does not appear, however, that three-year-olds lack the idea of false belief in general, or that they have trouble attributing false beliefs in every kind of situation. For example, they have little trouble distinguishing between dreams and play, on the one hand, and true beliefs or claims on the other. By the age of about four and a half years, most children pass the False Belief Tests fairly consistently. There is as yet no generally accepted theory of why three-year-olds fare so badly with the False Belief Tests, nor of what this reveals about their conception of belief.

Recently some philosophers and psychologists have put forward what they take to be an alternative to the Theory-Theory: the view that we ascribe beliefs to others by simulating them ~ by putting ourselves, in imagination, in their position. However, the challenge does not end there. We need also to consider the vital element of making appropriate adjustments for differences between one’s own psychological states and those of the other. And it is implausible to think that, in every such case, simulation alone will achieve this.

The behavioural manifestations of beliefs, desires and intentions are enormously varied. When we move away from perceptual beliefs, the links with behaviour are intricate and indirect: the expectations I form on the basis of a particular belief reflect the influence of numerous other opinions; my actions are shaped by the totality of my preferences and all those opinions that have a bearing on them. The causal processes that produce my beliefs reflect my opinions about those processes, about their reliability and the interference to which they are subject. Thus, behaviour justifies the ascription of a particular belief only by helping to warrant a more comprehensive interpretation of the overall cognitive position of the individual in question. Psychological description, like translation, is a ‘holistic’ business. And once this is taken into account, it is all the less likely that a common physical trait will be found which grounds all instances of the same belief. The ways in which all of our propositional attitudes interact in the production of behaviour reinforce the anomalous character of the mental and render any sort of reduction of the mental to the physical impossible. This is not meant as a practical procedure; but the generalization of the point, so that interpretation and not merely translation is at issue, has made this notion central to accounts of the mind.

The Simulation Theory and the Theory-Theory are two, as many think competing, views of the nature of our common-sense, propositional attitude explanations of action. For example, when we say that our neighbour cut down his apple tree because he believed that it was ruining his patio and did not want it ruined, we are offering a typically common-sense explanation of his action in terms of his beliefs and desires. But, even though wholly familiar, it is not clear what kind of explanation is at issue. On one view, the attribution of beliefs and desires is taken as the application to actions of a theory that, in its informal way, functions very much like theoretical explanations in science. This is known as the ‘theory-theory’ of everyday psychological explanation. In contrast, it has been argued that our propositional attitude attributions are not theoretical claims so much as reports of a kind of ‘simulation’. On such a ‘simulation theory’ of the matter, we decide what our neighbour will do (and thereby why he did what he did) by imagining ourselves in his position and deciding what we would do.

The Simulation Theorist should probably concede that simulations need to be backed up by independent means of discovering the psychological states of others. But they need not concede that these independent means take the form of a theory. Rather, they might suggest that we can get by with some rules of thumb, or straightforward inductive reasoning of a general kind.

A second and related difficulty with the Simulation Theory concerns our capacity to attribute beliefs that are too alien to be easily simulated: beliefs of small children, or of psychotics, or bizarre beliefs sustained by deep and unknown unconscious suppression. The small child refuses to sleep in the dark: he is afraid that the Wicked Witch of the North will steal him away. No matter how many adjustments we make, it may be hard for mature adults to get their own psychological processes, even in pretend play, to mimic the production of such beliefs. For the Theory-Theory, alien beliefs are not particularly problematic: so long as they fit into the basic generalizations of the theory, they will be inferable from the evidence. Thus, the Theory-Theory can account better than the Simulation Theory for our ability to discover bizarre and alien beliefs.

The Theory-Theory and the Simulation Theory are not the only proposals about knowledge of belief. A third view has its origins in the work of the Austrian-born philosopher Ludwig Wittgenstein (1889-1951). On this view both the Theory-Theory and the Simulation Theory attribute too much psychologizing to our common-sense psychology. Knowledge of other minds is, according to this alternative picture, more observational in nature. Beliefs, desires and feelings are made manifest to us in the speech and other actions of those with whom we share a language and way of life. When someone says, ‘It’s going to rain’, and takes his umbrella from his bag, it is immediately clear to us that he believes it is going to rain. In coming to know this we neither indulge in speculative theorizing nor attempt to simulate: we simply perceive. Of course, this is not straightforward visual perception of the sort that we use to see the umbrella. But it is like visual perception in that it provides immediate and non-inferential awareness of its objects. We might call this the ‘Observational Theory’.

The Observational Theory does not seem to accord very well with the fact that we frequently do have to indulge in a fair amount of psychologizing to find out what others believe. It is clear that any given action might be the upshot of any number of different psychological attitudes. This applies even in the simplest cases. The agent might, for example, take the umbrella from his bag because his friend is suspended from a hydrogen balloon near a beehive, with the intention of stealing honey: the idea being to make the bees believe that it is going to rain, take the balloon for a rain cloud, pay no attention to it, and so fail to notice the dangling friend. Given this sort of possibility, the observer would surely be rash immediately to judge that the agent believes that it is going to rain. Rather, they would need to determine ~ perhaps by theory, perhaps by simulation ~ which of the various clusters of mental states that might have led to the action actually did so. This would involve bringing in further knowledge of the agent, the background circumstances and so forth. It is hard to see how the complex mental processes involved in this sort of psychological reflection could be assimilated to any kind of observation.

The attributions of intentionality that depend on optimality or rationality are interpretations of the phenomena ~ a ‘heuristic overlay’ (1969), describing an inescapably idealized ‘real pattern’. Like such abstractions as centres of gravity and parallelograms of force, the beliefs and desires posited by the intentional stance have no independent and concrete existence; and since this is the case, there would be no deeper facts that could settle the issue if ~ most importantly ~ rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity. Willard Van Orman Quine (1908-2000), the most influential American philosopher of the latter half of the 20th century, advanced a thesis of the indeterminacy of radical translation that carries over into the thesis of the indeterminacy of radical interpretation of mental states and processes.

The fact that cases of radical indeterminacy, though possible in principle, are vanishingly unlikely ever to confront us offers little comfort: the idea remains deeply counter-intuitive to many philosophers, who have hankered for more ‘realistic’ doctrines. There are two different strands of ‘realism’ that this view attempts to undermine:

(1) Realism about the entities purportedly described by our everyday mentalistic discourse ~ what I have dubbed folk-psychology ~ such as beliefs, desires, pains, the self.

(2) Realism about content itself ~ the idea that there have to be events or entities that really have intentionality (as opposed to events and entities that only behave as if they had intentionality).

As to the tenet indicated by (1): there is no fact of the matter about what fatigue is, or about which bodily states or events fatigues are identical with, and so forth. This is a confusion that calls for diplomacy, not philosophical discovery: the choice between an ‘eliminative materialism’ and an ‘identity theory’ of fatigues is not a matter of which ‘ism’ is right, but of which way of speaking is most apt to wean us of these misbegotten features of our conceptual scheme.

As to tenet (2), the attack has been more indirect. Some philosophers see the demand for content realism as an instance of a common philosophical mistake: philosophers often manoeuvre themselves into a position from which they can see only two alternatives ~ an infinite regress versus some sort of ‘intrinsic’ foundation, a prime mover of one sort or another. For instance, it has seemed obvious that for some things to be valuable as means, other things must be intrinsically valuable ~ ends in themselves ~ otherwise we would be stuck with a vicious regress of things valuable only as means. Similarly, it has seemed obvious that although some intentionality is ‘derived’ (the ‘aboutness’ of the pencil marks composing a shopping list is derived from the intentions of the person whose list it is), unless some intentionality is ‘original’ and underived, there could be no derived intentionality.

There is always another alternative, however: a finite regress that peters out without marked foundations or thresholds or essences. Consider a threatened paradox: every mammal has a mammal for a mother ~ but this implies an infinite genealogy of mammals, which cannot be the case. The solution is not to search for an essence of mammalhood that would permit us in principle to identify the Prime Mammal, but rather to tolerate a finite regress that connects mammals to their non-mammalian ancestors by a sequence that can only be partitioned arbitrarily. The reality of today’s mammals is secure without foundations.

The best instance of this theme is the idea that the way to explain the miraculous-seeming powers of an intelligent intentional system is to decompose it into hierarchically structured teams of ever more stupid intentional systems, ultimately discharging all intelligence-debts in a fabric of stupid mechanisms. Lycan (1981) has called this view ‘homuncular functionalism’. One may be tempted to ask: are the subpersonal components ‘real’ intentional systems? At what point in the diminution of prowess, as we descend to simple neurons, does ‘real’ intentionality disappear? Don’t ask. The reasons for regarding an individual neuron (or a thermostat) as an intentional system are unimpressive, but not zero, and the security of our intentional attributions at the highest levels does not depend on identifying a lowest level of real intentionality. Another exploitation of the same idea is found in Elbow Room (1984): at what point in evolutionary history did real reason-appreciators, real selves, make their appearance? Don’t ask. Here is yet another, more fundamental version: at what point in the early days of evolution can we speak of genuine function, genuine selection-for, and not mere inadvertent preservation of entities that happen to have some self-replicative capacity? Don’t ask. Many of the more interesting and important features of our world have emerged, gradually, from a world that initially lacked them ~ function, intentionality, consciousness, morality, value ~ and it is a fool’s errand to try to identify a first instance, except as history’s most slowly unwinding unintended reductio ad absurdum.
Mostly, the disagreements explored in that literature cannot even be given an initial expression unless one assumes strong realism about content, and its constant companion, the idea of a ‘language of thought’: a system of mental representation that is decomposable into elements rather like terms, and larger elements rather like sentences. The illusion that this is plausible, or even inevitable, is fostered by the philosophers’ normal tactic of working from examples of ‘believing-that-p’ that focus attention on mental states that are directly or indirectly language-infected, such as believing that the shortest spy is a spy, or believing that snow is white. (Do polar bears believe that snow is white? In the way we do?) There are such states ~ in language-using human beings ~ but they are not exemplary or foundational states of belief; needing a term for them, we may call them ‘opinions’. Opinions play a large, perhaps even decisive, role in our concept of a person, but they are not paradigms of the sort of cognitive element to which one can assign content in the first instance. If one starts, as one should, with the cognitive states and events occurring in non-human animals, and uses these as the foundation on which to build theories of human cognition, the language-infected states are more readily seen to be derived, less directly implicated in the explanation of behaviour, and the chief but illicit source of plausibility of the doctrine of a language of thought. Postulating a language of thought is in any event a postponement of the central problems of content ascription, not a necessary first step.

We turn now to causal theories in epistemology: what makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to acquire the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some proposed causal criteria for knowledge and justification are worth considering.

Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right sort of causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations: this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization. Proponents of this sort of criterion have accordingly usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.

For example, the forthright Australian materialist David Malet Armstrong (1973) proposed that a belief of the form ‘this (perceived) object is F’ is non-inferential knowledge if and only if the belief is a completely reliable sign that the perceived object is ‘F’: that is, the fact that the object is ‘F’ contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘x’ and perceived object ‘y’, if ‘x’ has those properties and believes that ‘y’ is ‘F’, then ‘y’ is ‘F’. Dretske (1981) offers a rather similar account in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is ‘F’.

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief’s being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise ~ to think, say, that tinted things look brownish to you and that brownish things look tinted. If you fail to heed these reasons you have for thinking that your colour perception is awry, and believe of a thing that looks tinted to you that it is tinted, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing’s being tinted in such a way as to be a completely reliable sign (or to carry the information) that the thing is tinted.

One could fend off this sort of counter-example by simply adding to the causal condition the requirement that the belief be justified. But this enriched condition would still be insufficient. Suppose, for example, that in an experiment you are given a drug that in nearly all people (but not in you, as it happens) causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug, but then says, ‘No, wait a minute, the pill you took was just a placebo’. Suppose further that this last thing the experimenter told you is false. Her telling you this gives you justification for believing of a thing that looks tinted to you that it is tinted, but the fact about this justification that is unknown to you (that the experimenter’s last statement was false) makes it the case that your true belief is not knowledge, even though it satisfies Armstrong’s causal condition.

Goldman (1986) has proposed an importantly different sort of causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is both ‘globally’ and ‘locally’ reliable. Global reliability is a matter of the process’s propensity to cause true beliefs being sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counter-factual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.

Goldman requires the global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counter-factual situation in which it is false.

The theory of relevant alternatives is best understood as an attempt to accommodate two opposing strands in our thinking about knowledge. The first is that knowledge is an absolute concept. On one interpretation, this means that the justification or evidence one must have in order to know a proposition ‘p’ must be sufficient to eliminate all the alternatives to ‘p’ (where an alternative to a proposition ‘p’ is a proposition incompatible with ‘p’).

The second strand is that we know many things. The theory reconciles the two: knowledge requires only the elimination of the relevant alternatives. So the relevant alternatives view preserves both strands in our thinking about knowledge. Knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.

The relevant alternatives account of knowledge can be motivated by noting that other concepts exhibit the same logical structure. Two examples of this are the concept ‘flat’ and the concept ‘empty’. Both appear to be absolute concepts ~ a space is empty only if it does not contain anything, and a surface is flat only if it does not have any bumps. However, the absolute character of these concepts is relative to a standard. In the case of ‘flat’, there is a standard for what counts as a bump, and in the case of ‘empty’, there is a standard for what counts as a thing. We would not deny that a table is flat because a microscope reveals irregularities in its surface. Nor would we deny that a warehouse is empty because it contains particles of dust. To be flat is to be free of any relevant bumps. To be empty is to be devoid of all relevant things. Analogously, the relevant alternatives theory says that to know a proposition is to have evidence that eliminates all relevant alternatives.

Some philosophers have argued that the relevant alternatives theory of knowledge entails the falsity of the principle that the set of propositions known by ‘S’ is closed under known (by ‘S’) entailment, although others have disputed this. The principle affirms the following conditional, the ‘closure principle’:

If ‘S’ knows ‘p’ and ‘S’ knows that ‘p’ entails ‘q’, then ‘S’ knows ‘q’.

According to the theory of relevant alternatives, we can know a proposition ‘p’ without knowing that some (non-relevant) alternative to ‘p’ is false. But since an alternative ‘h’ to ‘p’ is incompatible with ‘p’, ‘p’ will trivially entail not-h. So it will be possible to know some proposition without knowing another proposition trivially entailed by it. For example, we can know that we see a zebra without knowing that it is not the case that we see a cleverly disguised mule (on the assumption that ‘we see a cleverly disguised mule’ is not a relevant alternative). This will involve a violation of the closure principle. This is an interesting consequence of the theory, because the closure principle seems to many to be quite intuitive. In fact, we can view sceptical arguments as employing the closure principle as a premise, along with the premise that we do not know that the alternatives raised by the sceptic are false. From these two premisses it follows (on the assumption that we see that the propositions we believe entail the falsity of the sceptical alternatives) that we do not know the propositions we believe. For example, it follows from the closure principle and the fact that we do not know that we do not see a cleverly disguised mule, that we do not know that we see a zebra. We can view the relevant alternatives theory as replying to the sceptical arguments by denying the closure principle.
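The way the zebra case violates closure can be made vivid with a toy model (my own construction, not the author's): knowledge is modelled as evidence eliminating every relevant alternative, where which alternatives count as relevant varies with the proposition in question.

```python
def knows(evidence_eliminates, relevant_alternatives):
    """S knows a proposition iff S's evidence eliminates every
    RELEVANT alternative to it (non-relevant ones may go unchecked)."""
    return all(evidence_eliminates(alt) for alt in relevant_alternatives)

# Ordinary perceptual evidence rules out mundane alternatives,
# but not the sceptic's exotic one.
def evidence_eliminates(alt):
    return alt in {"it is a horse", "it is a painted donkey"}

# For 'we see a zebra', the disguised-mule alternative is deemed not relevant.
knows_zebra = knows(evidence_eliminates,
                    ["it is a horse", "it is a painted donkey"])

# For 'we do not see a cleverly disguised mule', the mule alternative
# is precisely what must be eliminated ~ and the evidence cannot do it.
knows_not_mule = knows(evidence_eliminates,
                       ["it is a cleverly disguised mule"])

print(knows_zebra, knows_not_mule)  # True False: closure fails
```

The first proposition is known while a proposition it trivially entails is not, which is exactly the structure of the closure violation described above.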

What makes an alternative relevant? What standard do the alternatives raised by the sceptic fail to meet? These questions are notoriously difficult to answer with any degree of precision or generality. This difficulty has led critics to dismiss the theory as hopelessly obscure. The problem can be illustrated through an example. Suppose Smith sees a barn and believes that he does, on the basis of very good perceptual evidence. When is the alternative that Smith sees a papier-mâché replica relevant? If there are many such replicas in the immediate area, then this alternative is relevant. In these circumstances, Smith fails to know that he sees a barn unless he knows that it is not the case that he sees a barn replica. Where no such replicas exist, this alternative will not be relevant: Smith can know that he sees a barn without knowing that he does not see a barn replica.

This suggests that a criterion of relevance is something like high probability conditional on Smith’s evidence and certain features of the circumstances. But which circumstances in particular do we count? Consider a case where we want the result that the barn-replica alternative is clearly relevant, e.g., a case where the circumstances are such that there are numerous barn replicas in the area. Does the suggested criterion give us the result we want? The probability that Smith sees a barn replica, given his evidence and his location in an area where there are many barn replicas, is high. However, that same probability conditional on his evidence and his particular visual orientation toward a real barn is quite low. We want the probability to be conditional on features of the circumstances like the former but not on features of the circumstances like the latter. But how do we capture the difference in a general formulation?

How significant a problem is this for the theory of relevant alternatives? This depends on how we construe the theory. If the theory is supposed to provide us with an analysis of knowledge, then the lack of precise criteria of relevance surely constitutes a serious problem. However, if the theory is viewed instead as providing a response to sceptical arguments, it can be argued that the difficulty has little significance for the overall success of the theory.

What justifies the acceptance of a theory? Although particular versions of empiricism have met many criticisms, its enduring power to attract encourages us to look for an answer in some sort of empiricist terms: in terms, that is, of support by the available evidence. How else could the objectivity of science be defended except by showing that its conclusions (and in particular its theoretical conclusions ~ the theories it presently accepts) are somehow legitimately based on agreed observational and experimental evidence? But, as is well known, theories in general pose a problem for empiricism.

Allow the empiricist the assumption that there are observational statements whose truth-values can be inter-subjectively agreed, and consider the exploratory, non-demonstrative use of experiment in contemporary science. Philosophers have tended to identify experiments with their observed results, and these with the testing of theory. They assume that observation provides an open window for the mind onto a world of natural facts and regularities, and that the main problem for the scientist is to establish the uniqueness or independence of a theoretical interpretation. Experiments merely enable the production of (true) observation statements; shared, replicable observations are the basis for scientific consensus about an objective reality. Yet it is clear that most scientific claims are genuinely theoretical: neither themselves observational nor derivable deductively from observation statements (nor from inductive generalizations thereof). Accepting that there are phenomena that we have more or less direct access to, theories seem, at least when taken literally, to tell us what is going on ‘underneath’ the observable, directly accessible phenomena in order to produce those phenomena. The accounts given by such theories of this trans-empirical reality, simply because it is trans-empirical, can never be established by data, nor even by the ‘natural’ inductive generalizations of our data. No amount of evidence about tracks in cloud chambers and the like can deductively establish that those tracks are produced by ‘trans-observational’ electrons.

One response would, of course, be to invoke some strict empiricist account of meaning, insisting that talk of electrons and the like is in fact just shorthand for talk of tracks in cloud chambers and the like. This account, however, has few, if any, current defenders. But if so, the empiricist must acknowledge that, if we take any presently accepted theory, there must be alternatives ~ different theories (indefinitely many of them) ~ which treat the evidence equally well, assuming that the only evidential criterion is the entailment of the correct observational results.

All the same, there is an easy general result as well: assuming that a theory is any deductively closed set of sentences; assuming, with the empiricist, that the language in which these sentences are expressed has two sorts of predicates (observational and theoretical); and, finally, assuming that the entailment of the evidence is the only constraint on empirical adequacy ~ then there are always indefinitely many different theories that are equally empirically adequate. Consider the restriction of a theory ‘T’ to quantifier-free sentences expressed purely in the observational vocabulary; any conservative extension of that restricted set of T’s consequences back into the full vocabulary is a ‘theory’ co-empirically adequate with ~ entailing the same singular observational statements as ~ ‘T’. Unless very special conditions apply (conditions that do not apply to any real scientific theory), some of the empirically equivalent theories will formally contradict ‘T’. (A similar straightforward demonstration works for the currently more fashionable account of theories as sets of models.)
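The construction just sketched can be made vivid with a deliberately tiny propositional model. The following Python sketch is an invented illustration (the ‘track’/‘electron’ atoms and the two toy theories are my own assumptions, not part of the text): two theories that flatly contradict each other over the theoretical vocabulary nonetheless entail exactly the same observational statements.

```python
from itertools import product

# Toy propositional illustration of empirically equivalent but
# contradictory theories. The atoms and "theories" below are an
# invented example: one observational atom ("track") and one
# theoretical atom ("electron").
ATOMS = ["track", "electron"]
OBSERVATIONAL = {"track"}

def models(constraint):
    """All truth-value assignments over ATOMS satisfying `constraint`."""
    return [dict(zip(ATOMS, vals))
            for vals in product([True, False], repeat=len(ATOMS))
            if constraint(dict(zip(ATOMS, vals)))]

def observational_content(theory_models):
    """A theory's observational 'shadow': its models restricted to
    the observational vocabulary."""
    return {tuple(sorted((a, m[a]) for a in OBSERVATIONAL))
            for m in theory_models}

# T1: there are electrons, and there are tracks.
T1 = models(lambda m: m["electron"] and m["track"])
# T2: there are tracks, but no electrons.
T2 = models(lambda m: m["track"] and not m["electron"])

# The theories contradict one another outright (no shared model)...
assert not any(m in T2 for m in T1)
# ...yet they are empirically equivalent: identical observational content.
assert observational_content(T1) == observational_content(T2)
```

With richer vocabularies, the same restriction-and-extension recipe yields indefinitely many such empirically equivalent rivals.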

How can an empiricist, who rejects the claim that two empirically equivalent theories are thereby fully equivalent, explain why the particular theory ‘T’ that is, as a matter of fact, accepted in science is preferred to these other possible theories with the same observational content? Obviously the answer must be: by bringing in further criteria beyond that of simply having the right observational consequences. Simplicity, coherence with other accepted theories, and unity are favourite contenders. There are notorious problems in formulating these criteria precisely; but suppose, for present purposes, that we have a sufficiently firm intuitive grasp of them to operate usefully with them. What is the status of such further criteria?

The empiricist-instrumentalist position, most recently adopted and sharply argued by van Fraassen, is that these further criteria are ‘pragmatic’ ~ that is, they involve essential reference to ‘us’ as ‘theory-users’. We happen to prefer, for our own purposes, simple, coherent, unified theories ~ but this is only a reflection of our preferences. It would be a mistake to think of those features as supplying extra reasons to believe in the truth (or approximate truth) of the theory that has them. Van Fraassen’s account differs from some standard instrumentalist-empiricist accounts in recognizing the extra content of a theory (beyond its directly observational content) as genuinely declarative, as consisting of true-or-false assertions about the hidden structure of the world. His account accepts that the extra content can neither be eliminated by defining theoretical notions in observational terms, nor be properly regarded as only apparently declarative but in fact a mere codification scheme. For van Fraassen, if a theory says that there are electrons, then the theory should be taken as meaning what it says ~ and this without any positivist reinterpretation of the meaning that might make ‘There are electrons’ mere shorthand for some complicated set of statements about tracks in cloud chambers or the like.

Consider contradictory but empirically equivalent theories, such as the theory T1 that ‘there are electrons’ and the theory T2 that ‘all the observable phenomena are as if there are electrons, but there are none’. Van Fraassen’s account entails that each has a truth-value, at most one of them being true. Science may accept T1 rather than T2, but this need not mean that it is rational to believe that T1 is more likely to be true (or otherwise appropriately connected with nature). So far as belief is concerned, accepting T1 commits us to no more than accepting T2 would. The only belief involved in the acceptance of a theory is belief in the theory’s empirical adequacy. To accept the quantum theory, for example, entails believing that it ‘saves the phenomena’ ~ all the (relevant) phenomena, but only the phenomena. Theories do ‘say more’ than can be checked empirically even in principle. What more they say may indeed be true, but acceptance of the theory does not involve belief in the truth of the ‘more’ that theories say.

Preferences between theories that are empirically equivalent are accounted for because acceptance involves more than belief: as well as this epistemic dimension, acceptance also has a pragmatic dimension. Simplicity, (relative) freedom from ad hoc assumptions, ‘unity’, and the like are genuine virtues that can supply good reasons to accept one theory rather than another; but they are pragmatic virtues, reflecting the way we happen to like to do science, rather than anything about the world. It is a mistake to think that they supply reasons for belief: the rationality of science and of scientific practice can be defended without belief in the truth (or approximate truth) of accepted theories. Van Fraassen’s account conflicts with what many others see as very strong intuitions.

A different internalism/externalism distinction arises in epistemology. The most generally accepted account of this distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer’s cognitive perspective, beyond his ken.

The externalism/internalism distinction has been mainly applied to theories of epistemic justification; it has also been applied in a closely related way to accounts of knowledge, and in a rather different way to accounts of belief and thought content. The internalist requirement of cognitive accessibility can be interpreted in at least two ways: a strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focussing his attention appropriately, without the need for any change of position, new information, and so forth. Though the phrase ‘cognitively accessible’ suggests the weak interpretation, the intuitive motivation for internalism ~ the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true ~ would seem to require the strong interpretation.

Perhaps the clearest example of an internalist position would be a ‘foundationalist’ view according to which foundational beliefs concern immediately experienced states of mind, and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements, or only the capacity to become aware of them, is required. Similarly, a ‘coherentist’ view could also be internalist, if both the beliefs or other states with which a justified belief is required to cohere and the coherence relations themselves are reflectively accessible.

It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (on the strong version) or even possible (on the weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of ‘reliabilism’, whose main requirement for justification is roughly that the belief be produced in a way, or via a process, that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

Two general lines of argument are commonly advanced in favour of justificatory externalism. The first starts from the allegedly common-sensical premise that knowledge can be unproblematically ascribed to relatively unsophisticated adults, to young children, and even to higher animals. It is then argued that such ascriptions would be untenable on the standard internalist accounts of epistemic justification (assuming that epistemic justification is a necessary condition for knowledge), since the beliefs and inferences involved in such accounts are too complicated and sophisticated to be plausibly ascribed to such subjects. Thus only an externalist view can make sense of such common-sense ascriptions, and this, on the presumption that common sense is correct, constitutes a strong argument in favour of externalism. An internalist may respond by challenging the initial premise, arguing that such ascriptions of knowledge are exaggerated, while perhaps at the same time claiming that the cognitive situation of at least some of the subjects in question is less impoverished than the argument claims. A quite different response would be to reject the assumption that epistemic justification is a necessary condition for knowledge, perhaps by adopting an externalist account of knowledge rather than of justification.

The second general line of argument for externalism points out that internalist views have conspicuously failed to provide defensible, non-sceptical solutions to the classical problems of epistemology. In striking contrast, such problems are in general easily solvable on an externalist view. Thus, if we assume both that the various relevant forms of scepticism are false and that the failure of internalist views so far is unlikely to be remedied in the future, we have good reason to think that some externalist view is true. Obviously the cogency of this argument depends on the plausibility of the two assumptions just noted. An internalist can reply, first, that it is not obvious that internalist epistemology is doomed to failure; the explanation for the present lack of success may be the extreme difficulty of the problems in question. Secondly, it can be argued that most or even all of the appeal of the assumption that the various forms of scepticism are false depends on the intuitive conviction that we have within our grasp reasons for thinking that the various beliefs questioned by the sceptic are true ~ a conviction that the proponent of this argument must of course reject.

The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true or, at the very least, that such a reason be available to him. Since the satisfaction of the externalist conditions is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-example to externalism. The first of these challenges the necessity of the externalist conditions for justification by appealing to examples of beliefs that seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples are cases where beliefs are produced in some very non-standard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. Cases of this general sort can be constructed in which any of the standard externalist conditions, e.g., that the belief be the result of a reliable process, fail to be satisfied. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.

Perhaps the most interesting reply to this sort of counter-example, on behalf of reliabilism specifically, holds that the reliability of a cognitive process is to be assessed in ‘normal’ possible worlds, i.e., in possible worlds that are the way our world is common-sensically believed to be, rather than in the world that actually contains the belief being judged. Since the cognitive processes employed in the Cartesian demon case are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious further issue is whether there is an adequate rationale for this construal of reliabilism, so that the reply is not merely ad hoc.

The second, correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. Here the most widely discussed examples have to do with possible occult cognitive capacities like clairvoyance. Applying the point once again to reliabilism specifically, the claim is that a reliable clairvoyant who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and hence not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter objection is to ‘bite the bullet’ and insist that such beliefs are in fact justified, dismissing the seeming intuitions to the contrary as lingering internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a more or less internalist sort, which will rule out the offending examples while still stopping far short of full internalism. But while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, the issue is whether there will always be equally problematic cases that they cannot handle, and whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to rejecting.

One view in this same general vein, which might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, this further fact need not be in any way grasped by, or cognitively accessible to, the believer. In effect, of the two premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the second can be (and will normally be) purely external. Here the internalist will respond that this hybrid view is of no help at all in meeting the objection that the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one that may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view obviously has to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief that satisfies the chosen externalist condition, e.g., is the result of a reliable process (and perhaps further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept in epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the common-sense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction even exists) that such individuals are epistemically justified in their beliefs. It is also less vulnerable to internalist counter-examples of the sort just discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms ‘internalism’ and ‘externalism’ has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual’s mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors. Here too, a view that appeals to both internal and external elements is standardly classified as an externalist view.

As with justification and knowledge, the traditional view of content has been of a strongly internalist character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena concerning natural-kind terms, indexicals, and so forth, that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem at least to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment ~ e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc. ~ not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors ~ knowledge that will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

To have a word or a picture, or any other object, in one’s mind seems to be one thing, but to understand it is quite another. A major target of the later Ludwig Wittgenstein (1889-1951) is the suggestion that this understanding is achieved by a further presence, so that words might be understood if they are accompanied by ideas, for example. Wittgenstein insists that the extra presence merely raises the same kind of problem again. The better suggestion is that understanding is to be thought of as possession of a technique, or skill; this is the point of the slogan that ‘meaning is use’. The idea is congenial to ‘pragmatism’ and hostile to ineffable and incommunicable understandings.

Meaning is whatever it is that makes what would otherwise be mere sounds and inscriptions into instruments of communication and understanding. The philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world. Contributions to this study include the theory of speech acts and the investigation of communication, the relationship between words and ideas, and that between words and the world.

The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by the German mathematician and philosopher of mathematics Gottlob Frege (1848-1925), was developed in a distinctive way by the early Wittgenstein, and is a leading idea of the American philosopher Donald Herbert Davidson (1917-2003). The conception has remained so central that those who offer opposing theories characteristically define their positions by reference to it.

The conception of meaning as truth-conditions need not and should not be advanced as in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions. It is this claim, and its attendant problems, that will be the concern of what follows.

The meaning of a complex expression is a function of the meanings of its constituents. This is indeed just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms ~ proper names, indexicals, and certain pronouns ~ this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates. For an extremely simple, but nevertheless structured, language, the contributions various expressions make to truth-conditions can be stated as follows:

A1: The referent of ‘London’ is London.

A2: The referent of ‘Paris’ is Paris.

A3: Any sentence of the form ‘a is beautiful’ is true if and only if the referent of ‘a’ is beautiful.

A4: Any sentence of the form ‘a is larger than b’ is true if and only if the referent of ‘a’ is larger than the referent of ‘b’.

A5: Any sentence of the form ‘It is not the case that A’ is true if and only if it is not the case that ‘A’ is true.

A6: Any sentence of the form ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true.

The principles A1-A6 form a simple theory of truth for a fragment of English. In this theory it is possible to derive these consequences: that ‘Paris is beautiful’ is true if and only if Paris is beautiful (from A2 and A3); that ‘London is larger than Paris and it is not the case that London is beautiful’ is true if and only if London is larger than Paris and it is not the case that London is beautiful (from A1-A5); and, in general, for any sentence ‘A’ of this simple language, something of the form ‘‘A’ is true if and only if A’.
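The derivations just described are mechanical enough to be mirrored by a small recursive evaluator, with one clause per axiom. The Python sketch below is purely illustrative: the representation of sentences as nested tuples and the stipulated toy ‘world’ are my own assumptions, not part of the theory.

```python
# A stipulated toy model for the fragment: referents of names (A1, A2)
# and extensions for the predicates. The "world" here is invented for
# illustration, not a claim about the actual cities.
REFERENT = {"London": "London", "Paris": "Paris"}   # A1, A2
BEAUTIFUL = {"Paris"}                               # extension of 'is beautiful'
LARGER_THAN = {("London", "Paris")}                 # extension of 'is larger than'

def true_in_model(sentence):
    """Compute a truth-value recursively, one clause per axiom A3-A6."""
    kind = sentence[0]
    if kind == "beautiful":                         # A3
        return REFERENT[sentence[1]] in BEAUTIFUL
    if kind == "larger":                            # A4
        return (REFERENT[sentence[1]], REFERENT[sentence[2]]) in LARGER_THAN
    if kind == "not":                               # A5
        return not true_in_model(sentence[1])
    if kind == "and":                               # A6
        return true_in_model(sentence[1]) and true_in_model(sentence[2])
    raise ValueError(f"unknown sentence form: {kind!r}")

# 'London is larger than Paris and it is not the case that London is beautiful'
s = ("and", ("larger", "London", "Paris"), ("not", ("beautiful", "London")))
```

Each derivation in the truth theory corresponds to one recursive unfolding of the evaluator, which is why every sentence of the fragment receives a determinate truth-condition.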

Yet theorists of truth-conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. Consider the axiom: ‘London’ refers to the city in which there was a huge fire in 1666.

This is a true statement about the reference of ‘London’. It is a consequence of a theory that substitutes this axiom for A1 in our simple truth theory that ‘London is beautiful’ is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name ‘London’ without knowing that last-mentioned truth-condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth-conditions to state the constraints on the acceptability of axioms in a way that does not presuppose any prior, truth-conditional conception of meaning.

Among the many challenges facing the theorist of truth-conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity. Second, the theorist must offer an account of what it is for a person’s language to be truly describable by a semantic theory containing a given semantic axiom.

Let us take the charge of triviality first. In more detail, it would run thus: since the content of a claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than the grasp of truth-conditions must provide the substantive account. The charge rests upon what has been called the ‘redundancy theory of truth’ ~ also known as ‘minimalism’, or the ‘deflationary’ view of truth ~ which began with Gottlob Frege and the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30). The essential claim is that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points that ‘it is true that p’ says no more nor less than ‘p’ (hence ‘redundancy’), and that in less direct contexts, such as ‘everything he said was true’ or ‘all logical consequences of truths are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said or the kinds of proposition that follow from true propositions. Thus ‘all logical consequences of truths are true’ can be rendered ‘(∀p)(∀q)((p & (p → q)) → q)’, where there is no use of a notion of truth.
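The deflationist’s point that the truth-predicate is a generalizing device, rather than the name of a substantive property, can be given a concrete gloss. In the Python sketch below (an invented illustration: modelling claims as zero-argument callables is my own assumption), ‘everything he said was true’ comes out as a quantification over the claims, and the same generalization can be stated without any truth-predicate at all.

```python
# Deflationary picture: calling a claim true is just asserting it.
# Claims are modelled as zero-argument callables returning a bool.
claims_he_made = [
    lambda: 2 + 2 == 4,
    lambda: "Paris" == "Paris",
]

def is_true(claim):
    """Disquotational 'truth': to call a claim true is simply to assert it."""
    return claim()

# 'Everything he said was true' is a generalization over the claims,
# not the ascription of a substantive property:
everything_he_said_was_true = all(is_true(c) for c in claims_he_made)

# The same generalization stated without the truth-predicate at all:
everything_he_said_holds = all(c() for c in claims_he_made)
```

The two generalizations are trivially equivalent, which is exactly the redundancy theorist’s point: the predicate adds expressive convenience, not content.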

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’ or ‘truth is a norm governing discourse’. Postmodernist writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth; but perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that ‘p’, then ‘p’; discourse is to be regulated by the principle that it is wrong to assert ‘p’ when not-p.

The minimal theory states that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition ‘p’, it is true that ‘p’ if and only if ‘p’. Many different philosophical theories of truth accept the equivalence principle; the distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is widely accepted, both by opponents and by supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning: if the claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth-conditions. The minimal theory of truth has been endorsed by Ramsey, Ayer, the later Wittgenstein, Quine, Strawson, and Horwich ~ and, confusingly and inconsistently, by Frege himself.

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence. But in fact it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as

‘London is beautiful’ is true if and only if

London is beautiful

can be explained are precisely the axioms A1 and A3. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth condition it does. But that is very implausible: it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’. The idea that facts about the reference of particular words can be explanatory of facts about the truth conditions of sentences containing them in no way requires any naturalistic or any other kind of reduction of the notion of reference. Nor is the idea incompatible with the plausible point that singular reference can be attributed at all only to something that is capable of combining with other expressions to form complete sentences. That still leaves room for facts about an expression’s having the particular reference it does to be partially explanatory of the particular truth condition possessed by a given sentence containing it. The minimal theory thus treats as definitional or stipulative something that is in fact open to explanation. What makes this explanation possible is that there is a general notion of truth that has, among the many links that hold it in place, systematic connections with the semantic values of subsentential expressions.
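To make the explanatory direction vivid, here is a minimal sketch ~ with hypothetical names and tables, not the text’s actual axioms A1 and A3 ~ of how subsentential facts about a name’s reference and a predicate’s extension jointly fix a sentence’s truth condition, rather than being stipulated from it:

```python
# Toy sketch (hypothetical axioms): facts about the reference of words
# partially explain the truth conditions of sentences containing them.

# An A1-style axiom: the reference of a name, modelled as a lookup table.
reference = {"London": "London", "Paris": "Paris"}

# An A3-style axiom: the extension of the predicate 'is beautiful'.
beautiful = {"London", "Paris"}

def true_in_model(name, predicate_extension):
    """A sentence 'N is beautiful' is true iff the referent of 'N'
    falls in the extension of 'is beautiful'."""
    return reference[name] in predicate_extension

# The truth condition of 'London is beautiful' is thereby explained by
# the subsentential axioms, not treated as definitional:
assert true_in_model("London", beautiful) is True
```

Note that the axiom for ‘London’ can be stated, and understood, without any mention of the predicate ‘is beautiful’, matching the point in the text.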

A second problem with the minimal theory is that it seems impossible to formulate it without at some point relying implicitly on features and principles involving truth that go beyond anything countenanced by the minimal theory. If the minimal theory treats truth as a predicate of anything linguistic ~ be it utterances, types-in-a-language, or whatever ~ then the equivalence schemata will not cover all cases, but only those in the theorist’s own language. Some account has to be given of truth for sentences of other languages. Speaking of the truth of language-independent propositions or thoughts will only postpone, not avoid, this issue, since at some point principles have to be stated associating these language-independent entities with sentences of particular languages. The defender of the minimal theory is likely to say that if a sentence ‘S’ of a foreign language is best translated by our sentence ‘p’, then the foreign sentence ‘S’ is true if and only if ‘p’. Now the best translation of a sentence must preserve the concepts expressed in the sentence. Constraints involving a general notion of truth are pervasive in a plausible philosophical theory of concepts. It is, for example, a condition of adequacy on an individuating account of any concept that there exist what is called a ‘Determination Theory’ for that account ~ that is, a specification of how the account contributes to fixing the semantic value of that concept. The notion of a concept’s semantic value is the notion of something that makes a certain contribution to the truth conditions of thoughts in which the concept occurs. But this is to presuppose, rather than to elucidate, a general notion of truth.

It is also plausible that there are general constraints on the form of such Determination Theories, constraints which involve truth and which are not derivable from the minimalist’s conception. Suppose that concepts are individuated by their possession conditions. A possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker’s perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject’s environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued, from intuitions about particular examples, that even though a thinker’s non-environmental properties and relations remain constant, the conceptual content of his mental state can vary if the thinker’s social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker’s social relations, in particular his linguistic relations.

An alternative approach addresses the question by starting from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept ‘and’ is individuated by this condition: it is the unique concept ‘C’ such that, to possess it, a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premises ‘A’ and ‘B’, ‘ACB’ can be inferred; and from any premise ‘ACB’, each of ‘A’ and ‘B’ can be inferred. A relatively observational concept such as ‘round’ can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept that are not based on perception to those that are. A statement that individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
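The possession condition for ‘and’ can be modelled, very roughly, as a pair of primitive transitions. The following is an illustrative sketch only, with invented function names; it models ‘finding the forms compelling’ simply as being willing to apply them directly, not on the basis of further reasoning:

```python
# Toy sketch (not the text's own formalism): the possession condition for
# the logical concept 'and' as two primitive inference transitions.

def conjoin(a, b):
    """Introduction: from premises A and B, infer A-and-B."""
    return ("AND", a, b)

def eliminate(c):
    """Elimination: from A-and-B, infer each conjunct."""
    tag, a, b = c
    assert tag == "AND"
    return a, b

# A thinker who possesses the concept applies these transitions directly,
# without basing them on any further inference or information:
thought = conjoin("A", "B")
assert eliminate(thought) == ("A", "B")
```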

A possession condition for a particular concept may actually make use of that concept. The possession condition for ‘and’ does not. We can also expect to use relatively observational concepts in specifying the kinds of experiences that have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition: otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker’s mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: it is not possible to master any one of the members of the family without mastering the others. Two families that plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of the numerical quantifiers, ‘there are 0 so-and-sos’, ‘there is 1 so-and-so’, . . . and the family consisting of the concepts ‘belief’ and ‘desire’. Such families have come to be known as ‘local holisms’. A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such a condition involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated: the possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.


Once again, some general principles involving truth can, as Horwich has emphasized, be derived from the equivalence schemata using minimal logical apparatus. Consider, for instance, the principle that ‘Paris is beautiful and London is beautiful’ is true if and only if ‘Paris is beautiful’ is true and ‘London is beautiful’ is true. But no logical manipulations of the equivalence schemata will allow the derivation of the general constraints governing possession conditions, truth and the assignment of semantic values. Those constraints can, of course, be regarded as a further elaboration of the idea that truth is one of the aims of judgement.

Consider now the other question: what is it for a person’s language to be correctly describable by a semantic theory containing a particular axiom, such as the above axiom A6 for conjunction? This question may be addressed at two depths of generality. At the shallower level, the question may take for granted the person’s possession of the concept of conjunction, and be concerned with what has to be true for the axiom to describe his language correctly. At a deeper level, an answer should not sidestep the issue of what it is to possess the concept. The answers to both questions are of great interest.

When a person means conjunction by ‘and’, he is not necessarily capable of formulating the axiom A6 explicitly. Even if he can formulate it, his ability to formulate it is not the causal basis of his capacity to hear sentences containing the word ‘and’ as meaning something involving conjunction. Nor is it the causal basis of his capacity to mean something involving conjunction by sentences he utters containing the word ‘and’. Is it then right to regard a truth theory as part of an unconscious psychological computation, and to regard understanding a sentence as involving a particular way of deriving a theorem from a truth theory at some level of unconscious processing? One problem with this is that it is quite implausible that everyone who speaks the same language has to use the same algorithms for computing the meaning of a sentence. In the past thirteen years, particularly in the work of Davies and Evans, a conception has evolved according to which an axiom like A6 is true of a person’s language only if there is a common component in the explanation of his understanding of each sentence containing the word ‘and’, a common component that explains why each such sentence is understood as meaning something involving conjunction. This conception can also be elaborated in computational terms: for the axiom A6 to be true of a person’s language is for the unconscious mechanisms which produce understanding to draw on the information that a sentence of the form ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true. Many different algorithms may equally draw on this information. The psychological reality of a semantic theory thus involves, in Marr’s (1982) classification, something intermediate between his level one, the function computed, and his level two, the algorithm by which it is computed. This conception of the psychological reality of a semantic theory can also be applied to syntactic and phonological theories.
Theories in semantics, syntax and phonology are not themselves required to specify the particular algorithms that the language user employs; the identification of the particular computational methods employed is a task for psychology. But semantic, syntactic and phonological theories are answerable to psychological data, and are potentially refutable by them ~ for these linguistic theories do make commitments about the information drawn upon by mechanisms in the language user.
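The point that many different algorithms can draw on the same information can be illustrated with a toy example (the function names and the simplified sentence format are invented for illustration, not anything in the text): two different evaluation procedures that both draw on the information that a sentence ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true:

```python
# Toy sketch: two algorithms drawing on the same semantic information
# about 'and' ~ the information, not the algorithm, is what an axiom
# like A6 describes.

def truth_value(sentence, facts):
    """Evaluates the right conjunct first."""
    if " and " in sentence:
        a, b = sentence.split(" and ", 1)
        return truth_value(b, facts) and truth_value(a, facts)
    return facts[sentence]

def truth_value_shortcircuit(sentence, facts):
    """Evaluates left to right, stopping early on a false conjunct:
    a different algorithm drawing on the very same information."""
    if " and " in sentence:
        a, b = sentence.split(" and ", 1)
        if not truth_value_shortcircuit(a, facts):
            return False
        return truth_value_shortcircuit(b, facts)
    return facts[sentence]

facts = {"Paris is beautiful": True, "London is beautiful": True}
s = "Paris is beautiful and London is beautiful"
assert truth_value(s, facts) == truth_value_shortcircuit(s, facts) == True
```

Both procedures compute the same function and draw on the same information about ‘and’; in Marr’s terms they differ only at level two.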

This answer to the question of what it is for an axiom to be true of a person’s language clearly takes for granted the person’s possession of the concept expressed by the word treated by the axiom. In the example of the axiom A6, the information drawn upon is that sentences of the form ‘A and B’ are true if and only if ‘A’ is true and ‘B’ is true. This informational content employs, as it has to if it is to be adequate, the concept of conjunction used in stating the meanings of sentences containing ‘and’. The computational answer we have returned therefore needs further elaboration if we are not to take for granted possession of the concepts expressed in the language. It is at this point that the theory of linguistic understanding has to draw upon a theory of the conditions for possessing a given concept. It is plausible that the concept of conjunction is individuated by the following condition for a thinker to possess it:

The concept ‘and’ is that concept ‘C’ to possess which a
thinker must meet the following condition: he finds inferences
of the following forms compelling, does not find them
compelling as a result of any reasoning, and finds them
compelling because they are of these forms: from ‘A’ and ‘B’,
infer ‘ACB’; from ‘ACB’, infer ‘A’; from ‘ACB’, infer ‘B’.

When axiom A6 is true of a person’s language, there is a global dovetailing between this possession condition for the concept of conjunction and certain of his practices involving the word ‘and’. For the case of conjunction, the dovetailing involves at least this:

If the possession condition for conjunction entails that a
thinker who possesses the concept of conjunction must be
willing to make certain transitions involving the thought p&q,
and the thinker’s sentence ‘A’ means that p and his
sentence ‘B’ means that q, then: the thinker must be willing
to make the corresponding linguistic transitions involving the
sentence ‘A and B’.

This is only part of what is involved in the required dovetailing. Given what we have already said about the uniform explanation of the understanding of the various occurrences of a given word, we should also add that there is a uniform (unconscious, computational) explanation of the language user’s willingness to make the corresponding transitions involving the sentence ‘A and B’.

This dovetailing account returns an answer to the deeper question, because neither the possession condition for conjunction, nor the dovetailing condition that builds upon that possession condition, takes for granted the thinker’s possession of the concept expressed by ‘and’. The dovetailing account for conjunction is an instance of a general schema, which can be applied to any concept. The case of conjunction is, of course, exceptionally simple in several respects. Possession conditions for other concepts will speak not just of inferential transitions, but of certain conditions in which beliefs involving the concept in question are accepted or rejected, and the corresponding dovetailing conditions will inherit these features. The dovetailing account has also to be underpinned by a general rationale linking contributions to truth conditions with the particular possession conditions proposed for concepts. It is part of the task of the theory of concepts to supply this, in developing Determination Theories for particular concepts.

In some cases, a relatively clear account is possible of how a concept can feature in thoughts that may be true though unverifiable. The possession condition for the quantificational concept ‘all natural numbers’ can in outline run thus: this quantifier is that concept Cx . . . x . . . to possess which the thinker has to find any inference of the form

CxFx

Fn

compelling, where ‘n’ is a concept of a natural number, and does not have to find anything else essentially containing Cx . . . x . . . compelling. The straightforward Determination Theory for this possession condition is one on which a thought CxFx is true if and only if all natural numbers are ‘F’. That all natural numbers are ‘F’ is a condition that can hold without our being able to establish that it holds. So an axiom of a truth theory that dovetails with this possession condition for universal quantification over the natural numbers will be a component of a realistic, non-verificationist theory of truth conditions.
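The verification-transcendence of the universal thought can be illustrated with a small sketch (the predicate and function names are invented for illustration): instance-by-instance checking is all we can actually carry out, yet no bounded check settles the unbounded claim:

```python
# Toy sketch: the truth condition of 'all natural numbers are F' can hold
# without our being able to establish that it holds ~ we can only check
# instances Fn one by one.

def F(n):
    # A sample, hypothetical predicate of natural numbers.
    return n + 1 > n

def verified_up_to(limit):
    """Instance-by-instance checking, the inferential transitions
    mentioned in the possession condition."""
    return all(F(n) for n in range(limit))

# Every bounded check can succeed while leaving the universal thought's
# truth condition ~ that F holds of ALL naturals ~ unestablished:
assert verified_up_to(10_000) is True
```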

Finally, this response to the deeper questions allows us to answer two challenges to the conception of meaning as truth-conditions. First, there was the question left hanging earlier, of how the theorist of truth-conditions is to say what makes one axiom of a semantic theory correct rather than another, when the two axioms assign the same semantic values but do so by means of different concepts. Since the different concepts will have different possession conditions, the dovetailing accounts, at the deeper level, of what it is for each axiom to be correct for a person’s language will be different accounts. Second, there is the challenge, repeatedly made by minimalist theorists of truth, that the theorist of meaning as truth-conditions should give some non-circular account of what it is to understand a sentence, or to be capable of understanding all sentences containing a given constituent. For each expression in a sentence, the corresponding dovetailing account, together with the possession condition, supplies a non-circular account of what it is to understand that expression. The combined accounts for each of the expressions that comprise a given sentence together constitute a non-circular account of what it is to understand the complete sentence. Taken together, they allow the theorist of meaning as truth-conditions fully to meet the challenge.

A widely discussed idea is that for a subject to be in a certain set of content-involving states is for the attribution of those states to make the subject rationally intelligible. Perceptions make it rational for a person to form corresponding beliefs. Beliefs make it rational to draw certain inferences. Belief and desire make rational the formation of particular intentions, and the performance of the appropriate actions. People are frequently irrational, of course, but a governing ideal of this approach is that for any family of contents there is some minimal core of rational transitions to or from states involving them, a core that a person must respect if his states are to be attributed with those contents at all. We contrast what we want to do with what we must do ~ whether for reasons of morality or duty, or even for reasons of practical necessity (to get what we wanted in the first place). Accordingly, our desires have seemed to be the principle of action that most fully expresses our individual natures and wills, and that for which we are personally most responsible. But desire has also seemed to be a principle of action contrary to and at war with our better natures as rational agents. For it is principally from our own differing perspectives upon what would be good that each of us wants what he does, each point of view being defined by one’s own interests and pleasures. In this, the representations of desire are like those of sensory perception, similarly shaped by the perspective of the perceiver and the idiosyncrasies of the perceptual apparatus; the dialectic about desire and its objects recapitulates that of perception and sensible qualities. The strength of a desire, for instance, varies with the state of the subject, more or less independently of the character, and the actual utility, of the object wanted.
Such facts cast doubt on the ‘objectivity’ of desire, and on the existence of correlative properties of ‘goodness’ inherent in the objects of our desires and independent of them. Perhaps, as the Dutch Jewish rationalist Benedictus de Spinoza (1632-77) put it, it is not that we want what we think good, but that we think good what we happen to want ~ the ‘good’ in what we want being a mere shadow cast by the desire for it. (There is a parallel Protagorean view of belief, similarly sceptical of truth.) The serious defence of such a view, however, would require a systematic reduction of apparent facts about goodness to facts about desire, and an analysis of desire that in turn makes no reference to goodness. While this has yet to be provided, moral psychologists have sought to vindicate an idea of objective goodness ~ for example, as what would be good from all points of view, or none ~ or, in the manner of the German philosopher Immanuel Kant, to establish another principle of action (the will, or practical reason), conceived as an autonomous source of action independent of desire or its objects. This tradition has tended to minimize the role of desire in the genesis of action.

Ascribing states with content to an actual person has to proceed simultaneously with attributions of a wide range of non-rational states and capacities. In general, we cannot understand a person’s reasons for acting as he does without knowing the array of emotions and sensations to which he is subject: what he remembers and what he forgets, and how he reasons beyond the confines of minimal rationality. Even the content-involving perceptual states, which play a fundamental role in individuating content, cannot be understood purely in terms relating to minimal rationality. A perception of the world as being a certain way is not (and could not be) under a subject’s rational control. Though it is true and important that perceptions give reasons for forming beliefs, the beliefs for which they fundamentally provide reasons ~ observational beliefs about the environment ~ have contents that can only be elucidated by referring back to perceptual experience. In this respect (as in others), perceptual states differ from beliefs and desires that are individuated by mentioning what they provide reasons for judging or doing: frequently these latter judgements and actions can be individuated without reference back to the states that provide reasons for them.

What is the significance for theories of content of the fact that it is almost certainly adaptive for members of a species to have a system of states with representational contents that are capable of influencing their actions appropriately? According to teleological theories of content, a constitutive account of content ~ one that says what it is for a state to have a given content ~ must make use of the notions of natural function and teleology. The intuitive idea is that for a belief state to have a given content ‘p’ is for the belief-forming mechanisms that produced it to have the function (perhaps derivatively) of producing that state only when it is the case that ‘p’. One issue this approach must tackle is whether it is really capable of associating with states the classical, realistic, verification-transcendent contents that, pre-theoretically, we attribute to them. It is not clear that a content’s holding unknowably can influence the replication of belief-forming mechanisms. But even if content itself proves to resist elucidation in terms of natural function and selection, it is still a very attractive view that selection must be mentioned in an account of what associates something ~ such as a sentence ~ with a particular content, even though that content itself may be individuated by other means.

Content is normally specified by ‘that . . .’ clauses, and it is natural to suppose that a content has the same kind of sequential and hierarchical structure as the sentence that specifies it. This supposition would be widely accepted for conceptual content. It is, however, a substantive thesis that all content is conceptual. One way of treating one sort of ‘perceptual content’ is to regard the content as determined by a spatial type, the type under which the region of space around the perceiver must fall if the experience with that content is to represent the environment correctly. The type involves a specification of surfaces and features in the environment, and of their distances and directions from the perceiver’s body as origin. Such contents lack any sentence-like structure at all. Supporters of the view that all content is conceptual will argue that the legitimacy of using these spatial types in giving the content of experience does not undermine their thesis: they will say that the spatial type is just a way of capturing what can equally be captured by conceptual components such as ‘that distance’ or ‘that direction’, where these demonstratives are made available by the perception in question. Friends of non-conceptual content will respond that these demonstratives themselves cannot be elucidated without mentioning the spatial types, which lack sentence-like structure.

Content-involving states are individuated in part by reference to the agent’s relations to things and properties in his environment. Wanting to see a particular movie and believing that the building over there is a cinema showing it make rational the action of walking in the direction of that building.

However, in the general philosophy of mind, desire has recently received new attention from those who would understand mental states in terms of their causal or functional role in the determination of rational behaviour, and in particular from philosophers trying to understand the semantic content or intentional character of mental states in those terms ~ the programme of ‘functionalism’. In its weakest form, functionalism holds that mental states causally mediate between a subject’s sensory inputs and that subject’s ensuing behaviour. Functionalism proper is the stronger doctrine that what makes a mental state the type of state it is ~ a pain, a smell of violets, a belief that the koala (an arboreal Australian marsupial, Phascolarctos cinereus) is dangerous ~ is the functional relation it bears to the subject’s perceptual stimuli, behavioural responses, and other mental states.


Conceptual (sometimes computational, cognitive, causal or functional) role semantics (CRS) entered philosophy through the philosophy of language, not the philosophy of mind. The core idea behind conceptual role semantics in the philosophy of language is that the way linguistic expressions are related to one another determines what the expressions in the language mean. There is a considerable affinity between conceptual role semantics and the structuralist semiotics that has been influential in linguistics. According to the latter, languages are to be viewed as systems of differences: the basic idea is that the semantic force (or ‘value’) of an utterance is determined by its position in the space of possibilities that one’s language offers. Conceptual role semantics also has affinities with what artificial intelligence researchers call ‘procedural semantics’; the essential idea here is that providing a compiler for a language is equivalent to specifying a semantic theory for it: the procedures that a computer is instructed to execute by a program serve as the meanings of its expressions.

Nevertheless, according to conceptual role semantics, the meaning of a thought is determined by the thought’s role in a system of states: to specify a thought is not to specify its truth or referential conditions, but to specify its role. Walter’s and twin-Walter’s thoughts, though different in truth and referential conditions, share the same conceptual role, and it is by virtue of this commonality that the two behave type-identically. If Walter and twin-Walter each have a belief that he would express by ‘water quenches thirst’, conceptual role semantics can explain why each behaves in the same way towards the liquid falling into his can, though that liquid is H2O in the one case and XYZ in the other. Thus conceptual role semantics would seem attractive ~ though not to Jerry Fodor, who rejects conceptual role semantics for both external and internal problems.

Nonetheless, if, as Fodor contends, thoughts have recombinable linguistic ingredients, then questions arise for conceptual role semantic theorists about the roles of expressions in the language of thought as well as in the public language we speak and write. Accordingly, conceptual role semantic theorists divide not only over their aims, but also over the domain in which conceptual roles are at home. Some hold that public meaning is somehow derivative from (or inherited from) an internal mental language (mentalese), and that mentalese expressions have autonomous meaning. So, for example, understanding the inscriptions on this leaf requires translation, or at least transliteration, into the language of thought; representations in the brain require no such translation or transliteration. Others hold that the language of thought is just public language internalized, and that it is public-language expressions that have autonomous (or primary) meaning in virtue of their conceptual role.

After one decides upon the aims and the proper province of conceptual role semantics, the relations among expressions ~ public or mental ~ lay the groundwork for their conceptual roles. Because most conceptual role semantics theorists leave the notion of conceptual role as a blank cheque, the options are open-ended. The conceptual role of a (mental) expression might be its causal associations: any disposition to token (for example, utter or think) the expression ‘ℯ’ when tokening another ‘ℯ′’, or an ordered n-tuple <ℯ′, ℯ′′, . . .>, or vice versa, can count as the conceptual role of ‘ℯ’. A more common option is to characterize conceptual role not causally but inferentially (these need not be incompatible, depending upon one’s attitude towards the naturalization of inference): the conceptual role of an expression ‘ℯ’ in ‘L’ might consist of the set of actual and potential inferences to ‘ℯ’, or of the set of inferences from ‘ℯ’, or, more commonly, of the ordered pair consisting of these two sets. But if sentences have non-derived inferential roles, what would it mean to talk of the inferential role of words? Some have found it natural to think of the inferential role of a word as represented by the set of inferential roles of the sentences in which the word appears.
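The ordered-pair proposal can be given a concrete, if crude, rendering (the names and the sentence encoding are invented for illustration, not anything proposed in the text):

```python
# Toy sketch: the inferential role of an expression as the ordered pair
# <inferences-to-it, inferences-from-it>, each inference modelled as
# (premises, conclusion).

def inferential_role(expression, inferences):
    """Ordered pair: inferences to the expression, inferences from it."""
    to_e = frozenset(i for i in inferences if i[1] == expression)
    from_e = frozenset(i for i in inferences if expression in i[0])
    return (to_e, from_e)

inferences = [
    (("A", "B"), "A and B"),   # conjunction introduction
    (("A and B",), "A"),       # conjunction elimination
    (("A and B",), "B"),
]

role = inferential_role("A and B", inferences)
assert (("A", "B"), "A and B") in role[0]   # one inference to it
assert len(role[1]) == 2                    # two inferences from it
```

A word’s role could then be represented, as the text suggests, by collecting the roles of the sentences in which the word appears.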

The expectation that one sort of thing could serve all these tasks went hand in hand with what has become known as the ‘Classical View’ of concepts, according to which concepts have an ‘analysis’ consisting of conditions that are individually necessary and jointly sufficient for their satisfaction, and which are known to any competent user of them. The standard example is the especially simple one of [bachelor], which seems to be identical to [eligible unmarried male]. A more interesting example: [knowledge] was traditionally thought to be analysed as [justified true belief].

This Classical View seems to offer an illuminating answer to a certain form of metaphysical question ~ in virtue of what is something the kind of thing it is, i.e., in virtue of what is a bachelor a bachelor? ~ and it does so in a way that supports counterfactuals: It tells us what would satisfy the concept in situations other than the actual ones (although all actual bachelors might turn out to be freckled, it is possible that there might be unfreckled ones, since the analysis does not exclude that). The view also seems to answer an epistemological question of how people seem to know a priori (or independently of experience) about the nature of many things, e.g., that bachelors are unmarried: It is constitutive of the competency (or possession) conditions of a concept that users know its analysis, at least on reflection.

The Classical View, however, has always had to face the difficulty of primitive concepts: It is all very well to claim that competence consists in some sort of mastery of a definition, but what about the primitive concepts in which a process of definition must ultimately end? Here the British Empiricism of the seventeenth century began to offer a solution: all the primitives were sensory. Indeed, empiricists expanded the Classical View to include the claim, now often taken uncritically for granted in discussions of that view, that all concepts are ‘derived from experience’. ‘Every idea is derived from a corresponding impression’, in the work of John Locke (1632-1704), George Berkeley (1685-1753) and David Hume (1711-76), was often thought to mean that concepts were somehow composed of introspectible mental items ~ ‘images’, ‘impressions’ ~ that were ultimately decomposable into basic sensory parts. Thus, Hume analysed the concept of [material object] as involving certain regularities in our sensory experience, and [cause] as involving spatio-temporal contiguity and constant conjunction.

The Irish ‘idealist’ George Berkeley noticed a problem with this approach that every generation has had to rediscover: If a concept is a sensory impression, like an image, then how does one distinguish the general concept [triangle] from the more particular one ~ say, [isosceles triangle] ~ that would serve in imagining the general one? More recently, Wittgenstein (1953) called attention to the multiple ambiguity of images. In any case, images seem quite hopeless for capturing the concepts associated with logical terms (what is the image for negation or possibility?). Whatever the role of such representations, full conceptual competence must involve something more.

Conceivably, in addition to images, impressions and other sensory items, a full account of concepts needs to consider logical structure. This is precisely what the logical positivists did, focussing on logically structured sentences instead of sensations and images, and transforming the empiricist claim into the famous ‘Verifiability Theory of Meaning’: the meaning of a sentence is the means by which it is confirmed or refuted, ultimately by sensory experience; the meaning or concept associated with a predicate is the means by which people confirm or refute whether something satisfies it.

This once-popular position has come under much attack in philosophy in the last fifty years. In the first place, few, if any, successful ‘reductions’ of ordinary concepts (like [material object] or [cause]) to purely sensory concepts have ever been achieved. Our concepts of material object and causation seem to go far beyond mere sensory experience, just as our concepts in a highly theoretical science seem to go far beyond the often meagre evidence we can adduce for them.

The American philosophers of mind Jerry Fodor and Ernest LePore (1992) have recently argued that the arguments for meaning holism are less than compelling, and that there are important theoretical reasons for holding out for an entirely atomistic account of concepts. On this view, concepts have no ‘analyses’ whatsoever: They are simply ways in which people are directly related to individual properties in the world, relations which might obtain for someone for one concept but not for any other: In principle, someone might have the concept [bachelor] and no other concepts at all, much less any ‘analysis’ of it. Such a view goes hand in hand with Fodor’s rejection not only of verificationist but of any empiricist account of concept learning and construction: Given the failure of empiricist constructions, Fodor (1975, 1979) notoriously argued that concepts are not constructed or ‘derived’ from experience at all, but are, nearly enough, all innate.

The debate about whether there are innate ideas is as old as philosophy itself; it takes its start from Plato (429-347 BC), in whose dialogue the ‘Meno’ the doctrine of ‘anamnesis’ is offered as an answer to the problem of learning: if we do not already understand something, then we cannot set about learning it, since we do not know enough to know how to begin. Teachers also come across the problem in the shape of students who cannot understand why their work deserves lower marks than that of others. The worry is echoed in philosophies of language that see the infant as a ‘little linguist’, having to translate its environmental surroundings in order to get a grasp on the upcoming native language. The language of thought hypothesis, especially associated with Fodor, holds that mental processing occurs in a language different from one’s ordinary native language, but underlying and explaining our competence with it. The idea is a development of the Chomskyan notion of an innate universal grammar, and a way of drawing the analogy between the workings of the brain or mind and those of the standard computer, since computer programs are linguistically complex sets of instructions whose execution explains the surface behaviour of computers. As an explanation of ordinary language-learning the hypothesis has not found universal favour: it apparently explains ordinary representational powers only by invoking innate representations of the same sort, and it invites the image of the learning infant translating the heard language into an innate language whose own powers are a mysterious biological given.

René Descartes (1596-1650) and Gottfried Wilhelm Leibniz (1646-1716) defended the view that the mind contains innate ideas; Berkeley, Hume and Locke attacked it. In fact, as we now conceive the great debate between European Rationalism and British Empiricism in the seventeenth and eighteenth centuries, the doctrine of innate ideas is a central point of disagreement: Rationalists typically claimed that knowledge is impossible without a significant stock of general innate concepts or judgements; Empiricists argued that all ideas are acquired from experience. This debate is replayed with more empirical content and with considerably greater conceptual complexity in contemporary cognitive science, most particularly within the domains of psycholinguistic theory and cognitive developmental theory.

Some philosophers may themselves be cognitive scientists; others concern themselves with the philosophy of cognitive psychology and cognitive science. Since the inauguration of cognitive science these disciplines have attracted much attention from certain philosophers of mind. The attitudes of these philosophers, and their reception by psychologists, vary considerably. Many cognitive psychologists have little interest in philosophical issues. Cognitive scientists are, in general, more receptive.

Fodor, because of his early involvement in sentence-processing research, is taken seriously by many psycholinguists. His modularity thesis is directly relevant to questions about the interplay of different types of knowledge in language understanding. His innateness hypothesis, however, is generally regarded as unhelpful, and his prescription that cognitive psychology is primarily about propositional attitudes is widely ignored. The recent work of the American philosopher of mind Daniel Clement Dennett (1942- ) on consciousness treats a topic that is highly controversial, but his detailed discussion of psychological research findings has enhanced his credibility among psychologists. In general, however, psychologists are happy to get on with their work without philosophers telling them about their ‘mistakes’.

Connectionism has provoked a somewhat different reaction among philosophers. Some ~ mainly those who, for other reasons, were disenchanted with traditional artificial intelligence research ~ have welcomed this new approach to understanding brain and behaviour. They have used the successes, apparent or otherwise, of connectionist research to bolster their arguments for a particular approach to explaining behaviour. Whether this neuro-philosophy will eventually be widely accepted is a different question. One of its main dangers is succumbing to a form of reductionism that most cognitive scientists, and many philosophers of mind, find incoherent.

One must be careful not to caricature the debate. It is too easy to see it as one pitting innatists, who argue that all concepts or all linguistic knowledge are innate (certain remarks of Fodor and of Chomsky lend themselves to this interpretation), against empiricists, who argue that there is no innate cognitive structure to which one need appeal in explaining the acquisition of language or the facts of cognitive development (an extreme reading of the American philosopher Hilary Putnam, 1926- ). But this would be a silly and sterile debate indeed. Obviously, something is innate. Brains are innate. And the structure of the brain must constrain the nature of cognitive and linguistic development to some degree. Equally obviously, something is learned, and is learned as opposed to merely grown, as limbs or hair grow. For not all of the world’s citizens end up speaking English, or knowing the theory of relativity. The interesting questions then all concern exactly what is innate, to what degree it counts as knowledge, and what is learned and to what degree its content and structure are determined by innately specified cognitive structure. And that is a great deal to debate.

The arena in which the innateness debate has been prosecuted with the greatest vigour is that of language acquisition, and it is appropriate to begin there. But the debate extends to the domain of general knowledge and reasoning abilities through the investigation of the development of object constancy ~ the disposition to conceive of physical objects as persisting when unobserved, and to reason about their properties and locations when they are not perceptible.

The most prominent exponent of the innateness hypothesis in the domain of language acquisition is Chomsky (1966, 1975). His research, and that of his colleagues and students, is responsible for developing the influential and powerful framework of transformational grammar that dominates current linguistic and psycholinguistic theory. This body of research has amply demonstrated that the grammar of any human language is a highly systematic, abstract structure, and that there are certain basic structural features shared by the grammars of all human languages, collectively called ‘universal grammar’. Variations among the specific grammars of the world’s languages can be seen as reflecting different settings of a small number of parameters that can, within the constraints of universal grammar, take several different values. All of the principal arguments for the innateness hypothesis in linguistic theory rest on this central insight about grammars. The principal arguments are these: (1) the argument from the existence of linguistic universals; (2) the argument from patterns of grammatical errors in early language learners; (3) the poverty of the stimulus argument; (4) the argument from the ease of first-language learning; (5) the argument from the relative independence of language learning and general intelligence; and (6) the argument from the modularity of linguistic processing.

Innatists argue (Chomsky 1966, 1975) that the very presence of linguistic universals argues for the innateness of linguistic knowledge, but more important and more compelling is the fact that these universals are, from the point of view of communicative efficiency or any plausible simplicity metric, adventitious. There are many conceivable grammars, and those determined by universal grammar are not ipso facto the most efficient or the simplest. Nonetheless, all human languages satisfy the constraints of universal grammar. Since neither the communicative environment nor the communicative tasks can explain this phenomenon, it is reasonable to suppose that it is explained by the structure of the mind ~ and therefore, by the fact that the principles of universal grammar lie innate in the mind and constrain the languages that a human can acquire.

Hilary Putnam argues, by appeal to common-sense considerations, that such universals could equally be the residue of a common ancestral language inherited by its descendants. Or it might turn out that, despite the lack of direct evidence at present, the features of universal grammar do in fact serve either the goals of communicative efficacy or of simplicity according to a psychologically relevant metric. Finally, an empiricist points out, the very existence of universal grammar might be a trivial logical artefact: any finite set of structures will have some features in common, and since there are only finitely many languages, it follows trivially that there are features they all share. Moreover, it is argued, many features of universal grammar are interdependent, so that the set of fundamental principles shared by the world’s languages may in fact be rather small. Hence, even if these are innately determined, the amount of innate knowledge thereby required may be quite small as compared with the total corpus of general linguistic knowledge acquired by the first-language learner.

These arguments gain plausibility, innatists contend, when one considers the fact that the errors language learners make in acquiring their first language seem to be driven far more by abstract features of grammar than by any available input data. So, despite receiving correct examples of irregular plurals or past-tense forms, and despite having previously produced the correct irregular forms for those words, children will often incorrectly regularize irregular verbs once they acquire mastery of the rule governing regulars in their language. And in general, not only the correct inductions of linguistic rules by young language learners but, more importantly ~ given the absence of confirmatory data and the presence of refuting data ~ children’s erroneous inductions are always consistent with universal grammar, often simply representing the incorrect setting of a parameter in the grammar. More generally, innatists argue (Chomsky 1966, 1975; Crain 1991), all grammatical rules that have ever been observed satisfy the structure-dependence constraint. That is, many linguists and psycholinguists argue that all known grammatical rules of all of the world’s languages, including the fragmentary languages of young children, must be stated as rules governing hierarchical sentence structure, and not as rules governing, say, sequences of words. Many of these constraints, such as the constituent-command constraint governing anaphora, are highly abstract indeed, and appear to be respected by even very young children. Such constraints may, innatists argue, be necessary conditions of learning natural language without specific instruction, modelling and correction ~ the conditions in which all first-language learners acquire their native language.

An important empiricist reply relies upon observations deriving from recent studies of ‘connectionist’ models of first-language acquisition. Connectionist systems not previously trained to represent any subset of universal grammar, when trained on corpora that include a large set of regular forms and a few irregulars, also tend to over-regularize, exhibiting the same U-shaped learning curve seen in human language learners. Such systems also acquire ‘accidental’ rules on which they are not explicitly trained but which are consistent with those on which they are trained, suggesting that as children acquire portions of their grammar, they may accidentally ‘learn’ consistent rules, which may be correct in other human languages, but which must then be ‘unlearned’ in their home language. On the other hand, such ‘empiricist’ language acquisition systems have yet to demonstrate their ability to induce a sufficiently wide range of the rules hypothesized to be comprised by universal grammar to constitute a definitive empirical argument for the possibility of natural language acquisition in the absence of a powerful set of innate constraints.

The poverty of the stimulus argument has been of enormous influence in innateness debates, though its soundness is controversial. Chomsky notes that (1) the examples of the target language to which the language learner is exposed are always jointly compatible with an infinite number of alternative grammars, and so vastly under-determine the grammar of the language; (2) the corpus always contains many examples of ungrammatical sentences, which should in fact serve as falsifiers of any empirically induced correct grammar of the language; and (3) there is, in general, no explicit reinforcement of correct utterances or correction of incorrect utterances, either by the learner or by those in the immediate training environment. Therefore, he argues, since it is impossible to explain the learning of the correct grammar ~ a task accomplished by all normal children within a very few years ~ on the basis of any available data or known learning algorithms, it must be that the grammar is innately specified, and is merely ‘triggered’ by relevant environmental cues.

The American linguist, philosopher and political activist Noam Avram Chomsky (1929- ) believes that the speed with which children master their native language cannot be explained by learning theory, but requires acknowledging an innate disposition of the mind: an unlearned, innate and universal grammar, supplying the kinds of rule that the child will a priori understand to be embodied in the examples of speech with which it is confronted. In computational terms, unless the child came bundled with the right kind of software, it could not catch on to the grammar of its language as it in fact does. Opponents of the linguistic innateness hypothesis, however, dispute this reasoning.

It is well known, from arguments due to the Scottish philosopher David Hume (1978), the Austrian philosopher Ludwig Wittgenstein (1953), the American philosopher Nelson Goodman (1972) and the American logician and philosopher Saul Aaron Kripke (1982), that in all cases of empirical abduction, and of training in the use of a word, the data underdetermine the theories. This moral is emphasized by the American philosopher Willard van Orman Quine (1954, 1960) as the principle of the underdetermination of theory by data. But we nonetheless do abduce adequate theories in science, and we do learn the meanings of words. And it would be bizarre to suggest that all correct scientific theories, or the facts of lexical semantics, are innate.

But, innatists reply, when the empiricist relies on the underdetermination of theory by data as a counter-example, a significant disanalogy with language acquisition is ignored: The abduction of scientific theories is a difficult, laborious process, taking a sophisticated theorist a great deal of time and deliberate effort. First-language acquisition, by contrast, is accomplished effortlessly and very quickly by a small child. The enormous relative ease with which such a complex and abstract domain is mastered by such a naïve ‘theorist’ is evidence for the innateness of the knowledge achieved.

Empiricists such as the American philosopher Hilary Putnam (1926- ) have rejoined that innatists under-estimate the amount of time that language learning actually takes, focussing only on the number of years from the apparent onset of acquisition to the achievement of relative mastery over the grammar. Instead of noting how short this interval is, they argue, one should count the total number of hours spent listening to and speaking language during that time. That number is in fact quite large, and is comparable to the number of hours of study and practice required for the acquisition of skills that are not argued to derive from innate structures, such as chess playing or musical composition. Hence, when these hours are taken into consideration, language learning looks more like one more case of human skill acquisition than like a special unfolding of innate knowledge.

Innatists, however, note that while the ease with which most such skills are acquired depends on general intelligence, language is learned with roughly equal speed, and to roughly the same level of mastery, across wide variations in general intelligence. In fact, even significantly retarded individuals, absent specific language deficits, acquire their native language on a time-scale and to a degree comparable to that of normally intelligent children. The language acquisition faculty hence appears to allow access to a sophisticated body of knowledge independent of the sophistication of the general knowledge of the language learner.

Empiricists rebut that this argument ignores the centrality of language in a wide range of human activities, and consequently the enormous attention paid to language acquisition by retarded youngsters and their parents or caretakers. They argue as well that innatists overstate the parity in linguistic competence between retarded children and children of normal intelligence.

Innatists point out that the ‘modularity’ of language processing is a powerful argument for the innateness of the language faculty. There is a large body of evidence, innatists argue, for the claim that the processes that subserve the acquisition, understanding and production of language are quite distinct from, and independent of, those that subserve general cognition and learning. That is to say, language learning and language processing mechanisms and the knowledge they embody are domain-specific: grammar and grammatical learning and utilization mechanisms are not used outside of language processing. They are informationally encapsulated: only linguistic information is relevant to language acquisition and processing. They are mandatory: language learning and language processing are automatic. Moreover, language is subserved by specific dedicated neural structures, damage to which predictably and systematically impairs linguistic functioning. All of this suggests a specific ‘mental organ’, to use Chomsky’s phrase, that has evolved in the human cognitive system specifically in order to make language possible. This specific structure or organ simultaneously constrains the range of possible human languages and guides the learning of a child’s target language, later making rapid on-line language processing possible. The principles represented in this organ constitute the innate linguistic knowledge of the human being. Additional evidence for the early operation of such an innate language acquisition module is derived from the many infant studies showing that infants selectively attend to soundstreams that are prosodically appropriate, that have pauses at clausal boundaries, and that contain linguistically permissible phonological sequences.

It is fair to ask where we get the powerful inner code whose representational elements need only systematic construction to express, for example, the thought that cyclotrons are bigger than black holes. But on this matter, language of thought theories have little to say. Consider what ‘concept’ learning could be (assuming it to be some kind of rational process and not due to mere physical maturation or a bump on the head). According to the language of thought theorist, it must consist in trying out combinations of existing representational elements, to see whether a given combination captures the sense (as evinced in its use) of some new concept. The consequence is that concept learning, conceived as the expansion of our representational resources, simply does not happen. What happens instead is that we work with a fixed, innate repertoire of elements whose combination and construction must express any content we can ever learn to understand.

Representationalism is the doctrine that the mind (or sometimes the brain) works on representations of the things, and features of things, that we perceive or think about. In the philosophy of perception the view is especially associated with the French Cartesian philosopher Nicolas Malebranche (1638-1715) and the English philosopher John Locke (1632-1704), who, holding that the mind is a container for ideas, held that of our real ideas some are adequate and some are inadequate ~ adequate ideas being those that perfectly represent the archetypes from which the mind supposes them taken, which it intends them to stand for, and to which it refers them. The problems in this account were mercilessly exposed by the French theologian and philosopher Antoine Arnauld (1612-94) and the French critic of Cartesianism Simon Foucher (1644-96), writing against Malebranche, and by the idealist George Berkeley, writing against Locke. The fundamental problem is that the mind is ‘supposing’ its ideas to represent something else, but it has no access to this something else except by forming another idea. The difficulty is to understand how the mind ever escapes from the world of representations, or acquires genuine content pointing beyond them. In more recent philosophy, the analogy between the mind and a computer has suggested that the mind or brain manipulates signs and symbols, thought of as like the instructions of a machine’s program, which are representations of aspects of the world. The point is sometimes put by saying that the mind, on this theory, becomes a syntactic engine rather than a semantic engine. Representation is also attacked, at least as a central concept for understanding the mind, by ‘pragmatists’, who emphasize instead the activities surrounding the use of language, rather than what they see as a mysterious link between mind and world.

Representations, along with mental states, especially beliefs and thoughts, are said to exhibit ‘intentionality’, in that they refer to, or stand for, something other than themselves. The nature of this special property, however, has seemed puzzling. Not only is intentionality often assumed to be limited to humans, and possibly a few other species, but the property itself appears to resist characterization in physicalist terms. The problem is most obvious in the case of ‘arbitrary’ signs, like words, where it is clear that there is no connection between the physical properties of a word and what it denotes; yet it remains even for iconic representation.

Early attempts tried to establish the link between sign and object via the mental states of the sign’s user. A symbol # stands for ✺ for ‘S’ if it triggers a ✺-idea in ‘S’. On one account, the reference of # is the ✺-idea itself. On the major rival account, the denotation of # is whatever the ✺-idea denotes. The first account is problematic in that it fails to explain the link between symbols and the world. The second is problematic in that it just shifts the puzzle inward: for example, if the word ‘table’ triggers a mental image or the mental word ‘TABLE’, what gives this mental picture or word any reference at all, let alone the denotation normally associated with the word ‘table’?

An alternative to these mentalistic theories has been to adopt a behaviouristic analysis, on which the claim that # denotes ✺ for ‘S’ is explained as either (1) ‘S’ is disposed to behave toward # as toward ✺, or (2) ‘S’ is disposed to behave in ways appropriate to ✺ when presented with #. Both versions prove faulty in that the very notions of behaviour associated with, or appropriate to, ✺ are obscure. In addition, there seem to be no reasonable correlations between behaviour toward signs and behaviour toward their objects that are capable of accounting for the referential relation.

A currently influential attempt to ‘naturalize’ the representation relation takes its cue from indices. The crucial link between sign and object is established by some causal connection between ✺ and #, although it is allowed that such a causal relation is not sufficient for full-blown intentional representation. An increase in temperature causes the mercury level in a thermometer to rise, but the mercury level is not a representation for the thermometer. In order for # to represent ✺ to ‘S’, the causal connection must play an appropriate role in the functional economy of S’s activity. The notion of ‘function’, in turn, is to be explained along biological or other lines so as to remain within ‘naturalistic’ constraints. This approach runs into problems in specifying a suitable notion of ‘function’ and in accounting for the possibility of misrepresentation. Also, it is not obvious how to extend the analysis to encompass the semantical force of more abstract or theoretical symbols. These difficulties are further compounded when one takes into account the social factors that seem to play a role in determining the denotative properties of our symbols.

The problems faced in providing a reductive naturalistic analysis of representation have led many to doubt that this task is achievable or necessary. Although a story can be told about how some words or signs were learned via association or other causal connections with their referents, there is no reason to believe that the ‘stands-for’ relation, or semantic notions in general, can be reduced to or eliminated in favour of non-semantic terms.

Although linguistic and pictorial representations are undoubtedly the most prominent symbolic forms we employ, the range of representational systems humans understand and regularly use is surprisingly large. Sculptures, maps, diagrams, graphs, gestures, music notation, traffic signs, gauges, scale models and tailors’ swatches are but a few of the representational systems that play a role in communication, thought and the guidance of behaviour. Indeed, the importance and prevalence of our symbolic activities has been taken as a hallmark of the human.

What is it that distinguishes items that serve as representations from other objects or events? And what distinguishes the various kinds of symbols from each other? As for the first question, there has been general agreement that the basic notion of a representation involves one thing’s ‘standing for’, ‘being about’, ‘referring to’ or ‘denoting’ something else. The major debates have been over the nature of this connection between a representation and that which it represents. As for the second question, perhaps the most famous and extensive attempt to organize and differentiate among alternative forms of representation is found in the works of the American philosopher of science Charles Sanders Peirce (1839-1914), who graduated from Harvard in 1859 and, apart from lecturing at Johns Hopkins University from 1879 to 1884, did almost no teaching. Peirce’s theory of signs is complex, involving a number of concepts and distinctions that are no longer paid much heed. The aspect of his theory that remains influential and is widely cited is his division of signs into icons, indices and symbols. Icons are signs that are said to be like, or resemble, the things they represent, e.g., portrait paintings. Indices are signs that are connected to their objects by some causal dependency, e.g., smoke as a sign of fire. Symbols are those signs that are related to their objects by virtue of use or association: they are arbitrary labels, e.g., the word ‘table’. This tripartite division among signs, or variants of it, is routinely put forth to explain differences in the way representational systems are thought to establish their links to the world.
Further, placing a representation in one of the three divisions has been used to account for the supposed differences between conventional and non-conventional representations, between representations that do and do not require learning to understand, and between representations, like language, that need to be read, and those that do not require interpretation. Some theorists, moreover, have maintained that it is only the use of symbols that exhibits or indicates the presence of mind and mental states.

Over the years, this tripartite division of signs, although often challenged, has retained its influence. More recently, an alternative approach to representational systems (or, as he calls them, ‘symbolic systems’) has been put forth by the American philosopher Nelson Goodman (1906-98). Goodman’s treatment of the classical problem of ‘induction’ ~ often phrased in terms of finding some reason to expect that nature is uniform ~ showed, in Fact, Fiction, and Forecast (1954), that we need in addition some reason for preferring some uniformities to others, for without such a selection the uniformity of nature is vacuous. Goodman (1976) has proposed a set of syntactic and semantic features for categorizing representational systems. His theory provides for a finer discrimination among types of systems than Peirce’s categories allow. What also emerges clearly is that many rich and useful systems of representation lack a number of features taken to be essential to linguistic or sentential forms of representation, e.g., discrete alphabets and vocabularies, syntax, logical structure, inference rules, compositional semantics and recursive compounding devices.

As a consequence, although these representations can be appraised for accuracy or correctness, it does not seem possible to analyse such evaluative notions along the lines of standard truth theories, geared as they are to the structure found in sentential systems.

In light of this newer work, serious questions have been raised about the soundness of the tripartite division, and about whether the various psychological and philosophical claims concerning conventionality, learning, interpretation, and so forth, that have been based on this traditional analysis, can be sustained. It is of special significance that Goodman has joined a number of theorists in rejecting accounts of Iconic representation in terms of resemblance. The rejection has been twofold. First, as Peirce himself recognized, resemblance is not sufficient to establish the appropriate referential relations: The numerous prints of a lithograph do not represent one another, any more than an identical twin represents his or her sibling. Something more than resemblance is needed to establish the connection between an Icon or picture and what it represents. Second, since Iconic representations lack as many properties as they share with their referents, and certain non-Iconic symbols can be placed in correspondence with their referents, it is difficult to provide a non-circular account of what, beyond similarity, it is that distinguishes Icons from other forms of representation. What is more, even if these two difficulties could be resolved, it would not show that the representational function of pictures can be understood independently of an associated system of interpretation. The design, □, may be a picture of a mountain, a diagram of the economy, or a word in a foreign language. Or it may have no representational significance at all. Whether it is a representation, and what kind of representation it is, is relative to a system of interpretation.

If so, then, what is the explanatory role of providing reasons for our psychological states and intentional acts? Clearly part of this role comes from the justificatory nature of the reason-giving relation: ‘Things are made intelligible by being revealed to be, or to approximate to being, as they rationally ought to be’. For some writers the justificatory and explanatory tasks of reason-giving simply coincide. The manifestation of rationality is seen as sufficient to explain states or acts quite independently of questions regarding causal origin. Within this model, the greater the degree of rationality we can detect, the more intelligible the sequence will be. Where there is a breakdown in rationality, as in cases of weakness of will or self-deception, there is a corresponding breakdown in our ability to make the action/belief intelligible.

The equation of the justificatory and explanatory roles of rationality can be found within two quite distinct pictures. One account views the attribution of rationality from a third-person perspective. Attributing intentional states to others, and by analogy to ourselves, is a matter of applying to them a certain pattern of interpretation. We ascribe whatever states enable us to make sense of their behaviour as conforming to a rational pattern. Such a mode of interpretation is commonly an ex post facto affair, although it can also aid prediction. Our interpretations are never definitive or closed; they are always open to revision and modification in the light of future behaviour, if such revision enables the person as a whole to appear more rational. Where we fail, we give up the project of seeing the system as rational and instead seek explanations of a mechanistic kind.

The other picture is resolutely first-personal, linked to the claimed prospective character of rationalizing explanations. We make an action, for example, intelligible by adopting the agent’s perspective on it. Understanding is a reconstruction of actual or possible decision-making. It is from such a first-personal perspective that goals are detected as desirable and courses of action as appropriate to the situation. The standpoint of an agent deciding how to act is not that of an observer predicting the next move. When I find something desirable and judge an act an appropriate route to achieving it, I conclude that a certain course of action should be taken. This is different from my reflecting on my past behaviour and concluding that I will do ‘X’ in the future.

For many writers, nonetheless, the justificatory and explanatory roles of reasons cannot simply be equated. To do so fails to distinguish well-founded cases, where I believe or act because of these reasons, from mere rationalization. I may have beliefs from which your innocence could be deduced but nonetheless come to believe you are innocent because you have blue eyes. Likewise, I may have intentional states that give me altruistic reasons for contributing to charity but nonetheless contribute out of a desire to earn someone’s good opinion. In both these cases, even though my belief could be shown to be rational in the light of my other beliefs, and my action desirable in the light of my other intentional states, these rationalizing links would form no part of a valid explanation of the phenomena concerned. Moreover, there are cases of weakness of will, as when I continue to smoke although I judge it would be better to abstain. All this suggests that the mere availability of a justifying reason cannot of itself be sufficient to explain why a belief or action occurred.

If we resist the equation of the justificatory and explanatory work of reason-giving, we must look for a connection between reasons and actions/beliefs that is present in cases where those reasons genuinely explain, and absent in cases of mere rationalization (present when I act on my better judgement, absent when I fail to). The classic suggestion, in this context, is causality: In cases of genuine explanation, the reason-providing intentional states are causes of the beliefs/actions for which they also provide reasons. This position seems, in addition, to find support from considering the conditionals and counterfactuals that our reason-providing explanations admit as valid, which parallel those in other cases of causal explanation. Imagine that I am approaching the Sky Dome’s executive suites looking for the cafeteria. If I believe the café is to the left, I turn accordingly. If my walking in that direction is explained simply by my desire to find the café, then without such a desire I would not have walked in the direction that led toward the executive suites. In general terms, where my reasons explain my action, those reasons were, in the circumstances, necessary for the action and, at the least, made its occurrence probable. These conditional links can be explained if we accept that the reason-giving link is also a causal one. Any alternative account would therefore also need to accommodate them.

A defence of the view that reasons are causes may reply that this requirement seems arbitrary: Why does explanation require citing the cause of the cause of a phenomenon, but not the next link in the chain of causes? Perhaps what is not generally true of explanation is true only of mentalistic explanation: Only in giving the latter type are we obliged to give the cause of a cause. However, this too seems arbitrary. What is the difference between mentalistic and non-mentalistic explanation that would justify imposing more stringent restrictions on the former? The same argument applies to non-cognitive mental states, such as sensations or emotions. Opponents of behaviourism sometimes reply that mental states can be observed: Each of us, through ‘introspection’, can observe at least some mental states, namely our own ~ at least those of which we are conscious.

To this point, the distinction between reasons and causes is motivated in good part by a desire to separate the rational from the natural order. Its probable ancestor is Aristotle’s similar (but not identical) distinction between final and efficient causes ~ that (as a person, fact, or condition) which proves responsible for an effect. Recently, the contrast has been drawn primarily in the domain of actions and, secondarily, elsewhere.

Many who have insisted on distinguishing reasons from causes have failed to distinguish two kinds of reason. Consider my reason for sending a letter by express mail. Asked why I did so, I might say I wanted to get it there in a day or, simply, ‘to get it there in a day’. Strictly, the reason is expressed by ‘to get it there in a day’. But this expresses my reason only because I am suitably motivated: I am in a reason state, wanting to get the letter there in a day. It is reason states ~ especially wants, beliefs and intentions ~ and not reasons strictly so called, that are candidates for causes. The latter are abstract contents of propositional attitudes: The former are psychological elements that play motivational roles.

If reason states can motivate, however, why (apart from confusing them with reasons proper) deny that they are causes? One can say that they are not events, at least in the usual sense entailing change; they are dispositional states (this contrasts them with occurrences, but does not imply that they admit of dispositional analysis). It has also seemed to those who deny that reasons are causes that the former justify, as well as explain, the actions for which they are reasons, whereas the role of causes is at most to explain. Another claim is that the relation between reasons (and here reason states are often cited explicitly) and the actions they explain is non-contingent, whereas the relation of causes to their effects is contingent. The ‘logical connection argument’ proceeds from this claim to the conclusion that reasons are not causes.

These arguments are inconclusive. First, even if causes are events, sustaining causation may explain, as where the (state of) standing of a broken table is explained by the (condition of) support of stacked boards replacing its missing legs. Second, the ‘because’ in ‘I sent it by express because I wanted to get it there in a day’ seems causal-explanatory; how else could the want explain ~ rather than merely rationalize or justify ~ the action? And third, if any non-contingent connection can be established between, say, my wanting something and the action it explains, there are close causal analogues, such as the connection between bringing a magnet to iron filings and their gravitating to it: This is, after all, a ‘definitive’ connection, expressing part of what it is to be magnetic, yet the magnet causes the filings to move.

There is, then, a clear distinction between reasons proper and causes, and even between reason states and event causes: But the distinction cannot be used to show that the relations between reasons and the actions they justify are in no way causal. Precisely parallel points hold in the epistemic domain (and, indeed, for anything that similarly admits of justification and explanation by reasons). Suppose my reason for believing that you received my letter today is that I sent it by express yesterday. My reason, strictly speaking, is that I sent it by express yesterday: My reason state is my believing this. Arguably, my reason justifies the further proposition I believe for which it is my reason, and my reason state ~ my evidence belief ~ both explains and justifies my belief that you received the letter today. I can say that what justifies that belief is (the fact) that I sent the letter by express yesterday, but this statement expresses my believing that evidence proposition; the belief that you received the letter is not justified by the mere truth of the proposition (and can be justified even if that proposition is false).

Similarly, there are, for belief as for action, at least five main kinds of reason: (1) normative reasons, reasons (objective grounds) there are to believe (say, to believe that there is a greenhouse effect); (2) person-relative normative reasons, reasons for, say, me to believe; (3) subjective reasons, reasons I have to believe; (4) explanatory reasons, reasons why I believe; and (5) motivating reasons, reasons for which I believe. Reasons of kinds (1) and (2) are propositions and thus not serious candidates to be causal factors. The states corresponding to (3) may or may not be causal elements. Reasons why, kind (4), are always (sustaining) explainers, though not necessarily even prima facie justifiers, since a belief can be causally sustained by factors with no evidential value. Motivating reasons are both explanatory and possess whatever minimal prima facie justificatory power (if any) a reason must have to be a basis of belief.

Current discussion of the reasons-causes issue has shifted from the question whether reason states can causally explain to the perhaps deeper questions whether they can justify without so explaining, and what kind of causal chain links reason states with the actions and beliefs they do explain. ‘Reliabilists’ tend to take a belief as justified by a reason only if it is held at least in part for that reason, in a sense implying, but not entailed by, being causally based on that reason. ‘Internalists’ often deny this, perhaps thinking we lack internal access to the relevant causal connections. But internalists need internal access only to what justifies ~ say, the reason state ~ and not to the (perhaps quite complex) relations it bears to the belief it justifies, by virtue of which it does so. Many questions also remain concerning the very nature of causation, reason-hood, explanation and justification.

Nevertheless, for most causal theorists, the radical separation of the causal and rationalizing roles of reason-giving explanations is unsatisfactory. For such theorists, where we can legitimately point to an agent’s reasons to explain a certain belief or action, those features of the agent’s intentional states that render the belief or action reasonable must be causally relevant in explaining how the agent came to believe or act in the way that they rationalize. One way of putting this requirement is that reason-giving states not only cause but also causally explain their explananda.

The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of ‘semiotics’ into ‘syntax’, ‘semantics’, and ‘pragmatics’. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It also mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy, especially in the 20th century, has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of ‘logical form’ and the basis of the division between ‘syntax’ and ‘semantics’, as well as problems of understanding the number and nature of specifically semantic relationships such as ‘meaning’, ‘reference’, ‘predication’, and ‘quantification’. Pragmatics includes the theory of ‘speech acts’, while problems of ‘rule-following’ and the ‘indeterminacy of translation’ infect the philosophies of both pragmatics and semantics.

There is no denying it. The language of thought hypothesis has a compelling neatness about it. A thought is depicted as a structure of internal representational elements, combined in a lawful way, and playing a certain functional role in an internal processing economy.

In the philosophy of mind, an adequate conception of mind and its relationship to matter should explain how it is possible for mental events to interact with the rest of the world, and in particular to have a causal influence on the physical world. It is easy to think that this must be impossible: It takes a physical cause to have a physical effect. Yet everyday experience and theory alike show that it is commonplace. Consciousness could hardly have evolved if it had had no uses. In general, it is a measure of the success of any theory of mind and body that it should enable us to avoid ‘epiphenomenalism’.

On the same course, the Scottish philosopher, historian and essayist David Hume (1711-76) said that the earlier of two causally related events is always the cause, and the later the effect. However, there are a number of objections to using the earlier-later ‘arrow of time’ to analyse the directional ‘arrow of causation’. For one, it seems in principle possible that some causes and effects could be simultaneous. More to the point, the idea that time is directed from ‘earlier’ to ‘later’ itself stands in need of philosophical explanation ~ and one of the most popular explanations is that the idea of ‘movement’ from earlier to later depends on the fact that cause-effect pairs always have a given orientation in time. Even so, if we adopt such a ‘causal theory of the arrow of time’, and explain ‘earlier’ as the direction in which causes lie and ‘later’ as the direction of effects, then we will clearly need to find some account of the direction of causality that does not itself assume the direction of time.

A number of such accounts have been proposed. The American philosopher David Lewis (1941-2002) has argued that the asymmetry of causation derives from an ‘asymmetry of over-determination’. The over-determination of present events by past events ~ consider a person who dies after simultaneously being shot and struck by lightning ~ is a very rare occurrence. By contrast, the multiple over-determination of present events by future events is absolutely normal. This is because the future, unlike the past, will always contain multiple traces of any present event. To use Lewis’s example, when the president presses the red button in the White House, the future effects do not only include the dispatch of nuclear missiles, but also his fingerprint on the button, his trembling, the further depletion of his gin and tonic, the recording of the button’s click on tape, the emission of light from the passage of the signal current, and so on.

Lewis relates this asymmetry of over-determination to the asymmetry of causation as follows. If we suppose the cause of a given effect to have been absent, this implies the effect would have been absent too, since (apart from freaks like the lightning-shooting case) there will not be any other causes left to ‘fix’ the effect. By contrast, if we suppose a given effect of some cause to have been absent, this does not imply the cause would have been absent, for there are still all the other traces left to ‘fix’ the cause. Lewis argues that these counterfactual considerations suffice to show why causes are different from effects.

Other philosophers appeal to a probabilistic variant of Lewis’s asymmetry. Following Reichenbach (1956), they note that the different causes of any given type of effect are normally probabilistically independent of each other: By contrast, the different effects of any given type of cause are normally probabilistically correlated. For example, the fact that both obesity and smoking can cause heart disease does not imply that fat people are more likely to smoke than thin ones: But the fact that both lung cancer and nicotine-stained fingers can result from smoking does imply that lung cancer is more likely among people with nicotine-stained fingers. So this account distinguishes effects from causes by the fact that the former, but not the latter, are probabilistically dependent on each other.
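The asymmetry just described can be stated schematically in standard probability notation (a sketch added here for clarity, not part of Reichenbach’s own formulation): where C₁ and C₂ are distinct causes of a common type of effect, and E₁ and E₂ are distinct effects of a common type of cause,

```latex
% Distinct causes of a common type of effect: normally independent
P(C_1 \wedge C_2) \;\approx\; P(C_1)\,P(C_2)
% Distinct effects of a common type of cause: normally correlated
P(E_1 \wedge E_2) \;>\; P(E_1)\,P(E_2)
```

In the smoking example, lung cancer and nicotine-stained fingers play the roles of E₁ and E₂: learning that one obtains raises the probability of the other, which is just what the second inequality records.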

Even so, a fundamental feature of many, if not all, conscious states is their ‘directedness’ or ‘aboutness’: Intentionality. The term was used by the ‘scholastics’, but revived in the 19th century by the German philosopher and psychologist Franz Clemens Brentano (1838-1917). Our beliefs, thoughts, wishes, dreams, and desires are about things. Equally, the words we use to express these beliefs and other mental states are about things. The problem of intentionality is that of understanding the relation obtaining between a mental state, or its expression, and the things it is about. A number of peculiarities attend this relation. First, if I am in some relation to a chair, for instance by sitting on it, then both it and I must exist. But while mostly one thinks about things that exist, sometimes (although this way of putting it has its problems) one has beliefs, hopes, and fears about things that do not, as when the child expects Santa Claus, or fears Zeus. Secondly, if I sit on the chair and the chair is the oldest antique in Toronto, then I sit on the oldest antique in Toronto. But if I plan to avoid the mad axeman, and the mad axeman is in fact my friendly mail carrier, I do not therefore plan to avoid my friendly mail carrier. Intentional relations seem to depend on how the object is specified, or, as Frege put it, on the mode of presentation of the object. This makes them quite unlike the relations whose logic we can understand by means of the predicate calculus, and this peculiarity has led some philosophers, notably the American philosopher Willard van Orman Quine (1908-2000), to declare them unfit for use in serious science. More widespread is the view that since the concept is indispensable, we must either declare serious science unable to deal with the central features of the mind, or explain how serious science may include intentionality.
One approach is this: Since the sentences by which we communicate fears and beliefs have a two-fold aspect, involving both the objects referred to and the mode of presentation under which they are thought of, we can see the mind as essentially related to such sentences. Intentionality then becomes a feature of language, rather than a metaphysical or ontological peculiarity of the mental world.

The attitudes are philosophically puzzling because it is not easy to see how the intentionality of the attitudes fits with another conception of them, as local mental phenomena.

Beliefs, desires, hopes, and fears seem to be located in the heads or minds of the people that have them. Our attitudes are accessible to us through ‘introspection’. We think of attitudes as being caused at certain times by events that impinge on the subject’s body, specifically by perceptual events, such as reading a newspaper or seeing a picture of an ice-cream cone. Still, the psychological level of description carries with it a mode of explanation which ‘has no echo in physical theory’. This thought led Donald Davidson, a major influence on the philosophy of mind and language in the latter half of the 20th century, to introduce the position known as ‘anomalous monism’ in the philosophy of mind, instigating a vigorous debate over the relation between mental and physical descriptions of persons, and the possibility of genuine explanation of events in terms of psychological properties. Following, but enlarging upon, the work of Quine on language, Davidson concentrated upon the figure of the ‘radical interpreter’, arguing that the method of interpreting a language could be thought of as constructing a ‘truth definition’ in the style of Alfred Tarski (1901-83), in which the systematic contribution of elements of sentences to their overall meaning is laid bare. The construction takes place within a generally holistic theory of knowledge and meaning. A radical interpreter can tell when a subject holds a sentence true and, using the principle of charity, ends up making an assignment of truth conditions to individual sentences. Although Davidson is a defender of the doctrines of the ‘indeterminacy of radical translation’ and the ‘inscrutability of reference’, his approach has seemed to many to offer some hope of identifying meaning as a respectable notion, even within a broadly extensional approach to language.
Davidson is also known for rejection of the idea of a conceptual scheme, thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops so does the coherence of the idea that there is anything to translate.

These attitudes can in turn cause other mental phenomena, and eventually observable behaviour in the subject. Seeing the picture of an ice-cream cone leads to a desire for one, which leads me to forget the meeting I am supposed to attend and walk to the ice-cream shop instead. All of this seems to require that attitudes be states and activities that are localized in the subject.

But the phenomena of intentionality suggest that the attitudes are essentially relational in nature: They involve relations to the propositions at which they are directed and to the objects they are about. These objects may be quite remote from the minds of subjects. An attitude seems to be individuated by the agent, the type of attitude (belief, desire, and so forth), and the proposition at which it is directed. It seems essential to the attitude reported by an attitude ascription that it is directed toward the proposition it is reported as being directed toward.

Even so, the formulation ‘actions are doings that are intentional under some description’ reflects the minimizing view of the individuation of actions. The idea is that for what I did to count as an action, there must be a description ‘V-ing’ of what I did such that I V-ed intentionally. Still, since (on the minimizing view) my causing the modification was the same event as my greeting you, and I greeted you intentionally, this event was an action. Or suppose I did not know it was you on the phone, and thought it was my spouse. Still, I would have said ‘Good morning’ intentionally, and that suffices for this event, however described, to be an action. My snoring and involuntary coughing, by contrast, are not intentional under any description, and so are not actions.

No matter, the standard confusion in the philosophical literature is to suppose that there is some special connection between intentionality-with-a-t and intensionality-with-an-s; some authors even allege that they are identical. But, in fact, the two notions are quite distinct. Intentionality-with-a-t is that property of the mind by which it is directed at, or is about, objects and states of affairs in the world. Intensionality-with-an-s is that phenomenon by which sentences fail to satisfy certain tests for extensionality.

There are many standard tests for extensionality, but the two most common in the literature are substitutability of identicals and existential inference. The principle of substitutability states that co-referring expressions can be substituted for one another without changing the truth value of the statement in which the substitution is made. The principle of existential inference states that any statement containing a referring expression implies the existence of the object referred to by that expression. But there are statements that do not satisfy these principles; such statements are said to be intensional with respect to these tests for extensionality. For example, from the statement that:

(1) The sheriff believes that Mr Howard is an honest man

And:

(2) Mr Howard is identical with the notorious outlaw, Jesse James

It does not follow that:

(3) The sheriff believes that the notorious outlaw, Jesse James, is an honest man.

This is a failure of the substitutability of identicals.

From the fact:

(4) Billy believes that Santa Claus will come on Christmas Eve

It does not follow that:

(5) There is some ‘x’ such that Billy believes ‘x’ will come on Christmas Eve.

This is a failure of existential inference. Thus, statements (1) and (4) fail the tests for extensionality and hence are said to be intensional with respect to these tests.
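The two tests can be stated schematically (a sketch in standard logical notation, added here for clarity rather than taken from the original text), where φ(a) is any statement containing the referring expression a:

```latex
% Substitutability of identicals (extensional contexts):
\varphi(a),\; a = b \;\vdash\; \varphi(b)
% Existential inference (existential generalization):
\varphi(a) \;\vdash\; \exists x\, \varphi(x)
```

Belief contexts such as those in statements (1) and (4) block both inference patterns, which is precisely what their failure of extensionality amounts to.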

A proper understanding of intentionality is crucial to the study of a number of topics in cognitive science, including perception, imagery, and consciousness. The term itself, intentionality, can be misleading, in suggesting intentional action, doing something intentionally, with a certain aim or purpose. In cognitive science, the term is used in a different, more technical sense. Intentionality involves reference or aboutness or some similar relation to something having what the scholastics of the Middle Ages called intentional inexistence.

When Ruth thinks of Wally K., as a cognitive scientist, the intentional object of her thought is Wally K., and the intentional content of her thought is that Wally K., is a cognitive scientist. She has a mental representation of him as a cognitive scientist. What Ruth thinks about has intentional inexistence in the sense that her thoughts may be wrong and she can have thoughts about things that do not even exist. She may think incorrectly that Wally K., is a computer scientist or even that Santa Claus is a computer scientist.

If you treat intentionality as a relation to an intentional object, you must remember that it is not a real relation in the way that kissing or touching is. A real relation holds between two existing things independently of how they are conceived. When a woman kisses a man and the man is bald, the woman kisses a bald man. But Ruth can think about a man who happens to be bald without thinking of him as bald: She may represent him as hairy. Similarly, Ruth can think of someone who does not exist, but she cannot kiss or touch someone who does not exist.

Looking for something is an example of an intentional activity in this technical sense of intentional as well as in the more ordinary sense having to do with what you are aiming at. You sometimes look for things that turn out not to exist. Ponce de Leon searched in Florida for the fountain of youth. But there was no such thing to be found.

There can be intentionality without representation. For example, needing something is an intentional phenomenon. The grass in my lawn can need water even though it is not going to get any and even if there is no water to give it. But the grass does not represent the water it needs.

Other examples of intentional phenomena include spoken and written language, gestures, representational paintings, photographs, films, road maps, and traffic lights. It is controversial how these last instances of intentionality are related to the intentionality of thoughts and other cognitive states.

Nonexistent intentional objects like Santa Claus and the fountain of youth raise difficult logical puzzles if taken seriously as objects. What properties do they have? What sorts of properties does Santa Claus have, as he is conceived by a certain child? Perhaps he is fat, lives at the North Pole, dresses in red, drives a sleigh, brings presents to children at Christmas time, and has at least eight reindeer. But intentional objects cannot always have all the properties which they are envisioned as having, because, as in the case of the child’s conception of Santa Claus, a nonexistent intentional object may be envisioned as existent, and it is inconsistent to suppose that something could be both existent and nonexistent.

You must resist the temptation to try to resolve such problems by identifying intentional objects with mental objects such as ideas or mental representations. That identification does not work. The child does have an idea of Santa Claus, and Ponce de Leon had an idea of the fountain of youth. But the child does not believe that his idea of Santa Claus lives at the North Pole. Nor was Ponce de Leon looking for a mental representation of the fountain of youth. He already had a mental representation: He was looking for the [intentional] object of that representation.

It might be suggested that a nonexistent intentional object is a merely possible object. But this is not a completely general account, because some intentional objects are not even possible. Someone may try to find the greatest prime number without realizing that there is no such thing. The intentional object of the attempt ~ the greatest prime number ~ is not a possible object. There is no possible world in which it exists.
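That there is no greatest prime is a theorem, not a contingent fact, which is why no possible world contains one. A sketch of the classical argument, essentially Euclid’s, makes the point:

```latex
% Suppose p were the greatest prime, and consider N = p! + 1.
% Every prime q \le p divides p!, so q leaves remainder 1 when dividing N.
% Hence every prime factor of N exceeds p --- contradicting the supposition.
N = p!\, + 1, \qquad
q \mid p! \;\text{ for every prime } q \le p
\;\Longrightarrow\; q \nmid N
```

Since $N > 1$ must have some prime factor, that factor is a prime greater than $p$, so no prime is greatest.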

One controversy concerning intentionality concerns how to provide a logically adequate account of talk of intentional objects. That is a controversy in philosophical logic and may not be especially important to the rest of cognitive science.

The moral is that you have to take talk of nonexistent intentional objects with a grain of salt, without being too serious about the notion that there really are such things. And yet you have to be careful not to conclude that the child pondering Santa Claus is not really thinking about anything, or that Ponce de Leon was not really looking for anything as he wandered through Florida.

To what extent does cognition involve intentionality? In one view, everything cognitive is intentional: Intentional inexistence is the mark of the mental, according to the German philosopher and psychologist Franz Clemens Brentano (1838-1917), who may be regarded as the founder of the phenomenological movement in philosophy. His major work was ‘Psychologie vom empirischen Standpunkt’ (1874, trans. as ‘Psychology from an Empirical Standpoint’, 1973), which rehabilitated the medieval conception of intentionality as a fundamental aspect of the mental. He also wrote on theological matters and on moral philosophy, where the directedness of emotions allows a notion of their correct and incorrect objects, thus permitting him a notion of moral objectivity.

Clearly, many feelings recognized in folk psychology have intentionality and are not simply raw feels. A child hopes that Santa Claus will bring a big red fire truck and fears that Santa Claus will bring a lump of coal instead. The child is happy that Christmas is tomorrow and unhappy that he has not been a good little boy for the past few weeks. A child’s hopes, fears, happiness, and unhappiness have intentional objects and intentional content.

It is unclear whether all feelings or emotions have intentional content in this way. Do feelings of ‘free-floating’ anxiety and depression have no intentional content, so that you are not anxious or depressed about anything, but just anxious or depressed? Or do such states have a very general, nonspecific content, so that you are anxious or depressed about things in general, just not about anything specific? It is hard to say what turns on the answer to these questions, however.

Perceptual experience has intentionality insofar as it presents or represents a certain environment. How perceptual experience presents or represents things may be accurate or inaccurate. Things may or may not be as they seem to be. Sometimes what you see or seem to see does not really exist, as when Shakespeare’s Macbeth hallucinated a bloody dagger.

The intentional content of perceptual experience is perspectival, representing how things are from here, or even representing how things are as perceived from this place. The content of the experience may even be in part about the experience itself: What is perceived is perhaps seen as causing that very experience.

The dagger is an intentional object of Macbeth’s perceptual experience. That is what he is or seems to be aware of. You may be tempted to think that Macbeth must be aware of a mental image of a dagger, but that is like thinking that Ponce de Leon must have been trying to find an idea of the fountain of youth.

Mental imagery has intentionality. What you image or imagine is the intentional object of your imagining or imaging. When you picture Lucy smiling, Lucy’s smile is what you imagine. Theories of imagery offer accounts of the structure of the inner representation involved in one’s imagery and the processes that operate on that structure. But what you imagine is not that inner mental representation: You imagine Lucy’s smile.

The term ‘mental image’ is ambiguous. Sometimes it refers to what is imagined, as when you picture Lucy smiling. Sometimes it refers to the hypothetical inner representation formed when something is imagined, an inner mental picture or description of Lucy smiling. It is important not to confuse these things. Otherwise, the substantial claim that imagination involves the construction of inner pictures or other mental representations with specific structures will be conflated with the obvious fact that you are capable of imagining various things.

Similarly, it is important to distinguish imaging something revolving from actually revolving a mental representation in your mind or head: It is important to distinguish imagining scanning a scene from scanning an inner mental representation.

It is controversial what sort of introspective awareness you have of your inner mental representations. Matters are only confused through failure to distinguish the various senses of mental image. You have something that might be called ‘introspective’ awareness of mental images in the first sense: Namely, the intentional object of your thoughts. You often know what you are thinking about, imagining, perceiving, and so forth. It is unclear whether you have any corresponding access to the mental representations, if any, underlying your thinking, imagining, perceiving, and so forth.

The ascendancy of cognitive approaches to mind has brought with it a renewed interest in imagery. Two problems concerning representation have held centre stage in these discussions. The first problem is of a piece with older ontological worries over the status of so-called ‘pictures in the mind’. Proponents of imagistic theories often talk in ways that seem to presuppose that images are objects, like physical objects, that can be rotated, scanned, approached, enlarged, etc. Yet it is hard to make sense of such reification, given that mental images have no mass, size, shape, or location. The second problem concerning imagery has close ties to debates over the adequacy of the (digital) computer model of mind. The reason is that images are typically identified with pictures and thus allied with analogue representation. So it is held that if we employ images in cognition, the claim that all mental representation is propositional or sentential, i.e., digital, is false. In turn, if mental processing involves the use of non-digital, pictorial representations, our minds and cognitive activities cannot be understood within the constraints of the standard computer model. Although seemingly separate matters, the issue of ontological reification and the issue of analogue representation are connected for those who assume that analogue representations function via their sharing or having features analogous to those they represent. Most proponents of imagistic explanation allow that their theories would be unsustainable if they required that there literally be items in the mind possessing spatial dimensions and other physical properties. They have offered various proposals attempting to show how it is possible to cash in on talk of using or manipulating images without falling into the trap of reification.
In any case, it should be clear that questions of reification also pose a problem for proponents of sentential models of mind, who claim that we think in words. For the ontological quandary of giving a satisfactory account of how there can be pictures or maps in the head is at root no different from the problem of how there can be words and sentences in the head. And if a satisfactory answer is available to the latter, it should be adaptable to the former.

A good deal of the debate over imagery has been obscured by problematic accounts of the basis of the ‘stands for’ relation and by unsupported assumptions about the nature, function and distinctions among linguistic and non-linguistic forms of representation. For example, it is common for both proponents and critics of imagery to identify images with pictures or picture-like items, and then take it for granted that pictorial representation can be explained in terms of resemblance or another notion of one-to-one correspondence, or to assume that since pictures are like their referents they require no interpretation. But it is highly questionable whether such accounts are adequate for dealing with our everyday use of pictures (maps, diagrams, and so forth) in cognition. The difficulties involved with this older understanding of iconic representation become more acute when applied to imagistic or mental pictures.

There is, moreover, something problematic in the very way the imagery controversy, along with other debates over mind and cognition, has been set up: as a choice between whether humans employ one or two kinds of representational systems. We know that humans make use of an enormous number of different types of [external] representational systems. These systems differ in form and structure along a variety of syntactic, semantic and other dimensions. There appears to be no sense in which these various and diverse systems can be divided into two well-specified kinds. Nor does it seem possible to reduce, decode, or capture the cognitive content of all of these forms of representation in sentential symbols. Any adequate theory of mind is going to have to deal with the fact that many more than two types of representation are employed in our cognitive activities, rather than assume that yet-to-be-discovered modes of internal representation must fit neatly into one or two pre-ordained categories.

Appeals to representations play a prominent role in contemporary work in the study of mind. With some justification, most attention has been focussed on language or language-like symbol systems. Even when some non-linguistic systems are countenanced, they tend to be given second-class status. This practice, however, has had a rather constricting effect on our understanding of human cognitive activities. It has, for example, resulted in a lack of serious examination of the function of the arts in organizing and reorganizing our world, and the cognitive uses of metaphor, expression, exemplification, and the like are typically ignored. Moreover, recognizing that a much broader range of representational systems plays a part in cognition calls a number of philosophical presuppositions and doctrines in the study of mind into question: (1) claims about the uniqueness of representation as the mark of the mental; (2) the identification of contentful or informational states with the sentential or propositional attitudes; (3) the idea that all thought can be expressed in language; (4) the assumption that compositional accounts of the structure of language provide the only model we have for the systematic or productive nature of representational systems in general; and (5) the tendency to construe all cognitive transitions among representations as cases of inference (based on syntactic or logical form).

Propositional attitudes have contents and thus possess semantic properties. A central assumption in much current philosophy of mind is that propositional attitudes like beliefs and desires play a causal or explanatory role in mediating between perception and behaviour. In terms of reasons, we see ourselves and each other as rational, purposive creatures, fitting our beliefs to the world as we perceive it and seeking to obtain what we desire in the light of them. Reason-giving explanation can be offered not only for actions and beliefs, which will gain most attention in this entry, but also for desires, intentions, hopes, fears and angers; placement within a network of rationalizing links is part of the individuating characteristics of this range of psychological states and the intentional acts they explain. The reason-giving relation is a normative one: to cite a reason for believing or acting is to claim that, given the agent’s other psychological states, the belief or action is justified or appropriate, and giving someone’s reason consists in making clear this justificatory link. Paradigmatically, the psychological states that provide an agent with reasons are intentional states individuated in terms of their propositional content. This causal-explanatory conception of propositional attitudes, however, casts little light on their representational aspects. The causal-explanatory role of beliefs and desires depends on how they interact with each other and with subsequent actions. But the representational contents of such states can often involve referential relations to external entities with which thinkers are causally quite unconnected. These referential relations thus seem extraneous to the causal-explanatory roles of mental states.
It follows that the causal-explanatory conception of mental states must somehow be amplified or supplemented if it is to account for representational content. Mental events, states or processes with content include seeing that the door is shut, believing you are being followed, and calculating the square root of two. A mental state with content can fail to refer, but there always exists a specific condition for a state with content to refer to certain things. When the state has a correctness or fulfilment condition, its correctness is determined by whether its referents have the properties the content specifies for them.

In general, we cannot understand a person’s reasons for acting as he does without knowing the array of emotions and sensations to which he is subject, what he remembers and what he forgets, and how he reasons beyond the confines of minimal rationality. Even content-involving perceptual states, which play a fundamental role in individuating content, cannot be understood purely in terms relating to minimal rationality. Overall, contents are normally specified by ‘that . . .’ clauses, and it is natural to suppose that a content has the same kind of sequential and hierarchical structure as the sentence that specifies it. This supposition would be widely accepted for conceptual content. It is, however, a substantive thesis that all content is conceptual. One way of treating one sort of perceptual content is to regard the content as determined by a spatial type, the type under which the region of space around the perceiver must fall if the experience with that content is to represent the environment correctly. Supporters of the thesis that all content is conceptual will say that the legitimacy of using these spatial types in giving the content of experience does not undermine the thesis: the spatial type is just a way of capturing what can equally be captured by conceptual components such as ‘that distance’ or ‘that direction’, where these demonstratives are made available by the perception in question. Defenders of non-conceptual content will respond that these demonstratives themselves cannot be elucidated without mentioning the spatial types, which lack sentence-like structure.

Beliefs are true or false. If, as representationalism has it, beliefs are relations to mental representations, then beliefs must be relations to representations that have truth values among their semantic properties. Sentences, at least declaratives, are exactly the kind of representation that have truth values, in virtue of denoting and attributing. So, if mental representations are as sententialism says, we can readily account for the truth values of mental representations.

Beliefs serve a function within the mental economy. They play a central part in reasoning and thereby contribute to the control of behaviour, a point that has been elaborated and defended by a number of philosophers and psychologists. Rationality requires that a set of beliefs, desires, and actions, also perceptions, intentions, and decisions, fit together in various ways. If they do not, in the extreme case they fail to constitute a mind at all ~ no rationality, no agent. This core notion of rationality in philosophy of mind thus concerns a cluster of personal identity conditions, that is, holistic coherence requirements upon the system of elements comprising a person’s mind. As such, functionalism about content and meaning appears to lead to holism. In general, transitions between mental states, and between mental states and behaviour, depend on the contents of the mental states themselves. If I believe that sharks are dangerous, I will infer from sharks being in the water to the conclusion that people should not be swimming. Suppose I first think that sharks are dangerous, but then change my mind, coming to think that sharks are not dangerous. However, the content that the first belief affirms cannot be the same as the content that the second belief denies, because the transition relations (e.g., the inference from sharks being in the water to what people should do) that constitute the contents changed when I changed my mind. A natural functionalist reply is to say that only some transitions are relevant to content, but functionalists have not told us how to specify which. Appeal to a traditional analytic/synthetic distinction clearly would not do. For example, ‘dog’ and ‘cat’ would have the same content on such a view. It could not be analytic that dogs bark or that cats meow, since we can imagine a non-barking breed of dog and a non-meowing breed of cat.
If ‘Dogs are animals’ is analytic, so is ‘Cats are animals’. If ‘Cats are adult kittens’ is analytic, so is ‘Dogs are adult puppies’. Dogs are not cats ~ but then cats are not dogs. So a functionalist account will not find traditional analytic inference relations that will distinguish the meaning of ‘dog’ from the meaning of ‘cat’. Another functionalist response to holism is to go narrowly contentual, attempting to accommodate intuitions about the stability of content by appeal to wide content.

A person’s putative beliefs must mesh with the person’s desires and decisions, or else they cannot qualify as that individual’s beliefs; similarly for desires, decisions and so forth. This is ‘agent-constitutive rationality’ ~ that agents possess it is more than an empirical hypothesis. A related conception: to be rational (that is, reasonable, well-founded, not subject to epistemic criticism) a belief or decision must at least cohere with the rest of the person’s cognitive system ~ for instance, in terms of logical consistency and application of valid inference procedures. Rationality constraints, therefore, are key linkages among the cognitive, as distinct from qualitative, mental states.

Reasoning capitalizes on various semantic and evidential relations among antecedently held beliefs (and perhaps other attitudes) to generate new beliefs to which subsequent behaviour might be tuned. Apparently, reasoning is a process that attempts to secure new true beliefs by exploiting old [true] beliefs. By the lights of representationalism, reasoning must be a process defined over mental representations. Sententialism tells us that the type of representation in play in reasoning is most likely sentential ~ even in mental representation.

The sentential theory also seems supported by the argument that the ability to think certain thoughts appears intrinsically connected with the ability to think certain others. For example, the ability to think that John hits Mary goes hand in hand with the ability to think that Mary hits John, but not with the ability to think that Toronto is overcrowded. Why is this? The ability to produce or understand certain sentences is intrinsically connected with the ability to produce or understand certain others. For example, there are no native speakers of English who know how to say ‘John hits Mary’ but who do not know how to say ‘Mary hits John’. Similarly, there are no native speakers who understand the former sentence but not the latter. These facts are easily explained if sentences have a syntactic and semantic structure. But if sentences are taken to be atomic, these facts are a complete mystery. What is true for sentences is true for thoughts, if thinking involves manipulating mental representations. If mental representations with a propositional content have a semantic and syntactic structure like that of sentences, it is no accident that one who is able to think that John hits Mary is thereby also able to think that Mary hits John. Furthermore, it is no accident that one who can think these thoughts need not thereby be able to think thoughts having different components ~ for example, the thought that Toronto is overcrowded. And what goes here for thought goes for belief and the other propositional attitudes.
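The point can be put schematically. On a simple compositional semantics (the double-bracket notation below is illustrative, not from the original), the two thoughts are built from the same constituents, differently arranged:

```latex
% Same constituents (hits, john, mary), different arrangement:
\llbracket\text{John hits Mary}\rrbracket
  = \mathrm{hits}(\mathrm{john},\mathrm{mary}),
\qquad
\llbracket\text{Mary hits John}\rrbracket
  = \mathrm{hits}(\mathrm{mary},\mathrm{john})
% 'Toronto is overcrowded' shares no constituents with either thought,
% so mastery of the first pair implies nothing about it.
```

Whoever grasps the predicate and the two names thereby has everything needed for both thoughts, but nothing needed for the third.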

A traditional view of philosophical knowledge can be sketched by comparing and contrasting philosophical and scientific investigation, as follows. The two types of investigation differ both in their methods (the former is a priori, and the latter a posteriori) and in the metaphysical status of their results (the former yields facts that are metaphysically necessary and the latter yields facts that are metaphysically contingent). Yet the two types of investigation resemble each other in that both, if successful, uncover new facts, and these facts, although expressed in language, are generally not about language (except for investigations in such specialized areas as philosophy of language and empirical linguistics).

This view of philosophical knowledge has considerable appeal. But it faces problems. First, the conclusions of some common philosophical arguments seem preposterous. Such positions as that it is no more reasonable to eat bread than arsenic (because it is only in the past that arsenic has poisoned people), or that one can never know one is not dreaming, may seem to go so far against common sense as to be unacceptable for that very reason. Second, philosophical investigation does not lead to a consensus among philosophers. Philosophy, unlike the sciences, lacks an established body of generally-agreed-upon truths. Moreover, philosophy lacks an unequivocally applicable method of settling disagreements. (The qualifier ‘unequivocally applicable’ is to forestall the objection that philosophical disagreements are settled by the method of a priori argumentation: There is often unresolvable disagreement about which side has won a philosophical argument.)

In the face of these and other considerations, various philosophical movements have repudiated the traditional view of philosophical knowledge. Thus, verificationism responds to the unresolvability of traditional philosophical disagreements by putting forth a criterion of literal meaningfulness: A statement is held to be literally meaningful if and only if it is either analytic or empirically verifiable, where a statement is analytic if it is true just as a matter of definition. Traditional controversial philosophical views, such as that it is metaphysically impossible to have knowledge of the world outside one’s own mind, would count as neither analytic nor empirically verifiable, and so, for logical positivism, as cognitively meaningless in the sense of being incapable of truth or falsity, and hence not a possible object of cognition. This required a criterion of meaningfulness, and it was found in the idea of empirical verification. Verification or confirmation is not necessarily something that can be carried out by the person who entertains the sentence or hypothesis in question, or even by anyone at all at the stage of intellectual and technological development achieved at the time it is entertained. A sentence is cognitively meaningful if and only if it is in principle empirically verifiable or falsifiable.

Anything that does not fulfil this criterion is declared literally meaningless. There is no significant ‘cognitive’ question as to its truth or falsity: It is not an appropriate object of enquiry. Moral, aesthetic and other ‘evaluative’ sentences are held to be neither confirmable nor disconfirmable on empirical grounds, and so are cognitively meaningless. They are, at best, expressions of feeling or preference that are neither true nor false. But the positivists did not spend much time trying to show this in detail about the philosophy of the past. They were more concerned with developing a theory of meaning and of knowledge adequate to the understanding and perhaps even the improvement of science.

The logical positivist conception of knowledge in its original and purest form sees human knowledge as a complex intellectual structure employed for the successful anticipation of future experience. It requires, on the one hand, a linguistic or conceptual framework in which to express what is to be categorized and predicted and, on the other, a factual element that provides that abstract form with content. Nothing that anyone can understand or intelligibly think to be so could go beyond the possibility of human experience, and the only reason anyone could have for believing anything must come, ultimately, from experience.

The general project of the positivist theory of knowledge is to exhibit the structure, content, and basis of human knowledge in accordance with these empiricist principles. Since science is regarded as the repository of all genuine human knowledge, this becomes the task of exhibiting the structure, or as it was called, the ‘logic’ of science. The theory of knowledge thus becomes the philosophy of science. It has three major tasks: (1) to analyse the meaning of the statements of science exclusively in terms of observations or experiences in principle available to human beings; (2) to show how certain observations or experiences serve to confirm a given statement in the sense of making it more warranted or reasonable; (3) to show how non-empirical or a priori knowledge of the necessary truths of logic and mathematics is possible even though every matter of fact that can be intelligibly thought or known is empirically verifiable or falsifiable.

1. The slogan ‘the meaning of a statement is its method of verification’ expresses the empirical verification theory of meaningfulness according to which a sentence is cognitively meaningful if and only if it is empirically verifiable. It says in addition what the meaning of each sentence is: It is all those observations that would confirm or disconfirm the sentence. Sentences that would be verified or falsified by all the same observations are empirically equivalent or have the same meaning.

A sentence recording the result of a single observation is an observation or ‘protocol’ sentence. It can be conclusively verified or falsified on a single occasion. Every other meaningful statement is a ‘hypothesis’ which implies an indefinitely large number of observation sentences that together exhaust its meaning, although at no time will all of them have been verified or falsified. To give an ‘analysis’ of a statement of science is to show how it can be reduced in this way to nothing more than a complex combination of directly verifiable ‘protocol’ sentences.

Verificationism is any view according to which the conditions of a sentence’s or a thought’s being meaningful or intelligible are equated with the conditions of its being verifiable or falsifiable. An explicit defence of the position would be a defence of the verifiability principle of meaningfulness. Implicit verificationism is often present in positions or arguments that do not defend that principle in general, but which reject suggestions to the effect that a certain sort of claim is unknowable or unconfirmable, on the sole ground that it would therefore be meaningless or unintelligible. Only if intelligibility is indeed a guarantee of knowability or confirmability is the position sound. If it is, nothing we understand could be unknowable or unconfirmable.

2. The observations recorded in particular ‘protocol’ sentences are said to confirm those ‘hypotheses’ of which they are instances. The task of confirmation theory is therefore to define the notion of a confirming instance of a hypothesis and to show how the occurrence of more such instances adds credibility or warrant to the hypothesis in question. A complete answer would involve a solution of the problem of induction: To explain how any past or present experience makes it reasonable to believe in something that has not yet been experienced.



3. Logical and mathematical propositions, and other necessary truths, do not predict the course of future sense experience. They cannot be empirically confirmed or disconfirmed. But they are essential to science, and so must be accounted for. They are all ‘analytic’ in something like Kant’s sense: True solely in virtue of the meanings of their constituent terms. They serve only to make explicit the contents of and the logical relations among the terms or concepts which make up the conceptual framework through which we interpret and predict experience. Our knowledge of such truths is simply knowledge of what is and what is not contained in the concepts we use.

Experience can perhaps show that a given concept has no instances, or that it is not a useful concept for us to employ. But that would not show that what we understand to be included in that concept is not really included in it, or that it is not the concept we take it to be. Our knowledge of the constituents of and the relations among our concepts is therefore not dependent on experience: It is a priori. It is knowledge of what holds necessarily, and since all necessary truths are ‘analytic’, there is no synthetic a priori knowledge.

The anti-metaphysical empiricism of logical positivism requires that there be no access to any facts beyond sense experience. The appeal to analyticity succeeds in accounting for knowledge of necessary truths only if analytic truths state no facts, and our knowledge of them does not require non-sensory awareness of matters of fact. The reduction of all the concepts of arithmetic, for example, to those of logic alone, as was taken to have been achieved in Whitehead and Russell’s ‘Principia Mathematica’, showed that the truths of arithmetic were derived from nothing more than definitions of their constituent terms and general logical laws. Frege would have called them ‘analytic’ for that reason alone. But for a complete account positivism would also have to show that general logical laws state no facts.

Under the influence of their reading of Wittgenstein’s ‘Tractatus Logico-Philosophicus’, the positivists regarded all necessary and therefore all analytic truths as ‘tautologies’. They do not state relations holding independently of us within an objective domain of concepts. Their truth is ‘purely formal’: They are completely ‘empty’ and ‘devoid of factual content’. They are to be understood as made true solely by our decisions to think and speak in one way rather than another, as somehow true ‘by convention’. A priori knowledge of them is in this way held to be compatible with there being no non-sensory access to a world of things beyond sense experience.

The full criterion of meaningfulness therefore says that a sentence is cognitively meaningful if and only if either it is analytic or it is in principle empirically verifiable or falsifiable.

The interest in logic, however, goes beyond the ability to use it to produce detailed proofs. There are interesting properties that can be proven of logical systems themselves. Many of these proofs of what are called ‘metatheorems’ were developed as part of an endeavour to use logic to provide a foundation for arithmetic. The German mathematician and philosopher of mathematics Gottlob Frege (1848-1925) produced his most important work in the Begriffsschrift (‘concept writing’, 1879), which is also the first example of a formal system in the sense of modern logic. In it Frege undertakes to develop a formal system within which mathematical proofs may be given. It was his discovery of the correct representation of generality, the notions of ‘quantifier’ and ‘variable’, that opened the possibility of successfully achieving this aim. With that notation Frege could represent sentences involving multiple generality (such as the form ‘for every small number ‘e’ there is a number ‘n’ such that . . . ’) on which the validity of much mathematical reasoning depends. In 1884, Frege published the Grundlagen der Arithmetik (translated as The Foundations of Arithmetic by the British philosopher of language J.L. Austin, 1959). The first volume of the Grundgesetze der Arithmetik (1893, translated as The Basic Laws of Arithmetic, 1964) formalized the mathematical approach of the Grundlagen, a task that necessitated giving the first formal theory of classes; it was this theory that was later shown inconsistent by Russell’s paradox.
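The point about multiple generality can be sketched in modern notation. The example below (the familiar limit condition from analysis) is an illustration chosen here, not one drawn from Frege’s own texts; it shows the nesting of quantifiers that subject-predicate grammar cannot represent:

```latex
% 'The sequence a_m tends to L': for every small number \varepsilon
% there is an n such that every later term lies within \varepsilon of L.
\forall \varepsilon > 0 \;\; \exists n \;\; \forall m \, \bigl( m > n \;\rightarrow\; |a_m - L| < \varepsilon \bigr)
```

Reversing the order of the first two quantifiers, writing ∃n ∀ε instead, yields the much stronger claim that a single n works for every ε; keeping such readings distinct is exactly what the quantifier-variable notation achieves.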

Frege’s distinction as a logician is matched by his deep concern with the basic semantic concepts involved in the logical foundations of his work. In a succession of papers he forges the basic concepts and distinctions that have dominated subsequent philosophical investigation of logic and language. The topics of these writings include sense (Sinn) and reference, negation, assertion, truth and falsity, and the nature of thought. Although Frege’s relation to the philosophical surroundings of his time is debatable, these concerns and his approach to them stamp Frege as the founding figure of ‘analytic philosophy’. Nonetheless, his concern to protect a timeless objectivity for thought and its contents has led to accusations of Platonism, and his own views of the objects of mathematics troubled him until the end of his life.

The program of reducing arithmetic to logic turned out to be impossible, but pursuit of this program resulted in a number of important findings. For example, in addition to consistency, another important property of a logical system is completeness. A complete system is one in which the axiom structure is sufficient to allow derivation of all true statements within the particular domain. The work of the Austrian logician Kurt Gödel (1906-78) includes the proof of the completeness of the first-order predicate calculus, and the ground-breaking results commonly referred to as ‘Gödel’s theorems’. Gödel’s theorem of 1931 showed that any system strong enough to provide a consistency proof of arithmetic would need to make logical and mathematical assumptions at least as strong as arithmetic itself, and hence be just as much prey to hidden inconsistencies; his proof that no such system can show its own consistency effectively put an end to Hilbert’s programme. Gödel established that quantificational logic is complete ~ any statement that must be true whenever the premises are true can, in principle, be derived using the standard inference rules of quantificational logic. But the fact that a system is complete does not mean that a procedure exists to generate a proof of any given logical consequence of the premises. If such a procedure exists, the system is decidable. Sentential logic is decidable, and so are some restricted versions of quantificational logic. But Church proved that general quantificational logic is not decidable. In general quantificational logic, the mere fact that we have failed to derive a result from the postulates does not mean that it could not be derived: It may be that we simply have not yet constructed the right proof.
Even more significant to the program of grounding mathematics in logic was Gödel’s proof that, unlike quantificational logic, there is no consistent axiomatization of arithmetic that is complete. This is referred to as the ‘incompleteness of arithmetic’, and is commonly presented as the claim that for any axiomatization of arithmetic there will be a true statement that cannot be proven within the system.

Some of these theorems about logic have played important roles in the development of computer science. Other claims about logic, which are commonly accepted as true but which are not or cannot be proven, have figured prominently in motivating the use of computers to study cognition. An example is Church’s thesis, which holds that any effectively decidable process is computable, which is to say that it can be automated. If this thesis is true, then it follows that it is possible to implement a formal system on a computer that will generate the proof of any particular theorem that follows from the postulates. The assumption that this thesis is true has buttressed the use of computers in studies of cognitive processes: Assuming that cognition relies on decidable procedures, the thesis tells us that these procedures can be implemented on a digital computer as well as in the brain. Many have assumed that the procedures of symbolic logic characterize a good deal of human reasoning, and because these procedures can readily be implemented on a computer, many investigators have tried to develop simulations of human reasoning using computers equipped with these inference procedures. A further source of interest in logic is that numerous philosophers have tried to explicate scientific theories as logical structures, and the structure of scientific explanation in terms of formal logical derivation.
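The claim that sentential logic is decidable can be made concrete with a minimal sketch: a brute-force truth-table check terminates for every formula, and so constitutes a decision procedure for validity. The encoding of formulas as Python boolean expressions is an illustrative convenience adopted here, not anything in the text above:

```python
import itertools

def eval_formula(formula, valuation):
    """Evaluate a propositional formula, written as a Python boolean
    expression over variable names, under a truth-value assignment."""
    return bool(eval(formula, {"__builtins__": {}}, dict(valuation)))

def is_valid(formula, variables):
    """Decide validity by checking every row of the truth table.
    Because the loop always terminates, this is a decision procedure:
    sentential logic is decidable."""
    for values in itertools.product([True, False], repeat=len(variables)):
        valuation = dict(zip(variables, values))
        if not eval_formula(formula, valuation):
            return False  # found a falsifying row
    return True  # true on every row: a tautology

# Modus ponens as a single conditional: ((p -> q) and p) -> q,
# with 'p -> q' encoded as 'not p or q'.
print(is_valid("not ((not p or q) and p) or q", ["p", "q"]))  # True
print(is_valid("p or q", ["p", "q"]))  # False: not a tautology
```

No such exhaustive check exists for general quantificational logic, whose domains may be infinite; that contrast is exactly Church’s undecidability result.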

According to Francis Herbert Bradley (1846-1924), the metaphysical picture to which this leads is one that celebrates unity and wholeness as marks of the real, while anything partial and dependent upon division, in the way that thought formulated in language always is, remains inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley’s dissent from empiricism, his ‘holism’, and the brilliance and style of his writing continue to make him the most interesting of the late 19th-century writers influenced by the German philosopher Georg Wilhelm Friedrich Hegel (1770-1831). Hegel wrote his first major work, the ‘Phänomenologie des Geistes’ (1807, translated as The Phenomenology of Mind, 1977), and in 1816 became professor of philosophy at Heidelberg, where he produced the Enzyklopädie der philosophischen Wissenschaften im Grundrisse (‘Encyclopaedia of the Philosophical Sciences in Outline’). The cornerstone of Hegel’s system, or world view, is the notion of freedom, conceived not as simple licence to fulfil preferences but as the rare condition of living self-consciously in a fully rational community or state ~ a conception criticized, for example, by Karl Raimund Popper (1902-1994). Popper, rejecting the traditional attempt to found scientific method on the support that experience gives to suitably formed generalizations and theories, and stressing the difficulty that the problem of ‘induction’ puts in front of any such method, substitutes an epistemology that starts with the bold, imaginative formation of hypotheses. These hypotheses then face the tribunal of experience, which has the power to falsify them, but not to confirm them.
A theory is scientific, on this view, only if it is capable of being refuted by experience; in Popper’s philosophy of science, falsifiability is the great merit of genuine scientific theory, as opposed to unfalsifiable pseudo-science, notably psychoanalysis and historical materialism. Popper’s idea was that it could be a positive virtue in a scientific theory that it is bold, conjectural, and goes beyond the evidence, but that it must be capable of facing possible refutation. If each and every way things turn out is compatible with the theory, then it is no longer a scientific theory but, for instance, an ideology or article of faith.

The complex relationship Bradley had with pragmatism marks a major crux in the history of philosophy. Pragmatism is, in brief, the philosophy of meaning and truth especially associated with Charles Sanders Peirce (1839-1914) and William James (1842-1910). It is given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as a confused form of thought whose meaning is only that of a corresponding practical maxim (telling us what to do in some circumstances). In James the position issues in a theory of truth, notoriously allowing that beliefs, including, for example, belief in God, are true if the belief ‘works satisfactorily in the widest sense of the word’. On James’s view almost any belief might be respectable, and even true, provided it works (but working is not a simple matter for James). The apparent subjectivist consequences of this were widely assailed by Russell, Moore, and others in the early years of the 20th century. This led to a division within pragmatism between those such as John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more ‘idealistic’ route taken especially by the English writer F.C.S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds, and remarks that the hypothesis would not work because it would not satisfy our (i.e., men’s) egoistic cravings for the recognition and admiration of others. The implication that this is what makes it true that other persons have minds is the disturbing part.

Peirce’s own approach to truth is that it is what [suitable] processes of enquiry would tend to accept if pursued to an ideal limit. Modern pragmatists such as Richard Rorty (1931- ) and, in some writings, Hilary Putnam (1926- ) have usually tried to dispense with a substantial account of truth. A minimal theory of truth, for example, holds that there is no general problem about what makes sentences or propositions true; a minimal theory of value holds that there is nothing useful to say in general about values and valuing. Minimalist approaches arise when the prospects for a substantial meta-theory about some term seem dim. They are thus consonant with suspicion of ‘first philosophy’, or the possibility of a standpoint over and above involvement in some aspect of our activities, from which those activities can be surveyed and described. Minimalism is frequently associated with the anti-theoretical aspects of the later work of Ludwig Wittgenstein (1889-1951), and has also been charged with being a fig-leaf for philosophical bankruptcy or anorexia.

Originally a title for those books of Aristotle that came after the ‘Physics’, the term ‘metaphysics’ is now applied to any enquiry that raises questions about reality that lie beyond or behind those capable of being tackled by the methods of science. Naturally, an immediately contested issue is whether there are any such questions, or whether any text of metaphysics should, in Hume’s words, be ‘committed to the flames, for it can contain nothing but sophistry and illusion’ (Enquiry Concerning Human Understanding). The traditional examples include questions of ‘mind’ and ‘body’, substance and accident, events, causation, and the categories of things that exist. ‘Ontology’, however, is a 17th-century coinage for the branch of metaphysics that concerns itself with what exists. Apart from the ‘ontological’ argument itself there have been many deductive arguments that the world must contain things of one kind or another: simple things, unextended things, eternal substances, necessary beings, and so forth. Such arguments often depend upon some version of the principle of ‘sufficient reason’. Kant is the greatest opponent of the view that unaided reason can tell us in detail what kinds of things must exist, and therefore do exist. On one influential view, the things a theory is committed to are those its variables range over in a properly regimented formal presentation of the theory. Philosophers characteristically charge each other with ‘reifying’ things improperly, and in the history of philosophy every kind of thing will at one time or another have been thought to be the fictitious result of an ontological mistake.

Metaphysics seeks to determine what are the basic or fundamental kinds of things that exist and to specify the nature of these entities. Historically, interest in metaphysics centred on such issues as whether a supreme being or creator god exists, whether there are mental or spiritual phenomena that are different from physical phenomena, and whether there is such a thing as free will. In more recent times it has addressed the question of the kinds of entities that we can include in scientific theories. For example, are mental events the kinds of things that should be posited in a theory of human action? The set of entities posited is in general said to specify the ontology to which the theory is committed.

It is important to note that the character of metaphysical questions is generally taken to be different from the character of ordinary empirical questions, such as whether there are any living dinosaurs. With such empirical questions we rely on techniques such as ordinary observation to settle the issue. Ontological questions are thought to be more fundamental and not resolvable by ordinary empirical investigation. It was thought that to address the classical questions of the existence of God or of minds separate from bodies required a kind of inquiry that went beyond ordinary empirical investigation. Sometimes it was claimed that such issues could be addressed simply through the tools of logic. For example, the ontological argument for God’s existence tried to argue from the idea of God as a perfect being to the actual existence of God: if God did not exist, there would be a more perfect being ~ a being just like God but who actually existed. Thus, the assumption that God does not exist is claimed to be contradictory, so God must exist. The modern ontological questions concern how we should set up the categories through which we conduct our empirical inquiry. The question of the appropriate categories arises before empirical observation and so cannot be easily settled by means of such observation.

To many non-philosophers both classical and contemporary questions of ontology seem peculiarly remote and unproductive. Of what value would it be to have an answer to an ontological question? The very character of ontological questions suggests that they lack practical significance. If ontological differences do not entail physical differences, it would seem that one could hold whatever ontology one wanted and still deal with the physical world in much the same way. When the challenge is put in this way, philosophers often find themselves hard put to provide a satisfactory answer. A number of philosophers, in fact, have tried to divert attention away from metaphysical issues. The logical positivists claimed that most classical questions of ontology were meaningless, whereas Ludwig Wittgenstein (1953) tried to convince readers that when philosophers raised such issues they were letting their language ‘go on holiday’, not raising real questions at all.

Other philosophers have sought to reduce the distance between ontological inquiries and empirical ones. The most influential American philosopher of the latter half of the 20th century, Willard Van Orman Quine (1908-2000), began with work on mathematical logic, issuing in ‘A System of Logistic’ (1934), ‘Mathematical Logic’ (1940), and ‘Methods of Logic’ (1950). It was with the collection of papers ‘From a Logical Point of View’ (1953) that his philosophical importance became widely recognized. His celebrated attack on the analytic/synthetic distinction heralded a major shift away from the view of language descended from logical positivism, and a new appreciation of the difficulty of providing a sound empirical basis for theses concerning ‘convention’, ‘meaning’, and ‘synonymy’.

His reputation was cemented by ‘Word and Object’ (1960), in which the indeterminacy of radical translation first takes centre stage. In this and many subsequent writings Quine took a bleak view of the nature of the language with which we ascribe thoughts and beliefs to ourselves and others. The languages that are properly behaved and suitable for literal and true description of the world are those of mathematics and science. The entities to which our best theories refer must be taken with full seriousness in our ontologies; thus Quine, although an empiricist, supposed that the abstract objects of set theory are required by science, and therefore exist. Quine proposed that when we settle on a scientific theory we thereby settle the question of what ontological scheme we accept. Invoking the framework of quantificational logic, where all the terms referring to objects can be represented as variables in quantified expressions, Quine offers the maxim ‘to be is to be the value of a bound variable’, i.e., the objects to which we attribute properties in our theories are the ones whose existence we accept. Although this attempt to place ontological questions in the context of scientific inquiry may seem particularly attractive when we consider how perplexing the issues are otherwise, we should not think that thereby we really avoid them. What this proposal overlooks is that many of the debates over the adequacy of scientific theories have focussed on the ontology assumed by the theory. This has been particularly true in recent psychology, where there have been active disputes over whether to count mental events as causal factors in an explanatory theory. But such questions are not peculiar to psychology. In physics and biology as well, disputes between theories have often turned on ontological issues as much as on empirical ones.
For example, there was a long controversy between Cartesians and Newtonians during the 17th and 18th centuries over the legitimacy of appeals to action at a distance. Embryology at the end of the last century was torn by a prolonged battle between ‘vitalists’ and ‘mechanists’ over the appropriate kind of explanation for developmental phenomena.

However, consider once again the argument: ‘If anyone knows some ‘p’, then he or she can be certain that ‘p’; but no one can be certain of anything; therefore, no one knows anything’. This argument, advanced in this form by Unger, is instructive. It repeats Descartes’ mistake of thinking that the psychological state of feeling certain ~ which someone can be in with respect to falsehoods, as when I feel certain that Northern Dancer will win the Derby next week, and am wrong ~ is what we are seeking in epistemology. But it also exemplifies the tendency in discussions of knowledge to make the definition of knowledge so highly restrictive that little or nothing passes scrutiny. Should one care if a suggested definition of knowledge is such that, as the argument just quoted tells us, no one can know anything? So long as one has many well-justified beliefs that work well enough in practice, might one not be quite content to know nothing? Some might think that not too bad.

This suggests that the real interest of the points sketched lies with the justification of beliefs rather than with the definition of knowledge. Justification is an important matter, not least because in the area of application in epistemology where the really serious interest should lie ~ in questions about the ‘philosophy of science’ ~ justification is the crucial problem. That is where epistemologists should be getting down to work. By comparison, efforts to define ‘knowledge’ are trivial and occupy too much effort in epistemology. Consider the debate generated by the Gettier counter-examples: the American philosopher Edmund Gettier provided a range of counter-examples to the formula of knowledge as justified true belief. In his cases a belief is true, and the agent is justified in believing it, but the justification does not relate to the truth of the belief in the right way, so that it is relatively accidental, or a matter of luck, that the belief is true. For example, I see what I reasonably and justifiably take to be an event of your receiving a bottle of whiskey, and on this basis I believe you drink whiskey. The truth is that you do not drink whiskey, but on this occasion you were in fact taking delivery of a medical specimen. In such a case my belief is true and justified, but I do not thereby know that you drink whiskey, since this truth is only accidental relative to my evidence. The counter-examples sparked a prolonged debate over the kinds of conditions that might be substituted to give a better account of knowledge, or whether all suggestions would meet similar problems.

The overall problem with justification is that the procedures we adopt, across all walks of epistemic life, appear highly permeable to difficulties posed by scepticism. The problem of justification is therefore in large part the problem of scepticism, which is precisely why discussion of scepticism is most central.

Nonetheless, Russell developed a method of philosophical analysis, the beginnings of which are clear in the work of his idealist phase. This method was central to his revolt against idealism and was employed throughout his subsequent career. Its main distinctive feature is that it has two parts. First, it proceeds backwards from a given body of knowledge (the ‘results’) to its premises, and second, it proceeds forwards from the premises to a reconstruction of the original body of knowledge. Russell often referred to the first stage of philosophical analysis simply as ‘analysis’, in contrast to the second stage, which he called ‘synthesis’. While the first stage was seen as being the most philosophical, both were nonetheless essential to philosophical analysis. Russell consistently adhered to this two-directional view of analysis throughout his career.

Analytic philosophy has never been fixed or stable, because it is intrinsically self-critical and its practitioners are always challenging their own presuppositions and conclusions. However, it is possible to locate a central period in analytic philosophy ~ the period comprising, roughly speaking, the logical positivism immediately prior to the 1939-45 war and the postwar phase of linguistic analysis. Both the prehistory and the subsequent history of analytic philosophy can be defined by the main doctrines of that central period.

In the central period, analytic philosophy was defined by a belief in two linguistic distinctions, combined with a research programme. The two distinctions are, first, that between analytic and synthetic propositions, and, secondly, that between descriptive and evaluative utterances. The research programme is the traditional philosophical research programme concerning language, knowledge, meaning, truth, mathematics and so forth. One way to see the development of analytic philosophy over the past thirty years is to regard it as the gradual rejection of these distinctions, and a corresponding rejection of foundationalism as the crucial enterprise of philosophy. However, in the central period, these two distinctions served not only to identify the main beliefs of analytic philosophy, but, for those who accepted them and the research programme, they defined the nature of philosophy itself.

The distinction between analytic and synthetic propositions was supposed to be the distinction between those propositions that are true or false as a matter of definition or of the meaning of the terms contained in them (the analytic propositions) and those that are true or false as a matter of fact in the world and not solely in virtue of the meaning of the words (the synthetic propositions). Examples of analytic truths would be such propositions as ‘Triangles are three-sided plane figures’, ‘All bachelors are unmarried’, ‘Women are female’, ‘2 + 2 = 4’, and so forth. In each of these, the truth of the proposition is entirely determined by its meaning: They are true by the definitions of the words that they contain. Such propositions can be known to be true or false a priori, and in each case they express necessary truths. Indeed, it was a characteristic feature of the analytic philosophy of this central period that terms such as ‘analytic’, ‘necessary’, and ‘tautological’ were taken to be co-extensive. Contrasted with these were synthetic propositions, which, if they were true, were true as a matter of empirical fact and not as a matter of definition alone. Thus, propositions such as ‘There are more women than men’, ‘Bachelors tend to die earlier than married men’ and ‘Bodies attract each other according to the inverse square law’ are all said to be synthetic propositions, and, if they are true, they express a posteriori empirical truths about the real world that are independent of language. Such empirical truths, according to this view, are never necessary; rather, they are contingent. For philosophers holding these views, the terms ‘a posteriori’, ‘synthetic’, ‘contingent’, and ‘empirical’ were taken to be more or less co-extensive.

It was a basic assumption behind the logical positivist movement that all meaningful propositions were either analytic or empirical, as so defined. The positivists wished to draw a sharp boundary between the meaningful propositions of science and everyday life on the one hand, and the nonsensical propositions of metaphysics and theology on the other. They claimed that all meaningful propositions are either analytic or synthetic: Disciplines such as logic and mathematics fall within the analytic camp; the empirical sciences and much of common sense fall within the synthetic camp. Propositions that were neither analytic nor empirical were meaningless. The slogan of the positivists was called the verification principle, and, in a simple form, it can be stated as follows: All meaningful propositions are either analytic or synthetic, and those that are synthetic are empirically verifiable. This slogan was sometimes shortened to an even simpler form: the meaning of a proposition is just its method of verification.

Nevertheless, how can analysis be informative? This is the question that gives rise to what philosophers have traditionally called ‘the’ paradox of analysis. Thus, consider the following proposition:

(1) To be an instance of knowledge is to be an instance of justified true belief not essentially grounded in any falsehood.

(1), if true, illustrates an important type of philosophical analysis. For convenience of exposition, assume that (1) is a correct analysis. The paradox arises from the fact that if the concept of justified true belief not essentially grounded in any falsehood is the ‘analysans’ of the concept of knowledge, it would seem that they are the same concept, and hence that:

(2) To be an instance of knowledge is to be an instance of knowledge.

would have to be the same proposition as (1). But then how can (1) be informative when (2) is not? This is what might be called the first paradox of analysis.

Classical writings on analysis suggest a second paradox of analysis (Moore, 1942). Consider this:

(3) An analysis of the concept of being a brother is that to be a brother is to be a male sibling.

If (3) is true, it would seem that the concept of being a brother would have to be the same concept as the concept of being a male sibling and that

(4) An analysis of the concept of being a brother is that to be a brother is to be a brother.

would also have to be true, and in fact would have to be the same proposition as (3). Yet (3) is true and (4) is false.

Both these paradoxes rest upon the assumption that analysis is a relation between concepts, rather than one involving entities of other sorts, such as linguistic expressions, and that in a true analysis, analysans and analysandum are the same concept. Both these assumptions are explicit in Moore, whose remarks hint at a solution ~ that a statement of an analysis is a statement partly about the concept involved and partly about the verbal expressions used to express it. He says he thinks a solution of this sort is bound to be right, but fails to suggest one because he cannot see a way in which the analysis can be even partly about the expressions (Moore, 1942).

One way of supplying such a solution to the second paradox is to explicate (3) as:

(5) An analysis is given by saying that the verbal expression ‘x is a brother’ expresses the same concept as is expressed by the conjunction of the verbal expressions ‘x is a male’ when used to express the concept of being male and ‘x is a sibling’ when used to express the concept of being a sibling. (Ackerman, 1990)

An important point about (5) is that, stripped of its philosophical jargon (‘analysis’, ‘concept’, ‘x is a . . . ’), (5) seems to state the sort of information generally given in a definition of the verbal expression ‘brother’ in terms of the verbal expressions ‘male’ and ‘sibling’, where this definition is designed to draw upon listeners’ antecedent understanding of the verbal expressions ‘male’ and ‘sibling’, and thus to tell listeners what the verbal expression ‘brother’ really means, instead of merely providing the information that two verbal expressions are synonymous without specifying the meaning of either one. Thus, the solution to the second paradox seems to make the sort of analysis that gives rise to this paradox a matter of specifying the meaning of a verbal expression in terms of separate verbal expressions already understood, and saying how the meanings of these separate, already-understood verbal expressions are combined; an analysis should both specify the constituent concepts of the analysandum and tell how they are combined. But is this all there is to philosophical analysis?

To answer this question, we must note that, in addition to there being two paradoxes of analysis, there are two types of analysis that are relevant here. (There are also other types of analysis, such as reformatory analysis, where the analysans is intended to improve on and replace the analysandum. But since reformatory analysis involves no commitment to conceptual identity between analysans and analysandum, it does not generate a paradox of analysis and so will not concern us here.) One way to recognize the difference between the two relevant types of analysis is to focus on the difference between the two paradoxes. This can be done by means of the Frege-inspired sense-individuation condition, which is the condition that two expressions have the same sense if and only if they can be interchanged wherever used in propositional-attitude contexts. If the expressions for the analysans and the analysandum in (1) met this condition, (1) and (2) would not raise the first paradox; but the second paradox arises regardless of whether the expressions for the analysans and the analysandum meet this condition. The second paradox is a matter of the failure of such expressions to be interchangeable in sentences involving such contexts as ‘an analysis is given by’. Thus, a solution (such as the one offered above) that is aimed only at such contexts can solve the second paradox. This is clearly not so for the first paradox, however, which will apply to all pairs of propositions expressed by sentences in which expressions for pairs of analysanda and analysantia raising the first paradox are interchanged. For example, consider the following proposition:

(6) Mary knows that some cats lack tails.

It is possible for John to believe (6) without believing

(7) Mary has justified true belief, not essentially grounded

in any falsehood, that some cats lack tails.

Yet this possibility clearly does not mean that the proposition that Mary knows that some cats lack tails is partly about language.

One approach to the first paradox is to argue that, despite the apparent epistemic inequivalence of (1) and (2), the concept of justified true belief not essentially grounded in any falsehood is still identical with the concept of knowledge. Another approach is to argue that in the sort of analysis raising the first paradox, the analysans and analysandum are concepts that are different but that bear a special epistemic relation to each other. Elsewhere it has been suggested that this analysans-analysandum relation has the following facets:

(i) The analysans and analysandum are necessarily
coextensive, i.e., necessarily every instance of one is an
instance of the other.

(ii) The analysans and analysandum are knowable
a priori to be coextensive.

(iii) The analysandum is simpler than the analysans
(a condition whose necessity is recognized in classical
writings on analysis, such as Langford, 1942).

(iv) The analysans does not have the analysandum
as a constituent.

Condition (iv) rules out circularity; but since many valuable quasi-analyses are partly circular, e.g., knowledge is justified true belief supported by known reasons not essentially grounded in any falsehood, it seems best to distinguish between full analysis, for which (iv) is a necessary condition, and partial analysis, for which it is not.

These conditions, while necessary, are clearly insufficient. The basic problem is that they apply to many pairs of concepts that do not seem closely enough related epistemologically to count as analysans and analysandum, such as the concept of being six and the concept of being the fourth root of 1296. Accordingly, a fifth condition can be found by drawing upon what actually seems epistemologically distinctive about analyses of the sort under consideration, which is a certain way they can be justified. This is the philosophical example-and-counterexample method, which in general terms goes as follows. ‘J’ investigates the analysis of K’s concept ‘Q’ (where ‘K’ can but need not be identical to ‘J’) by setting ‘K’ a series of armchair thought experiments, i.e., presenting ‘K’ with a series of simple described hypothetical test cases and asking ‘K’ questions of the form ‘If such-and-such were the case, would this count as a case of Q?’ ‘J’ then contrasts the descriptions of the cases to which ‘K’ answers affirmatively with the descriptions of the cases to which ‘K’ does not, and ‘J’ generalizes upon these descriptions to arrive at the concepts (if possible not including the analysandum) and their mode of combination that constitute the analysans of K’s concept ‘Q’. Since ‘J’ need not be identical with ‘K’, there is no requirement that ‘K’ himself be able to carry out this generalization, to recognize its result as correct, or even to understand the analysans once it is arrived at. This is reminiscent of Walton’s observation that one can simply recognize a bird as a blue-jay without realizing just what features of the bird (beak, wing configuration, and so forth) form the basis of this recognition. Whatever the philosophical significance of this way of recognizing, ‘K’ answers the questions based solely on whether the described hypothetical cases strike him as cases of ‘Q’.
‘J’ observes certain strictures in formulating the cases and questions. He makes the cases as simple as possible, to minimize the possibility of confusion and to minimize the likelihood that ‘K’ will draw upon his philosophical theories (or quasi-philosophical, rudimentary notions if he is philosophically unsophisticated) in answering the questions. For this reason, if two hypothetical test cases yield conflicting results, the conflict should be resolved in favour of the simpler case. ‘J’ makes the series of described cases wide-ranging and varied, with the aim of its being a complete series, where a series is complete if and only if no omitted case is such that, if included, it would change the analysis arrived at. ‘J’ does not, of course, use as a test-case description anything complicated and general enough to express the analysans. There is no requirement that the described hypothetical test cases be formulated only in terms of what can be observed. Moreover, using described hypothetical situations as test cases enables ‘J’ to frame the questions in such a way as to rule out extraneous background assumptions to a degree. Thus, even if ‘K’ correctly believes that all and only P’s are R’s, the question of whether the concepts of ‘P’, ‘R’, or both enter into the analysans of his concept ‘Q’ can be investigated by asking him such questions as ‘Suppose (even if it seems preposterous to you) that you were to find out that there was a P that was not an R. Would you still consider it a case of Q?’

Taking all this into account, the fifth necessary condition for this sort of analysans-analysandum relation is as follows:

(v) If ‘S’ is the analysans of ‘Q’, the proposition that necessarily
all and only instances of ‘S’ are instances of ‘Q’ can be
justified by generalizing from intuitions about the correct
answers to questions about a varied and wide-ranging series
of simple described hypothetical situations.

Are these five necessary conditions jointly sufficient?
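The modal core of these conditions can be put compactly in standard notation. The following is an illustrative rendering only: the necessity operator and the a-priori-knowability operator below are notational conveniences, not symbols used in the text itself.

```latex
% Conditions (i) and (ii) share a biconditional core.
% \Box : metaphysical necessity; K_a : a-priori knowability.
% (Both operators are assumed notation, not the author's own.)
\text{(i)}\quad  \Box\,\forall x\,\bigl(Sx \leftrightarrow Qx\bigr)
\qquad
\text{(ii)}\quad K_a\bigl[\,\forall x\,(Sx \leftrightarrow Qx)\,\bigr]
```

Condition (v) then says that the biconditional in (i) admits a particular style of justification, namely generalization from intuitions about described hypothetical cases.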

The coherence theory of truth is the view that the truth of a proposition consists in its being a member of some suitably defined body of other propositions: a body that is consistent, coherent, and possibly endowed with other virtues, provided these are not themselves defined in terms of truth. The theory, though surprising at first sight, has two strengths: (1) we test beliefs for truth in the light of other beliefs, including perceptual beliefs; and (2) we cannot step outside our own best system of belief to see how well it is doing in terms of correspondence with the world. To many thinkers the weak point of pure coherence theories is that they fail to include a proper sense of the way in which actual systems of belief are sustained by persons with perceptual experience: for a pure coherentist, all that matters is whether a belief belongs to a coherent rather than an incoherent set. This seems not to do justice to our sense that experience plays a special role in controlling our systems of belief, but coherentists have contested the claim in various ways.

Aristotle said that a statement is true if it says of what is that it is, and of what is not that it is not (Metaphysics Γ. iv. 1011). But a correspondence theory is not simply the view that truth consists in correspondence with the ‘facts’; it is rather the view that it is theoretically interesting to realize this. Aristotle’s claim is a harmless platitude, common to all views of truth. A correspondence theory is distinctive in holding that the notions of correspondence and fact can be sufficiently developed to make the platitude into an interesting theory of truth. Opponents charge that this cannot be done, primarily because we have no access to facts independently of the statements and beliefs that we hold: we cannot compare our beliefs with a reality apprehended by means other than those beliefs, or perhaps further beliefs. Hence we have no independent fix on ‘facts’ as something like structures to which our beliefs may or may not correspond.

Coherence is a major player in the arena of knowledge. There are coherence theories of belief, truth and justification, and these combine in various ways to yield theories of knowledge. It seems fitting to proceed from theories of belief, through justification, to truth. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in this book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book, rather than, say, the belief that the page is a mere illusion, a disturbance of thoughts whirling within your mind?

One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief: you respond to sensory stimuli by believing that you are reading a page in a book rather than by believing that you are merely imagining one, situated in some invented world where the reading takes place. Belief, in turn, has an influence on action: you will act differently if you believe that you are reading a page than if you believe you are only imagining it. Perception and action underdetermine the content of belief, however. The same stimuli may produce various beliefs, and various beliefs may produce the same action. The role that gives the belief the content it has is the role it plays in a network of relations to other beliefs, the role in inference and implication, for example. I infer different things from believing that I am reading a page in a book than from other beliefs, just as I infer that belief from different things than I infer other beliefs from.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of belief. That is how coherence comes in. A belief has the content that it does because of the way in which it coheres within a system of beliefs. We might distinguish weak coherence theories of the content of beliefs from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief; strong coherence theories affirm that coherence is the sole determinant of the content of belief.

When we turn from belief to justification, we confront a similar group of coherence theories. What makes one belief justified and another not? The answer is the way it coheres with the background system of beliefs. Again there is a distinction between weak and strong theories of coherence. Weak theories tell us that the way in which a belief coheres with a background system of beliefs is one determinant of justification, other typical determinants being perception, memory and intuition. Strong theories, by contrast, tell us that justification is solely a matter of how a belief coheres with a system of beliefs. There is, however, another distinction that cuts across the distinction between weak and strong coherence theories of justification. It is the distinction between positive and negative coherence theories. A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified. A negative coherence theory tells us that if a belief fails to cohere with a background system of beliefs, then the belief is not justified. We might put this by saying that, according to a positive coherence theory, coherence has the power to produce justification, while according to a negative coherence theory, coherence has only the power to nullify justification.

A strong coherence theory of justification is a combination of a positive and a negative theory that tells us that a belief is justified if and only if it coheres with a background system of beliefs.

Coherence theories of justification and knowledge have most often been rejected as being unable to deal with perceptual knowledge, and, therefore, it will be most appropriate to consider a perceptual example that will serve as a kind of crucial test. Suppose that a person, call her Julie, works with a scientific instrument that has a gauge for measuring the temperature of liquid in a container. The gauge is marked in degrees. She looks at the gauge and sees that the reading is 105 degrees. What is she justified in believing, and why? Is she, for example, justified in believing that the liquid in the container is 105 degrees? Clearly, that depends on her background beliefs. A weak coherence theorist might argue that though her belief that she sees the shape 105 is immediately justified as direct sensory evidence without appeal to a background system, the belief that the liquid in the container is 105 degrees results from coherence with a background system of beliefs affirming that the shape 105 is a reading of 105 degrees on the gauge that measures the temperature of the liquid in the container. This sort of weak coherence theory combines coherence with direct perceptual evidence, the foundation of justification, to account for the justification of our beliefs.

A strong coherence theory would go beyond the claim of the weak coherence theory to affirm that the justification of all beliefs, including the belief that one sees the shape 105, or even the more cautious belief that one sees a shape, results from coherence with a background system. One may argue for this strong coherence theory in a number of different ways. One line of argument would be to appeal to the coherence theory of the content of beliefs: if the content of a perceptual belief results from the relations of the belief to other beliefs in a system of beliefs, then one may argue that the justification of the perceptual belief likewise results from its relations to other beliefs in the system. One may, however, argue for the strong coherence theory without assuming the coherence theory of the content of beliefs. It may be that some beliefs have the content that they do atomistically, but that our justification for believing them is the result of coherence. Consider the very cautious belief that I see a shape. How could the justification for that belief be the result of coherence with a background system of beliefs? What might the background system tell us that would justify that belief? Our background system contains a simple and primary theory about our relation to the world. To come to the specific point at issue, we believe that we can tell a shape when we see one, that we are trustworthy about such simple matters as whether we see a shape before us or not. We may, with experience, come to believe that sometimes we think we see a shape before us when there is nothing there at all, when we see an after-image, for example; so we are not perfect, not beyond deception, yet we are trustworthy for the most part. Moreover, when Julie sees the shape 105, she believes that the circumstances are not those that are deceptive about whether she sees that shape. The light is good, the numeral shapes are large and readily discernible, and so forth.
These are beliefs Julie has that tell her that her belief that she sees a shape is justified. Her belief that she sees a shape is justified because of the way it is supported by those other beliefs; it coheres with them, and so she is justified.

There are various ways of understanding the nature of this support or coherence. One way is to view Julie as inferring that her belief is true from the other beliefs. The inference might be construed as an inference to the best explanation: given her background beliefs, the best explanation Julie has for the existence of her belief that she sees a shape is that she does see a shape. Thus, we might think of coherence as inference to the best explanation based on a background system of beliefs. Since we are not aware of such inferences for the most part, one might object to such an account on the grounds that not all justifying inference is explanatory, and consequently be led to a more general account of coherence as successful competition based on a background system. The belief that one sees a shape competes with the claim that one is deceived and with other sceptical objections. The background system of beliefs informs one that one is trustworthy, enabling one to meet the objections. A belief coheres with a background system just in case the system enables one to meet the sceptical objections, and in that way justifies one in the belief. This is a standard strong coherence theory of justification.

It is easy to illustrate the relationship between positive and negative coherence theories in terms of the standard coherence theory. If some objection to a belief cannot be met in terms of the background system of beliefs of a person, then the person is not justified in that belief. So, to return to Julie, suppose that she has been told that a warning light has been installed on her gauge to tell her when it is not functioning properly, and that when the red light is on, the gauge is malfunctioning. Suppose that when she sees the reading of 105, she also sees that the red light is on. Imagine, finally, that this is the first time the red light has been on and that, after years of working with the gauge, Julie, who has always placed her trust in it, believes what the gauge tells her: that the liquid in the container is at 105 degrees. Though she believes what she reads, her belief that the liquid in the container is at 105 degrees is not a justified belief, because it fails to cohere with her background belief that the gauge is malfunctioning. Thus, the negative coherence theory tells us that she is not justified in her belief about the temperature of the contents of the container. By contrast, when the red light is not illuminated and Julie’s background system tells her that under such conditions the gauge is a trustworthy indicator of the temperature of the liquid in the container, then she is justified. The positive coherence theory tells us that she is justified in her belief because her belief coheres with her background system.
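The interplay of positive and negative components in the Julie example can be sketched as a toy model. Everything below ~ the function name, the supports/defeats encoding, the belief strings ~ is an illustrative assumption, not any formal apparatus from the text.

```python
# Toy sketch (illustrative assumptions only): a strong coherence theory
# combines a positive component (some background belief supports the
# candidate belief) with a negative component (no background belief
# defeats it).

def justified(belief, background, supports, defeats):
    """Justified iff the belief coheres with the background system:
    supported by at least one background belief and defeated by none."""
    supported = any(belief in supports.get(b, set()) for b in background)
    defeated = any(belief in defeats.get(b, set()) for b in background)
    return supported and not defeated

# Encoding of the gauge scenario: the reading supports the temperature
# belief; the red light defeats it.
supports = {"gauge reads 105": {"liquid is 105 degrees"}}
defeats = {"red light is on": {"liquid is 105 degrees"}}

belief = "liquid is 105 degrees"
# Red light off: the belief coheres, so it is justified.
print(justified(belief, {"gauge reads 105"}, supports, defeats))  # True
# Red light on: the defeater blocks coherence, so it is not.
print(justified(belief, {"gauge reads 105", "red light is on"},
                supports, defeats))  # False
```

The model makes the asymmetry visible: the positive component alone can confer justification, while the negative component can only nullify it.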

The foregoing coherence theories of justification have a common feature: they are what are called internalistic theories of justification. A coherentist view counts as internalist if both the beliefs or other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible. A view according to which some of the factors required for justification must be cognitively accessible while others need not be, and in general will not be, would count as an externalist view. Perhaps the clearest example of an internalist position is a ‘foundationalist’ view according to which foundational beliefs pertain to immediately experienced states of mind, and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view can count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required; the same distinction applies to coherentist views.

The coherence theories considered here are internalist: they affirm that coherence is a matter of internal relations between beliefs and that justification is a matter of coherence. If, then, justification is solely a matter of internal relations between beliefs, we are left with the possibility that the internal relations might fail to correspond with any external reality. How, one might object, can a completely internal, subjective notion of justification bridge the gap between mere true belief, which might be no more than a lucky guess, and knowledge, which must be grounded in some connection between internal subjective conditions and external objective realities?

The answer is that it cannot, and that something more than justified true belief is required for knowledge. This result has, however, been established quite apart from consideration of coherence theories of justification. What is required may be put by saying that the justification one has must be undefeated by errors in the background system of beliefs. A justification is undefeated by errors just in case any correction of such errors sustains the justification of the belief on the basis of the corrected system. So knowledge, on this sort of positive coherence theory, is true belief that coheres with the background belief system and with corrected versions of that system. In short, knowledge is true belief plus justification resulting from coherence and undefeated by error. The connection between internal subjective conditions of belief and external objective realities results from the required correctness of our beliefs about the relations between those conditions and realities. In the example of Julie, she believes that her internal subjective conditions of sensory experience and perceptual belief are connected with the external objective reality that the temperature of the liquid in the container is 105 degrees, and the correctness of that background belief is essential to the justification remaining undefeated. So our background system of beliefs contains a simple theory about our relation to the external world that justifies certain of our beliefs that cohere with that system. For such justification to convert to knowledge, that theory must be sufficiently free from error that coherence is sustained in corrected versions of our background system, with the corrected simple background theory providing the connection between the internal conditions and external realities.

The coherence theory of truth arises naturally out of a problem raised by the coherence theory of justification. The problem is that anyone seeking to determine whether she has knowledge is confined to the search for coherence among her beliefs. Sensory experiences are mute until they are represented in the form of some perceptual belief. Beliefs are the engine that pulls the train of justification. But what assurance do we have that our justification is based on true beliefs? What assurance do we have that any of our justifications are undefeated? The fear that we might have none, that our beliefs might be the artifact of some deceptive demon or scientist, leads to the quest to reduce truth to some form, perhaps an idealized form, of justification that would close the threatening sceptical gap between justification and truth. Suppose that a belief is true if and only if it is ideally justified for some person. For such a person there would be no gap between justification and truth, or between justification and undefeated justification. Truth would be coherence with some ideal background system of beliefs, perhaps one expressing a consensus among belief systems or some convergence toward consensus. Such a view is theoretically attractive for the reduction it promises, but it appears open to a profound objection. There is a consensus that we can all be wrong, at least in some matters, for example, about the origins of the universe. But if there is a consensus that we can all be wrong about something, then the consensual belief system itself rejects the equation of truth with consensus. Consequently, the equation of truth with coherence with a consensual belief system is itself incoherent.

Coherence theories of the content of our beliefs and of the justification of our beliefs themselves cohere with our background systems, but coherence theories of truth do not. A defender of coherentism must accept the logical gap between justified belief and truth, but she may believe that her capacities suffice to close the gap to yield knowledge. That view is, at any rate, a coherent one.

Mental states have contents: a belief may have the content that I will catch that train; a hope or a fear may have that same content. A concept is something that is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something ~ a particular object, or property, or relation, or other entity.

A concept is that which is understood by a term, particularly a predicate. To possess a concept is to be able to deploy a term expressing it in making judgements: the ability connects with such things as recognizing when the term applies, and being able to understand the consequences of its application. The term ‘idea’ was formerly used in the same way, but is avoided because of its associations with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject term. This distinction in Frege’s philosophy of language is explored in ‘On Concept and Object’ (1892). Frege regarded predicates as incomplete expressions, in the same way as a mathematical expression for a function, such as sine ( . . . ) or log ( . . . ), is incomplete. Predicates refer to concepts, which are themselves ‘unsaturated’, and cannot be referred to by subject expressions (we thus get the paradox that the concept of a horse is not a concept). Although Frege recognized the metaphorical nature of the notion of a concept being unsaturated, he was rightly convinced that some such notion is needed to explain the unity of a sentence, and to prevent sentences from being thought of as mere lists of names.



Even so, several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of Jane Doe, or as the person located in a certain room now. More generally, a concept ‘c’ is distinct from a concept ‘d’ if it is possible for a person rationally to believe ‘c is such-and-such’ without believing ‘d is such-and-such’. As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by ‘that . . . ’ clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.

Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money; nonetheless, in learning that Anthony Blunt, art historian and Surveyor of the Queen’s Pictures, was a spy, we can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with it. Similarly, a person’s conception of a just arrangement for resolving disputes may involve something like contemporary Western legal systems. But whether or not that is correct, it is quite intelligible for someone to reject this conception by arguing that it does not adequately provide for the elements of fairness and respect that are required by the concept of justice.

A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy ~ and, perhaps, even some of our contemporaries ~ are open to the accusation of not having fully respected the distinction between the two kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought ‘I think’, containing the first-person way of thinking, to conclusions about the non-material nature of the object he himself was. But though the goals of a theory of concepts and a theory of objects are distinct, each theory is required to have an adequate account of its relation to the other. A theory of concepts is unacceptable if it gives no account of how the concept is capable of picking out the objects it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.

A fundamental question for philosophy is: what individuates a given concept ~ that is, what makes it the one it is, rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question. An alternative approach addresses the question by starting from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept ‘and’ is individuated by this condition: it is the unique concept ‘C’ to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premises ‘A’ and ‘B’, ‘A C B’ can be inferred; and from any premise ‘A C B’, each of ‘A’ and ‘B’ can be inferred. Again, a relatively observational concept such as ‘round’ can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept that are not based on perception to those judgements that are. A statement that individuates a concept by saying what is required for a thinker to possess it can be described as giving the ‘possession conditions’ for the concept.
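The two inference forms that the possession condition for ‘and’ mentions correspond to the standard introduction and elimination rules for conjunction. As a purely illustrative rendering (in the proof assistant Lean 4, which the text itself does not use), they can be stated and checked mechanically:

```lean
-- Introduction: from premises A and B, infer A ∧ B.
example (A B : Prop) (ha : A) (hb : B) : A ∧ B := And.intro ha hb

-- Elimination: from premise A ∧ B, infer each of A and B.
example (A B : Prop) (h : A ∧ B) : A := h.left
example (A B : Prop) (h : A ∧ B) : B := h.right
```

On the possession-condition view, what individuates the concept is not these formal rules themselves but the thinker’s finding the corresponding transitions primitively compelling.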

A possession condition for a particular concept may actually make use of that concept; the possession condition for ‘and’ does not. We can also expect to use relatively observational concepts in specifying the kinds of experience that have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition; otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession condition can also respect an insight of the later Austrian philosopher Ludwig Wittgenstein (1889-1951): that a thinker’s mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: It is not possible to master any one of the members of the family without mastering the others. Two of the families that plausibly have this status are these: The family consisting of the simple concepts ‘0’, ‘1’, ‘2’, . . . of the natural numbers and the corresponding concepts of the numerical quantifiers (there are ‘0’ so-and-so’s, there is ‘1’ so-and-so, . . .): And the family consisting of the concepts belief and desire. Such families have come to be known as ‘local holisms’. A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: Belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such a condition involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated. The possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.
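Put schematically, the simultaneous individuation of such a family takes this form, where ‘Φ’ is merely a placeholder for the such-and-such condition mentioned above:

(belief, desire) = the unique pair (C1, C2) such that: a thinker possesses C1 and C2 if and only if Φ(thinker, C1, C2)

Neither member of the pair is individuated in advance of the other; the single condition ‘Φ’ fixes both at once.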

A possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker’s perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject’s environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued from intuitions about particular examples that, even though the thinker’s non-environmental properties and relations remain constant, the conceptual content of his mental state can vary if the thinker’s social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker’s social relations, in particular his linguistic relations.

Concepts have a normative dimension, a fact strongly emphasized by the American logician and philosopher Saul Aaron Kripke (1940- ): For any judgement whose content involves a given concept, there is a ‘correctness condition’ for that judgement, a condition that is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker’s reasons for making judgements. A thinker’s visual perception can give him good reason for judging ‘That man is bald’: It does not by itself give him good reason for judging ‘Rostropovich is bald’, even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object (or property, or function, . . .) which makes the practices of judgement and inference mentioned in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker’s previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue. It would allow us to say how the correctness condition is determined for a judgement in which the concept is applied to a newly encountered object. The judgement is correct if the new object has the property that in fact makes the judgemental practice mentioned in the possession condition yield true judgements, or truth-preserving inferences.

A definition may also proceed by ostension, in other words by simply showing what is intended, as one might ostensively define a shade such as blue, or the taste of a pineapple, by actually exhibiting an example. Such a definition relies on the hearer’s uptake in understanding which feature is intended, and how broadly the example may be taken. A direct ostension is a showing of the object or feature intended, while in deferred ostension one shows one thing in order to direct attention to another, e.g., when showing a photograph to indicate a person, or a thermometer to indicate the temperature.

An ostensive definition is an explanation of the meaning of a word typically involving three elements: (1) an ostensive gesture, (2) an object pointed at which functions as a sample, and (3) the utterance ‘This is (a) ‘W’. Like other forms of explanation of word-meaning, an ostensive definition functions as a rule or standard of correctness for the application of a word. The utterance ‘This is ‘W’ when employed in giving an ostensive definition does not describe an object (i.e., the thing pointed at) as having the property ‘W’, but defines a word. It is most illuminatingly viewed as providing a kind of substitution-rule in accord with which one symbol, e.g., ‘red’, is replaced by a complex symbol consisting of utterance (‘This’ or ‘This colour’), gesture, and sample. Hence instead of ‘The curtains are red’ one can say ‘The curtains are this ↗ colour’, pointing at a sample; an object that matches the sample is correctly characterized as being ‘W’.

Like all definitions, ostensive definitions can be misinterpreted. One way of warding off misunderstanding is to specify the ‘grammatical signpost’ at which the definiendum is stationed, i.e., to give the logico-grammatical category to which it belongs, viz. ‘This ‘C’ is ‘W’, where ‘C’ is a place-holder for, e.g., ‘colour’, ‘length’, ‘shape’, ‘weight’. Like all rules, an ostensive definition does not provide its own method of application. Understanding an ostensive definition involves grasping the ‘method of projection’ from the sample to what it represents, or from the ostensive gesture accompanying the definition to the application of the word. Thus, in the case of defining a length by reference to a measuring rod, one must grasp the method of laying the measuring rod alongside objects to determine their length before one can be said to grasp the use of the definiendum. Ostensive definitions fulfil a crucial role both in explaining word meaning and in justifying or criticizing the application of that word (e.g., ‘Those curtains are not ultramarine ~ this ↗ colour is ultramarine [pointing at a colour chart], and the curtains are not this colour’). An ostensive definition does not give evidential grounds for the application of a word ‘W’, but rather specifies what counts as being ‘W’.

The boundaries of the notion of ostensive definition are vague. A definition of a smell, taste or sound by reference to a sample typically involves no deictic gesture but a presentation of a sample (by striking a keyboard, for example). Conversely, defining directions (for example, ‘North’) by a deictic gesture involves no sample. Nor is the form of words ‘This is (a) ‘W’ essential. ‘This is called ‘W’ or ‘W is this C’ can fulfil the same role.

Whether something functions as a sample (or paradigm) for the correct application of a word is not a matter of its essential nature, but of human choice and convention. Being a sample is a role conferred upon an object momentarily, temporarily or relatively permanently by us ~ it is a use to which we put the object. Thus, we can use the curtains now to explain what ‘ultramarine’ means ~ but, perhaps, never again, although we may often characterize (describe) them as being ultramarine. Or we can use a standard colour chart to explain what ‘ultramarine’ means, although if it is left in the sun and fades, it will no longer be so used. Or we may establish relatively permanent canonical samples, as was the case with the Standard Metre bar. A sample represents that of which it is a sample, and hence must be typical of its kind. It can characteristically be copied or reproduced, and has associated with it a method of comparison. It is noteworthy that one and the same object may function now as a sample in an explanation of meaning or evaluation of correct application, and now as an item described as having the defined property. But these roles are exclusive in as much as what functions as a norm for description cannot simultaneously be described as falling under the norm. Qua sample, the object belongs to the means of representation and is properly conceived as belonging to grammar in an extended sense of the term. Therefore, the Standard Metre bar cannot be said to be (or not to be) one metre long. Furthermore, one and the same object may serve as a sample for more than one expression. Thus, a black patch on a colour chart may serve both to explain what ‘black’ means and as part of an explanation of what ‘darker than’ means.

Although the expression ‘ostensive definition’ is modern philosophical jargon (W. E. Johnson, ‘Logic’, 1921), the idea of ostensive definition is venerable. It is a fundamental constituent of what Wittgenstein called ‘Augustine’s picture of language’, in which it is conceived as the fundamental mechanism whereby language is ‘connected with reality’. The mainstream philosophical tradition has represented language as having a hierarchical structure, its expressions being either ‘definables’ or ‘indefinables’, the former constituting a network of lexically definable terms, the latter of simple, unanalyzable expressions that link language with reality and that inject ‘content’ into the network. Ostensive definitions thus constitute the ‘foundations’ of language and the terminal point of philosophical analysis, correlating primitive terms with entities that are their meanings. On this conception, ostensive definition is privileged: It is final and unambiguous, settling all aspects of word use ~ the grammar of the definiendum being conceived to flow from the nature of the entity with which the indefinable expression is associated. In classical empiricism, definables stand for complex ideas, indefinables for simple ideas that are ‘given’; the samples are mental in nature, the linking mechanism is private ‘mental’ ostensive definition, and the basic samples, stored in the mind, are ideas that are essentially epistemically private and unshareable.

Wittgenstein, who wrote more extensively on ostensive definition than any other philosopher, held this picture of language to be profoundly misleading. Far from samples being ‘entities in reality’ to which indefinables are linked by ostensive definition, they themselves belong to the means of representation. In that sense, there is no ‘link between language and reality’, for explanations of meaning, including ostensive definitions, are not privileged but are as open to misinterpretation as any other form of explanation. The objects pointed at are not ‘simples’ constituting the ultimate metaphysical constituents of reality, but samples with a distinctive use in our language-games. They are not the meanings of words, but instruments of our means of representation. The grammar of a word ostensively defined does not flow from the essential nature of the object pointed at, but is constituted by all the rules for the use of the word, of which ostensive definition is but one. It is a confusion to suppose that expressions must be explained exclusively either by analytic definition (definables) or by ostension (indefinables), for many expressions can be explained in both ways, and there are many other licit forms of explanation of meaning. The idea of ‘private’ or ‘mental’ ostensive definition is wholly misconceived, for there can be no such thing as a rule for the use of a word that cannot logically be understood or followed by more than one person; there can be no such thing as a logically private sample, nor any such thing as a mental sample.

Apart from these negative lessons, a correct conception of ostensive definition by reference to samples resolves the venerable puzzles of the alleged synthetic a priori truths of colour exclusion (e.g., that nothing can be simultaneously red and green all over) and of such apparently metaphysical propositions as ‘black is darker than white’. Such ‘necessary truths’ are indeed not derivable from explicit definitions and the laws of logic alone (i.e., are not analytic), but nor are they descriptions of the essential natures of objects in reality. They are rules for the use of colour words, exhibited in our practices of explaining and applying words defined by reference to samples. What we employ as a sample of red we do not also employ as a sample of green: And a sample of black can, in conjunction with a sample of white, also be used to explain what ‘darker than’ means. What appear to be metaphysical propositions about essential natures are but the shadows cast by grammar.

A definite description is a description of a (putative) object as the single, unique bearer of a property: ‘The smallest positive number’, ‘the first dog born at sea’, ‘the richest person in the world’. In the theory of definite descriptions unveiled in the paper ‘On Denoting’ (Mind, 1905), Russell analysed sentences of the form ‘the ‘F’ is ‘G’ as asserting that there is an ‘F’, that there are no two distinct ‘F’s, and that if anything is ‘F’ then it is ‘G’. A legitimate definition of something as the ‘F’ will therefore depend on there being one and not more than one ‘F’. To say that the ‘F’ does not exist is not to say, paradoxically, of something that exists that it does not, but to say that either nothing is ‘F’, or more than one thing is. Russell found the theory of enormous importance, since it shows how we can understand propositions involving the use of empty terms (terms that do not refer to anything or describe anything) without supposing that there is a mysterious or surrogate object that they have as their reference. So, for example, it becomes no argument for the existence of God that we understand claims in which the term occurs. Analysing the term as a description, we may interpret the claim that God exists as something like ‘there is a unique omnipotent, personal creator of the universe’, and this is intelligible whether or not it is true.

Formally the theory of descriptions can be couched in the two definitions:



The F is G = (∃x)((Fx & (∀y)(Fy ➞ y = x)) & Gx)

The F exists = (∃x)(Fx & (∀y)(Fy ➞ y = x))
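
Applying the second definition to the example discussed above, with ‘Cx’ abbreviating ‘x is an omnipotent, personal creator of the universe’, the claim that God exists comes out as:

God exists = (∃x)(Cx & (∀y)(Cy ➞ y = x))

This is intelligible, and assessable as true or false, whether or not anything in fact satisfies ‘Cx’.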



In the most fundamental scientific sense, to define is to delimit. Thus, definitions serve to fix the boundaries of phenomena or the range of applicability of terms or concepts. That whose range is to be delimited is called the ‘definiendum’, and that which delimits it the ‘definiens’. In practice the hard sciences tend to be more concerned with delimiting phenomena, and definitions are frequently informal, given on the fly, as in ‘Therefore, a layer of high rock strength, called the “lithosphere”, exists near the surface of planets’. Social-science practice tends to focus on specifying the application of concepts through formal operational definitions. Philosophical discussions have concentrated almost exclusively on articulating definitional forms for terms.

Definitions are full if the definiens completely delimits the definiendum, and partial if it only brackets or circumscribes it. Explicit definitions are full definitions where the definiendum and the definiens are asserted to be equivalent. Examples are coined terms and stipulative definitions such as ‘For the purpose of this study the lithosphere will be taken as the upper 100 km of hard rock in the Earth’s crust’. Theories or models that are so rich in structure that sub-portions are functionally equivalent to explicit definitions are said to provide implicit definitions. In formal contexts our basic understanding of full definitions, including relations between explicit and implicit definitions, is provided by the Beth definability theorem. Partial definitions, by contrast, are illustrated by reduction sentences such as:

When in circumstances ‘C’, definiendum ‘D’ applies if situation ‘S’ obtains ~ which says nothing about the applicability of ‘D’ outside ‘C’.
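
In the notation of the formulas given earlier for the theory of descriptions, this reduction sentence can be written:

Cx ➞ (Sx ➞ Dx)

Since the conditional is vacuously satisfied whenever ‘Cx’ fails, the schema leaves the applicability of ‘D’ wholly undetermined outside circumstances ‘C’.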

It is commonly supposed that definitions are analytic specifications of meaning. In some cases, such as stipulative definitions, this may be so. But some philosophers have questioned the supposition, e.g., the German logical positivist Rudolf Carnap (1891-1970). Carnap combined a basic empiricism with the logical tools provided by Frege and Russell, and it is in his work that the main achievements (and difficulties) of logical positivism are best exhibited. His first major work was Der logische Aufbau der Welt (1928, trs. as The Logical Structure of the World, 1967), which takes a solipsistic basis for the construction of the external world, although Carnap later resisted the apparent metaphysical priority here given to experience. Carnap pursued the enterprise of clarifying the structures of mathematics and scientific language (the only legitimate task for scientific philosophy) in Logische Syntax der Sprache (1934, trs. as The Logical Syntax of Language, 1937). Refinements to his syntactic and semantic views continued with Meaning and Necessity (1947), while a general loosening of the original ideal of reduction culminated in the great Logical Foundations of Probability, the most important single work of confirmation theory, in 1950. Other works concern the structure of physics and the concept of entropy.

Reduction sentences are often descriptions of measurement apparatus, specifying empirical correlations between detector output readings and the quantity measured. The larger point here is that specification of meanings is only one of many possible means for delimiting the definiendum. Specification of meaning seems tangential to the bulk of scientific definitional practice.

Definitions are said to be creative if their addition to a theory expands its content, and non-creative if they do not. More generally, we can say that definitions are creative whenever the definiens asserts contingent relations involving the definiendum. Thus, definitions providing analytic specifications of meaning are non-creative. Most explicit definitions are non-creative, and hence eliminable from theories without loss of empirical content. One could relativize the distinction so that definitions redundant of accepted theory or background belief in the scientific context are counted as non-creative. Either way, most other scientific definitions are creative expressions of empirical correlation. Thus, for purposes of philosophical analysis, suppositions that definitions either are non-creative or are meaning specifications demand explicit justification. Much of the literature concerning incommensurability and meaning change in science turns on uncritical acceptance of such suppositions.

Many philosophers have been concerned with admissible definitional forms. Some require real definitions ~ a form of explicit definition in which the definiens equates the definiendum with an essence specified as a conjunction A1 ∧ . . . ∧ An of attributes. (By contrast, nominal definitions use non-essential attributes.) The Aristotelian definitional form further requires that real definitions be hierarchical, where the species of a genus share A1 . . . An-1, being differentiated only by the remaining essential attribute An. Such definitional forms are inadequate for evolving biological species whose essences may vary. Disjunctive polytypic definitions allow changing essences by equating the definiendum with a finite disjunction of conjunctive essences. But future evolution may produce further new essences, so partially specified, potentially infinite disjunctive polytypic definitions were proposed. Such ‘explicit definitions’ fail to delimit the species, since they are incomplete. A superior alternative is to formulate reduction sentences for each essence encountered; these partially define the species but allow the addition of new reduction sentences for subsequently evolved essences.
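Schematically, the definitional forms just contrasted can be set out as follows, where each ‘Ei’ stands for a conjunction of essential attributes:

Real definition: Dx = A1x ∧ . . . ∧ Anx

Disjunctive polytypic definition: Dx = E1x ∨ E2x ∨ . . . ∨ Ekx

Reduction sentences: Eix ➞ Dx, one for each essence Ei so far encountered

The last form partially defines the species yet remains open: A new reduction sentence can simply be added for each subsequently evolved essence.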

Ludwig Wittgenstein (1953) claimed that many natural kinds lack conjunctive essences: Their members stand only in a family resemblance to each other. Philosophers of science have developed the idea in two ways. Achinstein (1968) resorted to cluster analyses, arguing that most scientific definitions (e.g., of gold) specify non-essential attributes of which a ‘goodly number’ must be present for the definiendum to apply. Suppe (1989) argued that natural kinds are constituted by a single kind-making attribute (e.g., being gold), and that which patterns of correlation obtain between the kind-making attribute and other diagnostic characteristics is a factual matter. Thus, issues of appropriate definitional form (e.g., explicit, polytypic, or cluster) are empirical, not philosophical, questions.

Definitions of concepts are closely related to explications, where imprecise concepts (explicanda) are replaced by more precise ones (explicata). The explicandum and explicatum are never equivalent. In an adequate explication the explicatum will accommodate all clear-cut instances of the explicandum and exclude all clear-cut non-instances. The explicatum decides what to do with cases where application of the explicandum is problematic. Explications are neither real nor nominal definitions and are generally creative. In many scientific cases, definitions function more as explications than as meaning specifications or real definitions.

Imagination, most directly, is the faculty of reviving or especially creating images in the mind’s eye. But more generally, it is the ability to create and rehearse possible situations, to combine knowledge in unusual ways, or to invent thought experiments. The English poet Samuel Taylor Coleridge (1772-1834) was the first aesthetic theorist to distinguish the disciplined, creative use of the imagination from the idle play of fancy. Imagination is involved in any flexible rehearsal of different approaches to a problem and is wrongly thought of as opposed to reasoning. It also bears an interesting relation to the process of deciding whether a projected scenario is genuinely possible. We seem able to imagine ourselves having been someone other than who we in fact are, and unable to imagine space being spherical, yet further reflection may lead us to think that the first supposition is impossible and the second entirely possible.

It is probably true that philosophers have shown much less interest in the subject of the imagination during the last fifteen years or so than in the period just before that. It is certainly true that more books about the imagination have been written by those concerned with literature and the arts than have been written by philosophers in general and by those concerned with the philosophy of mind in particular. This is understandable in that the imagination and imaginativeness figure prominently in artistic processes, especially in romantic art. Indeed, those two high priests of romanticism, Wordsworth and Coleridge, made large claims for the role played by the imagination in views of reality, although Coleridge’s thinking on this was influenced by his reading of the German philosophy of the late eighteenth and early nineteenth centuries, particularly Kant and Schelling. Coleridge distinguished between primary and secondary imagination, both of them in some sense productive, as opposed to merely reproductive. Primary imagination is involved in all perception of the world, in accordance with a theory Coleridge derived from Kant, while secondary imagination, the poetic imagination, is creative from the materials that perception provides. It is this poetic imagination which exemplifies imaginativeness in the most obvious way.

Being imaginative is a function of thought, but to use one’s imagination in this way is not just a matter of thinking in novel ways. Someone who, like Einstein for example, presents a new way of thinking about the world need not by reason of this be supremely imaginative (though of course he may be). The use of new concepts, or a new way of using already existing concepts, is not in itself an exemplification of the imagination. What seems crucial to the imagination is that it involves new perspectives, new ways of seeing things, in a sense of ‘seeing’ that need not be literal. It thus involves, whether directly or indirectly, some connection with perception, but in different ways, some of which will become evident later. The aim of the subsequent discussion here will indeed be to make clear the similarities and differences between seeing proper and seeing with the mind’s eye, as it is sometimes put. This will involve some consideration of the nature and role of images.

Connections between the imagination and perception are evident in the ways that many classical philosophers have dealt with the imagination. One of the earliest examples of this, the treatment of phantasia (usually translated as ‘imagination’) in Aristotle’s De Anima III.3, seems to regard the imagination as a sort of half-way house between perception and thought, but in a way which makes it cover appearances in general, so that the chapter in question has as much to do with perceptual appearances, including illusions, as it has to do with, say, imagery. Yet Aristotle also emphasizes that imagining is in some sense voluntary, and that when we imagine a terrifying scene we are not necessarily terrified, any more than we need be when we see terrible things in a picture. How that fits in with the idea that an illusion is or can be a function of the imagination is less than clear. Yet some subsequent philosophers have taken the connection further, Kant in particular, followed in recent times by the English philosopher Peter Frederick Strawson (1919-2006). Strawson’s early work concerned logic and language, very much in the spirit of the general tradition of ordinary-language philosophy of the time. In 1959 his ‘Individuals’ marked a return to wider metaphysical concerns, and his reputation was consolidated by ‘The Bounds of Sense’ (1966), a magnificent tour through the metaphysics of Kant, and by later papers on epistemology, freedom, naturalism and scepticism. Both Kant and Strawson have maintained that all perception involves the imagination, in some sense of that term, in that some bridge is required between abstract thoughts and their perceptual instances. This comes out in Kant’s treatment of what he calls the ‘schematism’, where he rightly argues that someone might have an abstract understanding of the concept of a dog without being able to recognize or identify any dogs.
It is also clear that someone might be able to classify all dogs together without any understanding of what a dog is. The bridge that needs to be provided to link these two abilities is what Kant attributes to the imagination.

In so arguing, Kant goes further than Hume, who thought of the imagination in two connected ways. First, there is the fact that there exist, Hume thinks, ideas which are either copies of impressions provided by the senses or derived from these. Ideas of imagination are distinguished from those of memory, and both of these from impressions of sense, by their lesser vivacity. Second, the imagination is involved in the processes, mainly association of ideas, which take one from one idea to another, and which Hume uses to explain, for example, our tendency to think of objects as having a continuing existence even when we have no impressions of them. Ideas, one might suggest, are for Hume more or less images, and the imagination in the second, wider sense is the mental process which takes one from one idea to another and thereby goes beyond what the senses immediately justify. The role which Kant gives to the imagination in relation to perception in general is obviously a wider and more fundamental role than Hume allows. Indeed, one might take Kant to be saying that were there not the role that he, Kant, insists on, there would be no place for the role which Hume gives it. Kant also allows for a free use of the imagination in connection with the arts and the perception of beauty, but this is a more specific role than that involved in perception in general.

Philosophical issues about perception tend to be issues specifically about sense-perception. In English (and the same is true of comparable terms in many other languages) the term ‘perception’ has a wider connotation than anything that has to do with the senses and the sense-organs, though it generally involves the idea of what may imply, if only in a metaphorical sense, a point of view. Thus, it is now increasingly common for news-commentators, for example, to speak of people’s perception of a certain set of events, even though those people have not been witnesses of them. In one sense, however, there is nothing new about this: In seventeenth- and eighteenth-century philosophical usage, words for perception were used with much wider coverage than sense-perception alone. It is, however, sense-perception that has typically raised the largest and most obvious philosophical problems.

Such problems may be said to fall into two categories. There are, first, the epistemological problems about the role of sense-perception in connection with the acquisition and possession of knowledge of the world around us. These problems ~ does perception give us knowledge of the so-called ‘external world’, and how and to what extent? ~ have become dominant in epistemology since Descartes because of his invocation of the method of doubt, although they undoubtedly existed in philosophers’ minds in one way or another before that. In Anglo-Saxon philosophy such problems have centred on the question whether there are firm data provided by the senses ~ so-called sense-data ~ and, if so, what the relation is of such sense-data to so-called material objects. These are not in themselves problems for the philosophy of mind, although certain answers that undoubtedly belong to the philosophy of mind can certainly add to the epistemological difficulties. If perception is assimilated, for example, to sensation, there is an obvious temptation to think that in perception we are restricted, at any rate initially, to the contents of our own minds.

The second category of problems about perception ~ those that fall directly under the heading of the philosophy of mind ~ are thus, in a sense, prior to the problems that exercised many empiricists in the first half of this century. They are problems about how perception is to be construed and how it relates to a number of other aspects of the mind’s functioning ~ sensation, concepts and other things involved in our understanding of things, belief and judgement, the imagination, our action in relation to the world around us, and the causal processes involved in the physics, biology and psychology of perception. Some of the latter were central to the considerations that Aristotle raised in his ‘De Anima’.

It is obvious enough that sense-perception involves some kind of stimulation of the sense-organs ~ by stimuli that are themselves the product of physical processes ~ and that subsequent processes which are biological in character are then initiated. Moreover, only if the organism in which this takes place is adapted to such stimulation can perception ensue. Aristotle had something to say about such matters, but it was evident to him that such an account was insufficient to explain what perception is. It might be thought that the most obvious thing missing in such an account is some reference to consciousness. But while it may be the case that perception can take place only in creatures that have consciousness, it is not clear that every case of perception directly involves consciousness. There is such a thing as unconscious perception as well: Psychologists have recently drawn attention to the phenomenon described as ‘blindsight’ ~ an ability, generally manifested in patients with certain kinds of brain-dysfunction, to discriminate sources of light even when the people concerned have no consciousness of the lights and think of themselves as merely guessing about them. It is important, then, not to confuse the plausible claim that perception can take place only in conscious beings with the less plausible claim that perception always involves consciousness of objects. A similar point may apply to the relation of perception to concept-possession.

Our own consciousness seems to be the most certain fact confronting us, yet it is almost impossible to say what consciousness is. Is mine like yours? Is ours like that of animals? Might machines come to have consciousness? Whatever complex biological and neural processes go on backstage, it is my consciousness that provides the theatre where my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed. But then how am I to conceive the 'I', or 'self', that is the spectator, or at any rate the owner, of this theatre? These problems together make up what is sometimes called the 'hard problem' of consciousness. One of the difficulties in thinking about consciousness is that the problems seem not to be scientific ones. Leibniz remarked that if we could construct a machine that could think and feel, and could examine its working parts as thoroughly as we pleased, we would still not find any trace of consciousness; he drew the conclusion that consciousness resides in simple subjects, not complex ones. Even if we are convinced that consciousness somehow emerges from the complexity of brain functioning, we may still feel baffled about the way the emergence takes place, or why it takes place in just the way it does.

The nature of conscious experience has been the largest single obstacle to 'physicalism', 'behaviourism', and 'functionalism' in the philosophy of mind: These are all views that, according to their opponents, can only be believed by feigning permanent anaesthesia. But many philosophers are convinced that we can divide and conquer: We may make progress not by thinking of one 'hard' problem, but by breaking the subject up into different problems, and by recognizing that we would do better to think of a relatively undirected whirl of cerebral activity, with no inner theatre, no inner lights, and above all no inner spectator.

Historically, it has been most common to assimilate perception to sensation on the one hand, and to judgement on the other. The temptation to assimilate it to sensation arises from the fact that perception involves the stimulation of an organ and seems to that extent passive in nature. The temptation to assimilate it to judgement arises from the fact that we can be said to perceive not just objects but that certain things hold good of them, so that the findings, so to speak, of perception may have a propositional character. But to have a sensation, such as a pain, by no means entails perceiving anything, or indeed having awareness of anything apart from the sensation itself. Moreover, while in looking out of the window we may perceive (see) that the sun is shining, this may involve no explicit judgement on our part, even if it gives rise to a belief and, in some cases, to knowledge. (Indeed, if 'see that' is taken literally, seeing-that always implies knowledge: To see that something is the case is already to apprehend, and thus know, that it is so.)

The point about sensation was made admirably clear by the Scottish common-sense philosopher Thomas Reid (1710-96). On his approach, sensations of the primary qualities of objects speak to us like words, affording us 'natural signs' of the qualities of things. The mind passes naturally from the sign, as from a word, to what it signifies, and in like manner attends directly to the qualities signified. This is so for 'original perceptions' of primary qualities, whereas perceptions of secondary qualities have to be acquired. Reid's insight has been recaptured in the 20th century in various kinds of 'direct realism'. It enabled him to defend the basic conceptual scheme of common sense against what he saw as the corrosive scepticism of Hume. For Reid, as for G.E. Moore later, the basic principles of common sense cannot be avoided or abandoned, although if we raise the question of their truth we can only appeal to divine harmony (he may not have been so far from Hume as he supposed). Reid's influence persisted in the Scottish school of common-sense philosophy, and his phenomenological insights continue to attract modern attention.

In his 'Essays 1 and 2', Reid said that sensation involved an act of mind 'that hath no object distinct from the act itself'. Perception, by contrast, involved, according to Reid, a 'conception or notion of the object perceived', and a 'strong and irresistible conviction and belief of its present existence', which, moreover, are 'immediate, and not the effect of reasoning'. Reid also thought that perceptions were generally accompanied by sensations and offered a complex account of the relations between the two. Whether all this is correct in every detail need not worry us at present, although it is fairly clear that perceiving need not be believing. Certain illusions, such as the Müller-Lyer illusion, are such that we may see them in a certain way no matter what our beliefs may be about them. Once again, it is arguable that such misperceptions could only take place in believers, whether or not a belief about the object in question occurs in the actual perception.

Similar considerations apply to concept-possession. It is certainly not the case that in order to perceive a cyclotron I must have the (or a) concept of a cyclotron: I may have no idea of what I am perceiving, except, of course, that it is something. But to be something it must have some distinguishable characteristics and must stand in some relation to other objects, including whatever it is that constitutes the background against which it is perceived. In order to perceive it I must therefore have some understanding of the world in which such objects are to be found. That will, in the case of most if not all our senses, be a spatial world in which things persist or change over time. Hence, perception of objects presupposes forms of awareness that are spatiotemporal. It is at least arguable that such frameworks would not be available were we not active creatures capable of moving about in the world in which we live. Once again, the point is not that every perception involves some activity on our part, although some may do so, but that perception can take place only in active creatures, and is to that extent, if only to that extent, not a purely passive process.

It must be evident from all this how far we are getting from the idea that perception is simply a matter of the stimulation of our sense-organs. It may be replied that it has long been clear that there must be some interaction between what is brought about by the stimulation of sense-organs and subsequent neural processes. That, however, does not end the problem, since we are now left with the question of the relation between all that and the story about sensations, beliefs, concepts and activity. Some of these issues are part of the general mind-body problem, but there is also the more specific problem of how these 'mental' items are to be construed in such a way as to have any kind of relation to what are apparently the purely passive causal processes involved in, and set up by, the stimulation of sense-organs.

One idea that has in recent times been taken up by many philosophers and psychologists alike is the idea that perception can be thought of as a species of information-processing, in which the stimulation of the sense-organs constitutes an input to subsequent processing, presumably of a computational form. The psychologist J.J. Gibson suggested that the senses should be construed as systems whose function is to derive information from the stimulus-array ~ to 'hunt for' such information (Gibson, 1966). He thought, however, that it was enough for a satisfactory psychological theory of perception that his account be restricted to the details of such information pick-up, without reference to other 'inner' processes such as concept-use. Although Gibson has been very influential in turning psychology away from the previously dominant sensation-based framework of ideas (of which gestalt psychology was really a special case), his claim that reliance on his notion of information is enough has seemed incredible to many. Moreover, his notion of 'information' is sufficiently close to the ordinary one to warrant the accusation that it presupposes the very ideas of, for example, concept-possession and belief that he claimed to exclude. The idea of information espoused by him (though it has to be said that this claim has been disputed) is that of 'information about', not the technical one involved in information theory or that presupposed by the theory of computation.

The most influential psychological theory of perception has in consequence been that of David Marr, who explicitly adopted the 'computational metaphor' in a fairly literal way. He distinguished three levels of analysis: (1) the description of the abstract computational theory involved, (2) the account of the implementation of that theory in terms of an appropriate algorithm, and (3) the account of the physical realization of the theory in the senses. All this is based on the idea that the senses, when stimulated, provide representations on which the computational processes can work. Other theories have offered analogous accounts, differing in detail. Perhaps the most crucial idea in all this is the one about representations. There is, perhaps, a sense in which what happens at, say, the level of the retina constitutes, as a result of the processes occurring in the course of stimulation, some kind of representation of what produced that stimulation, and thus some kind of representation of the objects of perception. Or so it may seem, if one attempts to describe the relation between the structure and character of the retinal processes and the objects perceived. One might indeed say that the nature of that relation is such as to provide information about the part of the world perceived, in the sense of 'information' presupposed when one says that the rings in the cross-section of a tree's trunk provide information about its age. This is because there is an appropriate causal relation between the two things, which makes it no mere matter of chance. Subsequent processing can then be thought of as carried out on what is provided in the representations in question.

One needs to be careful, however. If there are such representations, they are not representations for the perceiver. Indeed, it is thoughts of that kind which produced the old, and now largely discredited, philosophical theories of perception which suggested that perception is a matter, primarily, of an apprehension of mental states of some kind (e.g., sense-data) which are representatives of perceptual objects, either by being caused by them or by being in some way constitutive of them. Also, if it be said that the idea of information so invoked indicates that there is a sense in which the processes of stimulation can be said to have content ~ a non-conceptual content, distinct from the content provided by the subsumption of what is perceived under concepts ~ it must be emphasized that that content is not content for the perceiver. What the information-processing story provides is, at best, a more adequate categorization than was previously available of the causal processes involved. That may be important, but more should not be claimed for it than there is. If, in a given case of perception, one can be said to have an experience as of an object of a certain shape and kind related to another object, it is only because there is presupposed in the perception the possession of concepts of objects and, more particularly, a concept of space and of how objects occupy space.

Perception is always concept-dependent, at least in the sense that perceivers must be concept possessors and users, and almost certainly in the sense that perception entails concept-use in its application to objects. It is at least arguable that organisms that react in a biologically useful way to objects, but to which the attribution of concepts is implausible, should not be said to perceive those objects, however much the objects figure causally in their behaviour. Moreover, in spite of what was said earlier about unconscious perception and blindsight, perception normally involves consciousness of objects, and that consciousness presents the objects in such a way that the experience has a certain phenomenal character, which derives from the sensations that the causal processes involved set up. This is most evident in the case of touch (which, being a 'contact sense', provides a more obvious occasion for speaking of sensations than do 'distance senses' such as sight). Our tactual awareness of the texture of a surface is, to use a metaphor, 'coloured' by the nature of the sensations that the surface produces in our skin, and of which we can be explicitly aware if our attention is drawn to them (something that gives one indication of how attention, too, is involved in perception).

It has been argued that the phenomenal character of an experience is detachable from its conceptual content, in the sense that an experience of the same phenomenal character could occur even if the appropriate concepts were not available. Certainly the reverse is true ~ a concept-mediated awareness of an object could occur without any sensation-mediated experience ~ as in an awareness of something absent from us. It is also the case, however, that the look of something can be completely changed by the realization that it can be thought of in a certain way, so that it comes to be seen as 'x' rather than 'y'. To the extent that this is so, the phenomenal character of a perceptual experience should be viewed as the result of the way in which sensations produced in us by objects blend with our ways of thinking of and understanding those objects (which, it should be noted, are things in the world and should not be confused with the sensations they produce).

Seeing things in certain ways also sometimes involves the imagination, in that we may bring to bear a way of thinking about an object that goes beyond what is strictly presented. Being visually imaginative, as an artist may have to be, is at best a special case of our general ability to see things as such-and-such. But that general ability is central to the faculty of visual perception and to the faculty of perception in general. What has been said may be enough to indicate the complexities of the notion of perception and how many different phenomena have to be taken into consideration in elucidating that notion within the philosophy of mind. But the crucial issue, perhaps, is how they are all to be fitted together within what may still be called the 'workings of the mind'.

The last two decades have been a period of extraordinary change in psychology. Cognitive psychology, which focuses on higher mental processes like reasoning, decision making, problem solving, language processing and higher-level visual processing, has become a ~ perhaps, the ~ dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour. Largely as a result of this paradigm shift, the level of interaction between the disciplines of philosophy and psychology has increased dramatically. What follows considers some of the areas in which these interactions have been most productive, or at least most provocative.

One of the central goals of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. The philosophy of physics is another area in which studies of this sort have been actively pursued. In undertaking this work, philosophers need not, and typically do not, assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts and explanatory strategies that scientists are using ~ accounts that are more explicit, systematic and philosophically sophisticated than the often rather rough-and-ready accounts offered by the scientists themselves.

Cognitive psychology is in many ways a curious and puzzling science. Many of the theories put forward by cognitive psychologists make use of a family of 'intentional' concepts ~ like believing that 'p', desiring that 'q', and representing 'r' ~ which do not appear in the physical or biological sciences, and these intentional concepts play a crucial role in many of the explanations offered by these theories. People's decisions and actions are explained by appeal to their beliefs and desires. Perceptual processes, some of which may themselves be representational, are said to result in mental states which represent (or sometimes misrepresent) one or another aspect of the cognitive agent's environment. While cognitive psychologists occasionally say a bit about the nature of intentional concepts and the explanations that exploit them, their comments are rarely systematic or philosophically illuminating. Thus, it is hardly surprising that many philosophers have seen cognitive psychology as fertile ground for the sort of careful descriptive work that is done in the philosophy of biology and the philosophy of physics. The American philosopher of mind Jerry Alan Fodor (1935- ) believes that mental representations should be conceived as individual representations with their own identities and structures ~ formulae transformed by computational processes of thought. His 'The Language of Thought' (1975) was a pioneering study in this genre, and it continues to have a major impact on the field.

These philosophical accounts of cognitive theories and the concepts they invoke are generally much more explicit than the accounts provided by psychologists, and they inevitably smooth over some of the rough edges of scientists' actual practice. But if the accounts they give of cognitive theories diverge significantly from the theories psychologists actually employ, then the philosophers have simply got it wrong. There is, however, a very different way in which philosophers have approached cognitive psychology. Rather than merely trying to characterize what cognitive psychology is actually doing, some philosophers try to say what it should and should not be doing. Their goal is not to explicate scientific practice, but to criticize and improve it. The most common target of this critical approach is the use of intentional notions, which have been criticized on various grounds. The two criticisms considered here are that intentional notions fail to supervene on the physiology of the cognitive agent, and that they cannot be 'naturalized'.

According to the sentential theory, the objects of belief are sentences. Some sententialists maintain that public sentences are the objects of belief; Gilbert Ryle, for example, seems to have held that to believe that 'p' is to be disposed to assent to some natural language sentence that means that 'p', and Donald Davidson is usually read as accepting a version of the public sentence approach. The dominant version of the sentential theory, however, is the view that the objects of belief are private sentences. This view goes hand in hand with the computational conception of the mind. Since words are also about things, it is natural to ask how their intentionality is connected to that of thoughts. Two views have been advocated: One takes thought content to be self-subsisting relative to linguistic content, with the latter dependent upon the former; the other takes thought content to be derivative upon linguistic content, so that there can be no thought without a bedrock of language. On one way of putting the relation of language to thought, a language is an abstract pairing of expressions and meanings ~ a function, in the set-theoretic sense, from expressions onto meanings. This makes sense of the fact that there can be a language no one speaks, and it explains why, while it is a contingent fact that 'La neige est blanche' means that snow is white among the French, it is a necessary truth that it means that in French. If French and English are abstract objects in this sense, then they exist whether or not anyone speaks them: They even exist in possible worlds in which there are no thinkers. In this respect, language, as well as such notions as meaning and truth in a language, is prior to thought.
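The set-theoretic picture just described can be sketched in code. This is a minimal illustration, not anything from the text: the dictionaries `french` and `english` and the helper `means` are hypothetical names, and the toy 'meanings' are just strings standing in for genuine meaning-entities.

```python
# Illustrative sketch (assumption, not the author's formalism): a language
# modeled as an abstract pairing of expressions with meanings, i.e. a
# function, in the set-theoretic sense, from expressions onto meanings.
french = {
    "La neige est blanche": "snow is white",   # fixed by the abstract object
    "L'herbe est verte": "grass is green",
}

english = {
    "Snow is white": "snow is white",
}

def means(language, expression):
    """Return the meaning the language pairs with the expression, if any."""
    return language.get(expression)

# It is contingent that anyone *speaks* a language containing this pairing,
# but relative to the abstract object `french` the pairing is necessary:
assert means(french, "La neige est blanche") == "snow is white"
assert means(french, "Snow is white") is None  # not an expression of this language
```

On this picture the pairing exists whether or not it is spoken, which is why the sketch never mentions speakers at all.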

Nevertheless, computers are symbol manipulators: They transform symbols in accordance with fixed syntactic rules and thereby process information. If the mind is a computer, then its states must themselves be symbolic states in whatever inner language the mind employs. So belief must involve a relation to a string of symbols such that the string is a sentence, and has as its natural language counterpart whatever sentence is used to specify the content of the belief in a public context. So, on the dominant version of the sentential theory, believing that nothing succeeds like excess, say, is a matter of the mind standing in a certain computational relation (distinctive of belief) to a sentence which means that nothing succeeds like excess. This sentence is physically realized in the brain by some neurophysiological state, just as symbol strings in electronic computers are physically realized by charged states of grids or patterns of electrical pulses.
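The computational relation 'distinctive of belief' can be pictured, very crudely, as tokening a sentence in a dedicated store. The following is a hedged sketch under that picture only; the names `BeliefBox`, `store` and `believes` are illustrative inventions, not the theory's own vocabulary.

```python
# Sketch of the sentential picture (an assumption for illustration):
# believing that p = standing in a belief-distinctive computational
# relation to an inner sentence token that means that p.
class BeliefBox:
    def __init__(self):
        self._tokens = set()        # inner sentence tokens 'in' the box

    def store(self, sentence):
        """Token a sentence in the belief-distinctive way."""
        self._tokens.add(sentence)

    def believes(self, sentence):
        """The mind believes that p iff a token meaning that p is stored."""
        return sentence in self._tokens

mind = BeliefBox()
mind.store("nothing succeeds like excess")
assert mind.believes("nothing succeeds like excess")
assert not mind.believes("snow is black")
```

The set of stored tokens plays the role the theory assigns to neurophysiological states: what matters is the relation to a syntactically individuated sentence, not the physical medium that realizes it.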

The sentential theory involves no explicit commitment to abstract propositions. But propositions can still enter into the analysis of what it is for a given sentence to mean that such-and-such is the case. Thus, it is a mistake to suppose that a sentential approach to belief automatically repudiates propositions.

There are four main considerations that advocates of the dominant version of the sentential theory usually adduce as motivating their view. To begin with, the view that the mind is a computer is one that has considerable empirical support from cognitive psychology. The sentential theory, then, is an empirically plausible theory, one that supplies a mechanism for the relation that propositionalists take to obtain between minds and propositions: The mechanism is mediation by inner sentences.

Secondly, the sentential theory offers a straightforward explanation for the parallels that obtain between the objects and contents of speech acts and the objects and contents of belief. For example, I may say just what I believe. Furthermore, the object of believing, like the object of saying, can have semantic properties. We may say, for example:

What Jones believes is true

and:

What Jones believes entails what Smith believes

One plausible hypothesis, then, is that the object of belief is the same sort of entity as what is uttered in speech acts (or, what is written down).

The sentential theory also seems supported by the argument that the ability to think certain thoughts appears intrinsically connected with the ability to think certain others. For example, the ability to think that John hits Mary goes hand in hand with the ability to think that Mary hits John, but not with the ability to think that Toronto is overcrowded. Why is this? The ability to produce or understand certain sentences is intrinsically connected with the ability to produce or understand certain others. For example, there are no native speakers of English who know how to say 'John hits Mary' but who do not know how to say 'Mary hits John'. Similarly, there are no native speakers who understand the former sentence but not the latter. These facts are easily explained if sentences have a syntactic and semantic structure. But if sentences are taken to be atomic, these facts are a complete mystery. What is true for sentences is true also for thoughts. If thinking thoughts involves manipulating mental representations, and if representations with a propositional content have a semantic and syntactic structure like that of sentences, it is no accident that one who is able to think that John hits Mary is thereby also able to think that Mary hits John. Furthermore, it is no accident that one who can think these thoughts need not thereby be able to think thoughts having different components ~ for example, the thought that Toronto is overcrowded. And what goes here for thought goes for belief and the other propositional attitudes.
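This systematicity point can be made concrete with a toy model of structured representations. The sketch below is an assumption-laden illustration (the function `can_think` and the tuple encoding of thoughts are invented for the purpose): if a thought is built from recombinable parts, the capacity to token one arrangement of those parts automatically brings the capacity to token its rearrangements.

```python
# Sketch: a thought as a structured tuple (relation, arg1, arg2, ...).
# A thinker can token a thought iff its parts are in their repertoire.
def can_think(concepts, relations, thought):
    relation, *arguments = thought
    return relation in relations and all(a in concepts for a in arguments)

concepts = {"John", "Mary"}
relations = {"hits"}

# Having the parts for 'John hits Mary' guarantees 'Mary hits John'...
assert can_think(concepts, relations, ("hits", "John", "Mary"))
assert can_think(concepts, relations, ("hits", "Mary", "John"))

# ...but not thoughts with different components, e.g. that Toronto is overcrowded.
assert not can_think(concepts, relations, ("overcrowded", "Toronto"))
```

On an atomic picture, where each thought is an unstructured unit, nothing in the model would link the first two capacities, which is exactly the mystery the structured view dissolves.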

Consider the inference from:

Rufus believes that the round object ahead is brown

And:

The round object ahead is the coin Rupert dropped

To:

Rufus believes that the coin Rupert dropped is brown

This inference is strictly parallel to the inference from:

Rufus uttered the sentence ‘The round object ahead is brown’

And:

The round object ahead is the coin Rupert dropped

To:

Rufus uttered the sentence ‘The coin Rupert dropped is brown’

If the immediate objects of belief are sentences, we should expect the former inference to be invalid, just as the latter is.

Another motivating factor is the thought that, since the pattern of causal interactions among beliefs mirrors various inferential relations among the sentences that are ordinarily used to specify the objects of those beliefs, the objects of belief must have logical form. For example, corresponding to the inference from:

All dogs make good pets

And:

All of Jane’s animals are dogs

To:

All of Jane’s animals make good pets

We have the fact that, if John believes that all dogs make good pets and he later comes to believe that all of Jane’s animals are dogs, he will, in all likelihood, be caused to believe that all of Jane’s animals make good pets. Generalizing, we can say that a belief of the form

All F’s are G’s

Together with a belief of the form

All G’s are H’s

Typically causes a belief of the form

All F’s are H’s



This generalization concerns belief alone. But there are also, generalizations linking belief and desire. For example, a desire of the form:

Do A.

Together with a belief of the form:

In order to do A, it is necessary to do B

Typically generates a desire of the form:

Do B.

Now these generalizations categorize beliefs and desires according to the logical form of their objects. They therefore require that those objects have logical form. But the primary possessors of logical form are sentences. Hence the (immediate) objects of beliefs and desires are themselves sentences.
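The two generalizations above can be sketched as operations defined purely over logical form. This is an illustrative toy, not a psychological model: beliefs of the form 'All F's are G's' are encoded as tuples `("ALL", F, G)`, desires to do A as `("DO", A)`, and means-end beliefs ('to do A it is necessary to do B') as `("NEC", A, B)` ~ all encodings invented here for the purpose.

```python
# Belief-belief generalization: All F's are G's + All G's are H's
# typically causes All F's are H's.
def chain_beliefs(beliefs):
    derived = set(beliefs)
    for (k1, f, g1) in beliefs:
        for (k2, g2, h) in beliefs:
            if k1 == k2 == "ALL" and g1 == g2:
                derived.add(("ALL", f, h))
    return derived

beliefs = {("ALL", "Jane's animals", "dogs"), ("ALL", "dogs", "good pets")}
assert ("ALL", "Jane's animals", "good pets") in chain_beliefs(beliefs)

# Belief-desire generalization: Do A + 'to do A it is necessary to do B'
# typically generates Do B.
def derive_desires(desires, beliefs):
    new = set(desires)
    for (_, a) in desires:
        for (k, a2, b) in beliefs:
            if k == "NEC" and a2 == a:
                new.add(("DO", b))
    return new

assert ("DO", "B") in derive_desires({("DO", "A")}, {("NEC", "A", "B")})
```

The point the sketch makes is the one in the text: the rules mention only the form of the tuples, never their subject matter, which is why the objects over which they operate must be sentence-like.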

Advocates of the propositional theory sometimes object to the sentential approach on the grounds that it is chauvinistic. Maybe our beliefs are represented in our heads in the form of sentences in a special mental language, but why should all beliefs necessarily be so represented in all possible creatures? For example, could not belief tokens take the form of graphs, maps, pictures, or other forms dissimilar to any of our public forms of representation?

This objection is based on a misunderstanding. The sentential theory is not normally presented as an analysis of the essence of belief, of what is common to all actual and possible believers in virtue of which they have beliefs. So, it has nothing to say about the beliefs of angels, say, or other possible believers. Rather it is a theory of how belief is actually realized in us.

There is another important class of beliefs, however. These are standardly attributed using predicates of the form 'believes of x that it is F'. Beliefs of this sort are called 'de re' beliefs. Consider, for example, my believing of the building I am facing that it is an imposing structure. This is a belief with respect to a particular building, however that building is described. Suppose, for example, that the building is St. Paul's Cathedral. Then, in believing of the building I am facing that it is an imposing structure, I am thereby believing of St. Paul's that it is an imposing structure. So, for a belief to be 'de re', there must be some actual object 'θ' which the belief is about. By contrast, if I simply believe that the building I am facing is imposing ~ this is the 'de dicto' case ~ I need not believe that the building I am facing is St. Paul's. Moreover, it is not even a condition of my having the belief that any building actually be before me. I might, for example, be under the influence of some drug which has caused me to hallucinate a large building.

De re beliefs, then, are beliefs held with respect to particular things or people, however described, that they have such-and-such properties. On the propositional theory, such beliefs are often taken to require that the given thing or person itself enter into the proposition believed: Believing of Smith that he is dishonest is a matter of standing in the belief relation to the proposition that Smith is dishonest, where this proposition is a complex entity having the person Smith as one of its components.

The sentential theory can account for ‘de re’ belief in a similar fashion. The assumption now is that the inner sentence is a singular one (consisting in the simplest case of a name concatenated with a predicate). This sentence has, as its meaning, a proposition which meets the above requirement (assuming a propositional approach to sentence meanings).

Of the two theories, the sentential view probably has the wider support in philosophy today. However, as earlier comments should have made clear, the two theories are not diametrically opposed to one another. For the sentential theory, unlike the propositional view, is a theory of how belief is actually realized in us; moreover, its advocates are not necessarily against the introduction of abstract propositions.

There is one further feature worth commenting upon that is common to both theories. This is their acceptance of the relational character of belief. The primary reason for taking belief to be relational is syntactic: Belief sentences license existential generalization over their objects. For example:

Jones believes that gorillas are more intelligent than chimpanzees

Entails:

There is something Jones believes.

Not all philosophers accept that existential generalizations like this one should be taken at face value, as indicating a metaphysical commitment to some entity which is the object believed. However, unless some strong argument can be given which shows that this case is anomalous, it is surely reasonable to accept the existential generalization, and hence to grant that there really are objects to which we are related in belief.

The hypothesis especially associated with Fodor is that mental processing occurs in a language different from one's ordinary native language, but underlying and explaining our competence with it. The idea is a development of the Chomskyan notion of an innate universal grammar. It is a way of drawing the analogy between the workings of the brain or mind and those of a standard computer, since computer programs are linguistically complex sets of instructions whose execution explains the surface behaviour of the computer. As an explanation of ordinary language learning and competence the hypothesis has not found universal favour. It apparently explains ordinary representational powers only by invoking innate things of the same sort, and it invites the image of the learning infant translating the language surrounding it back into an innate language whose own powers are a mysterious biological given.

Thoughts, in having contents, possess semantic properties. The syntax/semantics distinction seems straightforward, but there are deep issues in linguistics and the philosophy of language lying in wait to make things more complex. First of all, though syntax is a matter of form, there are many possible levels of formal description. Knowing that a sentence is of the subject-predicate sort is a fairly sophisticated level of formal description: One must know something about grammatical categories to appreciate it. Consider 'The cat is on the mat': Saying that it contains 16 letters and 5 spaces, or that it is composed of certain kinds of black-on-white shapes, is descriptively no less formal, though such descriptions can be appreciated without any background grammatical knowledge.
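The point about levels of formal description can be shown concretely. The sketch below contrasts two purely formal descriptions of the same sentence: a shape-level one (letter and space counts, which the text itself gives) and a grammatical one (a hand-supplied parse; the tuple encoding is an illustrative assumption, not a real parser's output).

```python
sentence = "The cat is on the mat"

# Shape-level formal description: no grammatical knowledge required.
letters = sum(c.isalpha() for c in sentence)
spaces = sentence.count(" ")
assert letters == 16 and spaces == 5   # the counts given in the text

# A more sophisticated formal level: grammatical category structure,
# here written out by hand as (category, constituents...).
parse = ("S", ("NP", "The cat"), ("VP", "is on the mat"))
assert parse[0] == "S"                 # subject-predicate form
```

Both descriptions are formal in that neither mentions what the sentence means; they differ only in how much grammatical background a reader needs to appreciate them.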

However, the complications really multiply in respect of semantics. It is one thing to say that the semantics of a sentence is its meaning. It is another to say what meaning is, or even to say how one would go about describing the meaning of words or sentences. Is it enough to say that the sentence ‘The cat is on the mat’ expresses the fact that the cat is on the mat? On the one hand, this seems uninformative, at least if it is offered as the sole explanation of the meaning of this sentence. On the other hand, it is not clear how we are to understand ‘expresses the fact that’.

What exact form a theory of meaning should take, and which level of syntactic description is most appropriate to a better understanding of language, are problems for linguists and philosophers of language. But the notions of syntax and semantics also play an important part in the philosophy of mind. This arises because it is widely maintained that words and sentences are not the only kinds of things that have syntax and semantics: in one way or another these features have been claimed for mental phenomena such as beliefs and other propositional attitudes. The range of representational systems humans understand and regularly use is surprisingly large, and something distinguishes items that serve as representations from other objects or events. There has been general agreement that the basic notion of a representation involves one thing’s ‘standing for’, ‘being about’, ‘referring to’ or ‘denoting’ something else. Thus, there is a view known as the ‘language of thought’ theory which maintains that beliefs are syntactically characterized items in the mind/brain and that they are semantically evaluable. According to this account, we can best explain, for example, Smith’s belief that snow is white as his having in his mind/brain a token of a language of thought sentence ~ a sentence with some kind of syntax ~ which has as a semantic value the appropriate relation to snow and whiteness. Also, many not committed to the idea of a language of thought would still hold that there is a semantics of attitude states. So the very difficult issue of how to describe the semantic relations has carried over from the philosophy of language to the philosophy of mind. It is often called the ‘problem of intentionality’, though this label covers other issues as well.

Beliefs are true or false. If, as representationalism has it, beliefs are relations to mental representations, then beliefs must be relations to representations that have truth values among their semantic properties. Sentences, at least declaratives, are exactly the kind of representation that have truth values, this in virtue of denoting and attributing. So, if mental representations are sentence-like, we could readily account for the truth-evaluability of mental representations.

Beliefs serve a function within the mental economy. They play a central part in reasoning and thereby contribute to the control of behaviour in various ways. This core notion of rationality in the philosophy of mind thus concerns a cluster of personal identity conditions, that is, holistic coherence requirements upon the system of elements comprising a person’s mind. A person’s putative beliefs must mesh with the person’s desires and decisions, or else they cannot qualify as the individual’s beliefs. Similarly, mutatis mutandis, for desires, decisions, and so forth. This is ‘agent-constitutive rationality’ ~ that agents possess it is more than an empirical hypothesis. A related conception is epistemic or ‘normative rationality’: to be rational (that is, reasonable, well-founded, not subject to epistemic criticism), a belief or decision must at least cohere with the rest of a person’s cognitive system ~ for instance, in terms of logical consistency and the application of valid inference procedures. Rationality constraints, therefore, are key linkages among the cognitive states. The main issue is characterizing these types of mental coherence.

Reasoning capitalizes on various semantic and evidential relations among antecedently held beliefs (and perhaps other attitudes) to generate new beliefs to which subsequent behaviour might be tuned. Apparently, reasoning is a process that attempts to secure new true beliefs by exploiting old (true) beliefs. By the lights of representationalism, reasoning must be a process defined over mental representations. Sententialism tells us that the type of representation in play in reasoning is most likely sentential ~ even if mental ~ representation. Possibly, in reasoning mental representations stand to one another just as do public sentences in a valid formal derivation. Reasoning would then preserve truth of belief by being the manipulation of truth-valued sentential representations according to rules so selectively sensitive to the syntactic properties of the representations as to respect and preserve their semantic properties. The sententialist hypothesis is thus that reasoning is formal inference: it is a process tuned primarily to the structure of mental sentences. Reasoners, then, are things very much like classically programmed computers.
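The comparison with classically programmed computers can be made vivid with a toy sketch: an inference rule that inspects only the syntactic shape of sentence-like representations, yet thereby preserves truth. The tuple encoding below is a hypothetical illustration, not a claim about actual mental syntax:

```python
# Toy sententialism: beliefs are sentence-like structures; reasoning is
# a rule (here, modus ponens) sensitive only to their syntactic form.
# Conditionals are encoded as tuples ('if', antecedent, consequent).

def close_under_modus_ponens(beliefs):
    """From ('if', P, Q) together with P, add Q; repeat to a fixed point."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(derived):
            if isinstance(b, tuple) and b[0] == 'if' and b[1] in derived:
                if b[2] not in derived:
                    derived.add(b[2])
                    changed = True
    return derived

beliefs = {'rain', ('if', 'rain', 'wet'), ('if', 'wet', 'slippery')}
print(close_under_modus_ponens(beliefs))
# derives 'wet' and then 'slippery' purely by matching syntactic shapes
```

Nothing in the rule consults what ‘rain’ or ‘wet’ mean; if the premisses are true, the conclusions are too ~ which is exactly the sententialist’s picture of how syntax can be made to respect semantics.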

Would that the story could be so tidily told. Arguably we have infinitely many beliefs. Yet certainly the finitude of the brain, or of the relevant representational devices, defies an infinity of corresponding representations. So preserving Sententialism requires distinguishing between (finitely many) actual beliefs ~ these being relations to actual Mentalese sentences ~ and (infinitely many) dispositional beliefs ~ these being the unactualized but potential consequences of their actual counterparts. With this distinction in hand, the mind ~ as a sentential processor ~ is able elegantly to manage and manipulate its actual beliefs so as regularly to produce the new beliefs rationally demanded of it in response to detectable environmental fluctuations. This and other related matters lead to notoriously difficult research problems whose solution certainly bears upon the plausibility of the language of thought. The sententialist must admit that if these problems finally prove intractable, then whatever warrant sententialists might otherwise have had evaporates. But this aside, there are additional reasons in abductive support of Sententialism.

Nevertheless, representationalism is launched by the assumption that psychological states are relational, that being in a psychological state minimally involves being related to something. But perhaps psychological states are not relational at all. Might not thinking a certain thought be not a relation to anything, but simply the monadic property of thinking in a certain way? Adverbialism begins by denying that expressions of psychological states are relational, infers that psychological states themselves are monadic and, thereby, opposes classical versions of representationalism, including Sententialism.

Adverbialism aspires to ontological simplicity in eschewing the existence of entities as theoretically recondite as mental representations. Nonetheless, it is hard pressed plausibly and simply to explain what is semantically common to different thoughts about the same thing. The supposed monadic properties of thinking are, on their face, no more mutually similar than either is to any other property of thinking: it is, after all, only an orthographic accident, and totally without significance, that the predicates for the first two properties have portions of their spelling in common. Thus, unless Adverbialism allows for internally complex properties ~ in which case it seems to have no metaphysical advantage over its relational rival ~ it seems unable to meet the psychological facts.

A semantic theory relates pieces of language to pieces of the world. We use language to talk about the world, and to express our thoughts, which are also about the world. (The ‘aboutness’ of thought is often called ‘intentionality’.) The relationship between talk, thought and the world is explored in the philosophy of language, the philosophy of mind and metaphysics.

Thus, for example, we might try to give a philosophical account of some distinctions in reality ~ say, between objects and properties, or between particulars and universals ~ in terms of differences among words or in terms of differences in the realm of thought, provided that we already had some understanding of those linguistic or mental differences. Or, going the other way about, we might assume some account of the metaphysical differences, and use it to explain the corresponding differences in the domains of talk or thought. There are also important questions of priority between the philosophy of language and the philosophy of mind. Indeed, any strategy for elucidating the concept of linguistic meaning will inevitably depend on our general view about priority as between talk and thought.

Suppose that we accept the intentionality of thought: does this remove all force from the argument? It does not, if one accepts some connection between what we can conceive and what we can imagine. Whenever we imagine an object, we imagine what it would be like to perceive it from a certain point of view: any attempt to conceive of the object as it is, independently of some possible perceptual perspective, would have to be more abstract than a concrete imagining. As a physical object is an empirical object, with empirical properties, it might seem that there was something peculiar about the idea that it possesses a mode of existence that could not be represented imagistically, that is, in a form in which those empirical properties are actualized.

The natural reply to this is that a good perspective on an object enables one to form a conception of the object as it is. This is most simply represented by a clear view of a flat surface, which enables one to see it not merely from a perspective but as it is in its own plane. Our visual perception comes to be structured in three dimensions, so its having a perspective does not force us into a merely abstract conception of the object in its own space, as it would do if vision were two-dimensional and distance were only inferred.

The most significant feature of thought is its ‘intentionality’ or ‘content’: in thinking, one thinks about certain things, and one thinks certain things of those things ~ one entertains propositions that stand for states of affairs. Nearly all the interesting properties of thoughts depend upon their content: their being coherent or incoherent, disturbing or reassuring, revolutionary or banal, connected logically or illogically to other thoughts ~ all of these turn upon the intentionality of thought. So we are naturally curious about the nature of content: we want to understand what makes it possible, what constitutes it, what it stems from. To have a theory of thought is to have a theory of its content.

Four issues have dominated recent thinking about the content of thought. Each may be construed as a question about what thought content depends upon, or does not depend upon. These potential dependencies concern: (1) the world outside the thinker himself, (2) language, (3) logical truth, (4) consciousness. In each case the question is whether intentionality is essentially or accidentally related to the items mentioned: does it exist, that is, only by courtesy of the dependence of thought on the said items? And this question determines what the intrinsic nature of thought is.

Thoughts are obviously about things in the world, but it is a further question whether they could exist and have the content they do whether or not their putative objects themselves exist. Is what I think intrinsically dependent upon the world in which I happen to think it? This question was given impetus and definition by a thought experiment due to the American philosopher Hilary Putnam (1926- ), concerning a planet called Twin-Earth. On Twin-Earth there live thinkers who are duplicates of us in all internal respects, but whose surrounding environment contains different kinds of natural objects, and so forth. The key point is that since it is not possible to individuate natural kinds solely by reference to the way they strike the people who think about them, thinking about such things cannot be a function simply of internal properties of the thinker. Thought content is relational in nature: it is fixed by external facts as they bear upon the thinker. Much the same point can be made by considering repeated demonstrative reference to distinct particular objects: what I refer to when I say ‘that bomb’, of different bombs, depends upon the particular bomb in front of me and cannot be deduced from what is going on inside me. Context contributes to content.

Inspired by such examples, many philosophers have adopted an ‘externalist’ view of thought content: thoughts are not autonomous states of the individual, capable of transcending the contingent facts of the surrounding world. One is therefore not free to think whatever one likes, as it were, whether or not the world beyond cooperates in containing suitable referents for those thoughts. And this conclusion has generated a number of consequential questions. Can we know our thoughts with special authority, given that they are thus hostage to external circumstances? How do thoughts cause other thoughts and behaviour, given that they are not identical with any internal states we are in?

To believe a proposition is to accept it as true, and it is relative to the objective of reaching truth that the rationalizing relations between contents are set for belief. They must be such that the truth of the premisses makes likely the truth of the conclusion, making clear this justificatory link. Paradigmatically, the psychological states that provide an agent with reasons are intentional states, individuated in terms of their propositional content; hence the traditional emphasis that the reason-giving relation is a logical or conceptual link, which raises the question of how such a link is to be characterized for intentional states other than beliefs.

We might say that the objective of desires is their own satisfaction. In the case of reasons for acting, therefore, we are looking for a relationship between the content of the agent’s intentional states and the description of the action which shows that performing an action of that kind has some chance of promoting the desired goals. The presence of a reason for believing or acting does not necessarily make it rational for an agent to believe or act in that way. From the agent’s point of view overall, she may have other beliefs which provide conflicting evidence, or conflicting desires. To establish what it is rational to believe or do in general, we need to take into account principles for weighing competing beliefs and desires. Of course, we do not always believe what is rational or act in the light of what we judge best, as cases of self-deception and weakness of will show. However, a minimum of rationality must be present in the pattern of a person’s beliefs, desires, intentions, and actions before they can be regarded as an agent with intentional states at all.

Nonetheless, for some writers the justificatory and explanatory tasks of reason-giving simply coincide. The manifestation of rationality is seen as sufficient to explain beliefs or acts quite independently of questions regarding causal origin. Within this model, the greater the degree of rationality we can detect, the more intelligible the sequence will be; where there is a breakdown in rationality, as in cases of weakness of will or self-deception, there is a corresponding breakdown in our ability to make the action or belief intelligible.

Once again, the justificatory and explanatory roles of reason cannot simply be equated. To do so fails to distinguish cases where I have reasons for a belief ~ say, evidence from which your innocence could be deduced ~ but nonetheless come to believe you are innocent because you have blue eyes. I may have intentional states that give me altruistic reasons for giving to charity, but nonetheless contribute out of a desire to earn someone’s good opinion. In both these cases, although my belief could be shown to be rational in the light of other beliefs, and my actions in the light of my altruistic states, neither of these rationalizing links could form part of a valid explanation of the phenomena concerned. Moreover, cases of weakness of will show that I can have sufficient reason for acting and yet fail to act, e.g., I continue to smoke although I judge it would be better to abstain. This suggests that the mere availability of reasoning, however good, in favour of an action cannot be sufficient to explain why it occurred.

The causal explanatory approach to reason-giving explanations also requires an account of the intentional content of our psychological states which makes it possible for such content to be doing such work. It also provides a motivation for the reduction of intentional characteristics to extensional ones, in an attempt to fit intentional causality into a fundamentally materialist world picture. The very nature of the reason-giving relation, however, can be seen to render such reductive projects unrealizable. This therefore leaves the task of relating intentional and non-intentional levels of description in such a way as to accommodate intentional causality, without either overdetermination or a miraculous coincidence of predictions from within distinct causally explanatory frameworks.

What has not been considered carefully enough, however, is the scope of the externalist thesis ~ whether it applies to all forms of thought, to all concepts. For unless this question can be answered affirmatively, we cannot rule out the possibility that thought in general depends on there being some thought that is purely internally determined, so that the externally fixed thoughts are a secondary phenomenon. What about thoughts concerning one’s present sensory experience, or logical thoughts, or ethical thoughts? Could there, indeed, be a thinker for whom internalism was generally correct? Is external individuation the rule or the exception? And might it take different forms in different cases?

Since words are also about things, it is natural to ask how their intentionality is connected to that of thoughts. Two views have been advocated: one view takes thought content to be self-subsisting relative to linguistic content, with the latter dependent upon the former; the other view takes thought content to be derivative upon linguistic content, so that there can be no thought without language. Thus arise controversies about whether animals really think, being non-speakers, or whether computers really use language, being non-thinkers. All such questions depend critically upon what one means by ‘language’. Some hold that spoken language is unnecessary for thought, but that there must be an inner language in order for thought to be possible; while others reject the very idea of an inner language, preferring to suspend thought from outer speech. However, it is not entirely clear what it amounts to, to assert (or deny) that there is an inner language of thought. If it means merely that concepts (thought-constituents) are structured in such a way as to be isomorphic with spoken language, then the claim is trivially true, given one natural assumption. But if it means that concepts just are ‘syntactic’ items orchestrated into strings of the same, then the claim is acceptable only insofar as syntax is an adequate basis for meaning ~ which, on the face of it, it is not. Concepts no doubt have combinatorial powers comparable to those of words, but the question is whether anything else can plausibly be meant by the hypothesis of an inner language.

On the other hand, it appears undeniable that spoken language does not have autonomous intentionality, but instead derives its meaning from the thoughts of speakers ~ though language may augment one’s conceptual capacities. So thought cannot post-date spoken language. The truth seems to be that in human psychology speech and thought are interdependent in many ways, but that there is no conceptual necessity about this. The only ‘language’ on which thought essentially depends is the structured system of concepts itself: thought indeed depends upon there being isolable concepts that can join with others to produce complex propositions. But this is merely to draw attention to a property any system of concepts must have: it is not to say what concepts are or how they succeed in moving between thoughts as they do. All the same, appeals to language at this point are apt to founder on circularity, since words take on the powers of concepts only insofar as they express them. Thus there seems little philosophical illumination to be got from making thought depend upon language.

There remains the question whether intentionality is dependent upon consciousness for its very existence, and if so, why. Could our thoughts have the very content they now have if we were not conscious beings at all? Unfortunately, it is difficult to see how to mount an argument in either direction. On the one hand, it can hardly be an accident that our thoughts are conscious and that their content is reflected in the intrinsic condition of our states of consciousness: it is not as if consciousness leaves off where thought content begins ~ as it does with, say, the neural basis of thought. Yet, on the other hand, it is by no means clear what it is about consciousness that links it to intentionality in this way. Much of the trouble stems from our exceedingly poor understanding of the nature of consciousness in general. Just as we cannot see how consciousness could arise from brain tissue (the mind-body problem), so we fail to grasp the manner in which conscious states bear meaning. Perhaps content is fixed by extra-conscious properties and relations and only subsequently shows up in consciousness, as various naturalistic reductive accounts would suggest; or perhaps consciousness itself plays a more enabling role, allowing meaning to come into the world, hard as this may be to penetrate. In some ways the question is analogous to one about the properties of pain: is the aversive property of pain ~ its causing avoidance behaviour and so forth ~ independent of the conscious state of feeling pain, being possibly present without the feeling, or could pain have its aversive function only in virtue of the conscious feeling?
This is part of the more general question of the epiphenomenal character of conscious awareness: is conscious awareness just a dispensable accompaniment of some mental feature ~ such as content or causal power ~ or is consciousness structurally involved in the very determination of the feature? It is only too easy to feel pulled in both directions on this question, neither alternative being utterly felicitous. Some theorists suspect that our uncertainty over such questions stems from a constitutional limitation of human understanding. We may simply be unable to develop the necessary theoretical tools with which to provide answers to these questions: so we may not in principle be able to make any progress with the issue of whether thought depends upon consciousness and why. Certainly our present understanding falls far short of providing us with any clear route into the question.

Another relevant question concerns the relation between mind and physical reality. Well-established schools of thought give starkly opposing answers to this question. The French mathematician and founding father of modern philosophy, René Descartes (1596-1650), insisted that mental phenomena are non-physical in nature. This view seems inviting because mental phenomena are indisputably different from everything else. Moreover, it is safe to assume that all phenomena that are not mental are physical in nature. So it may seem that the best way to explain how the mental differs from everything else is to hypothesize that mind is non-physical in nature.

But that hypothesis is not the only way to explain how mind differs from everything else. It is also possible that mental phenomena are instead just a special case of physical phenomena: they would then have properties that no other physical phenomena have, but would still themselves be physical. This explanation requires that we specify what is special about mental phenomena, what makes them different from everything else. But we must specify that in any case, just in order to understand the nature of the mental. Characterizing mental phenomena negatively, simply as not being physical, does little to help us understand what it is for something to be mental.

In Descartes’ time, the issue between materialists and their opponents was framed in terms of substances. Materialists such as the English philosopher, mathematician and linguist Thomas Hobbes (1588-1679) and the French philosopher and mathematician Pierre Gassendi (1592-1655) maintained that people are physical systems with abilities that no other physical systems have; people, therefore, are special kinds of physical substance. Descartes’ dualism, by contrast, claimed that people consist of two distinct substances that interact causally: a physical body and a non-physical, unextended substance. The traditional conception of substance, however, introduces extraneous issues, which have no bearing on whether mental phenomena are physical or non-physical. And in any case, even those who agree with Descartes that the mental is non-physical have today given up the idea that there are non-physical substances; they hold instead that people are physical organisms with two distinctive kinds of states: physical states such as standing and walking, and mental states such as thinking and feeling.

Accordingly, the issue of whether the mental is physical or non-physical is no longer cast in terms of whether people, and other creatures that have the ability to think and sense, are physical or non-physical substances. Rather, the question is put in terms of whether the distinctively mental states of thinking, sensing, and feeling are physical states or non-physical states. The identity theory is the materialist thesis that every mental state is physical, that is, that every mental state is identical with some physical state.

If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural states. Our concepts of mental states such as thinking, sensing, and feeling are, of course, different from our concepts of neural states, of whatever sort. But that is no problem for the identity theory. As the Cambridge-born Australian philosopher J.J.C. Smart (1920- ), who first argued for the identity theory, emphasized, the requisite identities do not depend on our concepts of mental states or the meanings of mental terms: for ‘a’ to be identical with ‘b’, ‘a’ and ‘b’ must have the same properties, but the terms ‘a’ and ‘b’ need not mean the same. We may agree with Joseph Butler that everything is what it is and not another thing; the difficulty is to know when we have one thing and not two. A rule for telling this is a principle of ‘individuation’, or a criterion of identity, for things of the kind in question. In logic, identity may be introduced as a primitive relational expression or defined via the identity of ‘indiscernibles’, sometimes known as ‘Leibniz’s law’.
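The principle just mentioned is standardly displayed in second-order notation; the following is the conventional formulation, supplied here for illustration, of the indiscernibility of identicals together with its converse, the identity of indiscernibles:

```latex
a = b \;\rightarrow\; \forall F\,(Fa \leftrightarrow Fb)
\qquad \text{(indiscernibility of identicals)}

\forall F\,(Fa \leftrightarrow Fb) \;\rightarrow\; a = b
\qquad \text{(identity of indiscernibles)}
```

It is the first direction that bears on the identity theory: if a mental state is identical with a neural state, the two must share all their properties, whatever the difference between our concepts of them.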

Contemporary philosophy of mind, following cognitive science, uses the term ‘representation’ to mean just about anything that can be semantically evaluated. Thus, representations may be said to be true, to refer, to be accurate, and so forth. Representation thus conceived comes in many varieties. The most familiar are pictures, three-dimensional models (e.g., statues, scale models), linguistic text (including mathematical formulas) and various hybrids of these such as diagrams, maps, graphs and tables. It is an open question in cognitive science whether mental representation, which is our real topic, falls within any of these or any other familiar provinces.

It is uncontroversial in contemporary cognitive science that cognitive processes are processes that manipulate representations ~ the representational theory of cognition and thought. This idea seems nearly inevitable. What makes the difference between processes that are cognitive ~ solving a problem, say ~ and those that are not ~ a patellar reflex, for example ~ is just that cognitive processes are epistemically assessable. A solution procedure can be justified or correct, as a reflex cannot. Since only things with content can be epistemically assessed, processes appear to count as cognitive only insofar as they implicate representations.

It is tempting to think that thoughts are the mind’s representations: are not thoughts just those mental states that have semantic content? This is, no doubt, harmless enough, provided we keep in mind that cognitive science may attribute to thoughts properties and contents that are foreign to common sense. First, most of the representations hypothesized by cognitive science do not correspond to anything common sense would recognize as thoughts. Standard psycholinguistic theory, for instance, hypothesizes the construction of representations of the syntactic structures of the utterances one hears and understands. Yet we are not aware of, and non-specialists do not even understand, the structures represented. Thus, cognitive science may attribute thoughts where common sense would not. Second, cognitive science may find it useful to individuate thoughts in ways foreign to common sense.

Concepts, however, figure in mental states having content: a belief may have the content that I will catch the train, or a hope may have the content that the prime minister will resign. A concept is something which is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something ~ a particular object, or property, or relation, or another entity.

Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of Julie Smith, or as the person located in a certain room now. More generally, concepts ‘c’ and ‘d’ are distinct if a thinker can rationally believe that ‘c’ is such-and-such without believing that ‘d’ is such-and-such. As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by ‘that . . .’ clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.

A fundamental question for philosophy is this: what individuates a given concept, that is, what makes it the one it is, rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question (Schiffer, 1987). An alternative approach, favoured by most, addresses the question by starting from the idea that a concept is individuated by the condition which must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept ‘and’ be individuated by this condition: it is the unique concept ‘C’ to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premisses ‘A’ and ‘B’, ‘A C B’ can be inferred; and from any premiss ‘A C B’, each of ‘A’ and ‘B’ can be inferred. Again, a relatively observational concept such as ‘round’ can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those judgements that are. A statement which individuates a concept by saying what is required for a thinker to possess it can be described as giving the ‘possession condition’ for the concept.
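The two inference forms cited in this possession condition are just the standard natural-deduction rules for conjunction; written schematically (a sketch in conventional logical notation, not drawn from the text itself):

```latex
% Introduction: from premisses A and B, infer A C B
\frac{A \qquad B}{A \; C \; B} \;(\text{$C$-introduction})
\qquad
% Elimination: from premiss A C B, infer each conjunct
\frac{A \; C \; B}{A} \quad \frac{A \; C \; B}{B} \;(\text{$C$-elimination})
```

The possession condition then says that ‘and’ is the unique concept C such that a thinker must find these transitions primitively compelling, without basing them on any further inference or information.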

A possession condition for a particular concept may actually make use of that concept; the possession condition for ‘and’ does not. We can also expect to use relatively observational concepts in specifying the kinds of experience which have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account which was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker’s mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: it is not possible to master any one of the members of the family without mastering the others. Two families which plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of the numerical quantifiers (there are 0 so-and-so’s, there is 1 so-and-so, . . .); and the family consisting of the concepts ‘belief’ and ‘desire’. Such families have come to be known as ‘local holisms’. A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such a condition involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated. The possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.
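The numerical quantifiers in the first family can be made explicit in first-order logic; one standard way of defining the first two (a sketch, not part of the original text):

```latex
% "There are 0 Fs": nothing is F
\exists_{0}x\,Fx \;\equiv\; \neg\exists x\,Fx
% "There is exactly 1 F": something is F, and anything F is identical to it
\exists_{1}x\,Fx \;\equiv\; \exists x\,\bigl(Fx \wedge \forall y\,(Fy \rightarrow y = x)\bigr)
```

Grasping these quantifier concepts plausibly goes hand in hand with grasping the number concepts 0, 1, 2, . . . , which is what makes the family a local holism.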

A possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker’s perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject’s environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the thinker’s environmental relations. Burge (1979) has also argued from intuitions about particular examples that, even though the thinker’s non-environmental properties and relations remain constant, the conceptual content of his mental states can vary if the thinker’s social environment is varied. A possession condition which properly individuates such a concept must take into account his linguistic relations.

Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a ‘correctness condition’ for that judgement, a condition which is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker’s reasons for making judgements. A thinker’s visual perception can give him good reason for judging ‘That man is bald’, even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object, or property, or function, which makes the practices of judgement and inference in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker’s previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue: it would allow us to say how the correctness condition is determined for a judgement in which the concept is applied to newly encountered objects. The judgement is correct if the new object has the property which in fact makes the judgemental practices in the possession condition yield true judgements, or truth-preserving inferences.

Innate ideas have been variously defined by philosophers: either as ideas consciously present to the mind before sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at any particular time, e.g., as babies (the dispositional sense).

Understood in either way, they were invoked to account for our recognition of certain truths, such as those of mathematics, without recourse to experiential verification, or to justify certain moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include ‘murder is wrong’ or ‘God exists’.

One difficulty with the doctrine is that it is sometimes formulated as one about concepts or ideas which are held to be innate, and at other times as one about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: our innate idea of God, for example, is taken as a source for the meaning of the word God. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. However, this clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties, the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept has in recent decades been revitalized through its employment in Noam Chomsky’s influential account of the mind’s linguistic capabilities.

The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize truths which cannot be justified solely on the basis of an appeal to sense experience. Thus Plato argued that recognition of mathematical truths, for example, could only be explained on the assumption of some form of recollection. Since there was no plausible post-natal source, the recollection must refer to a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas involved the thought that there were important truths innate in human beings, and that the senses hindered their proper apprehension.

The ascetic implications of the doctrine were important in Christian philosophy throughout the Middle Ages, and the doctrine featured powerfully in scholastic teaching until its displacement by Locke’s philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of God, for example, and our coming to recognize that God must necessarily exist, are, Descartes held, logically independent of sense experience. In England the Cambridge Platonists, such as Henry More and Ralph Cudworth, lent the doctrine considerable support.

Locke’s rejection of innate ideas and his alternative empiricist account were powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.

The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing all necessary truths as analytic. Kant’s refinement of the classification of propositions with the fourfold distinction analytic/synthetic and a priori/a posteriori did nothing to encourage a return to the doctrine of innate ideas, which slipped from view. The doctrine may fruitfully be understood as the product of a confusion between explaining the genesis of ideas or concepts and providing a basis for regarding some propositions as necessarily true.

Nevertheless, according to Kant, our knowledge arises from two fundamentally different faculties of the mind, sensibility and understanding. He criticized his predecessors for running these faculties together: Leibniz for treating sensing as a confused mode of understanding, and Locke for treating understanding as an abstracted mode of sensing. Kant held that each of the faculties operates with its own distinctive type of mental representation. Concepts, the instruments of the understanding, are mental representations that apply potentially to many things in virtue of their possession of a common feature. Intuitions, the instruments of sensibility, are representations that refer to just one thing and to that thing directly (a role played in Russell’s philosophy by ‘acquaintance’). Through intuitions, objects are given to us, Kant said; ‘through concepts they are thought’.

Nonetheless, it is a famous Kantian thesis that knowledge is yielded neither by intuitions nor by concepts alone, but only by the two in conjunction. ‘Thoughts without content are empty’, he says in an often quoted remark, and ‘intuitions without concepts are blind’. Exactly what Kant means by the remark is a debated question, answered in different ways by scholars who bring different elements of Kant’s text to bear on it. A minimal reading is that it is only propositionally structured knowledge that requires the collaboration of intuition and concept: this view allows that intuitions without concepts constitute some kind of non-judgemental awareness. A stronger reading is that it is reference or intentionality that depends on intuition and concept together, so that the blindness of intuition without concept is its failure to refer to an object. A more radical view yet is that intuitions without concepts are indeterminate, a mere blur, perhaps nothing at all. This last interpretation, though admittedly suggested by some things Kant says, is at odds with his official view about the separation of the faculties.

‘Content’ has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content, and a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of non-linguistic representations than is known about characterizing linguistic representations. ‘Content’ is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have: a representation’s content is just whatever it is that underwrites its semantic evaluation.

According to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurately if we substitute for belief some closely related attitude; for instance, several philosophers would prefer to say that knowledge entails psychological certainty. Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief, or a facsimile, are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato (in the ‘Republic’) in view of his claim that knowledge is infallible while belief or opinion is fallible. Nonetheless, this claim would not support the thesis: belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones cites linguistic evidence to back up the incompatibility thesis. He notes that people often say ‘I don’t believe she is guilty, I know she is’, which suggests that belief rules out knowledge. But such remarks are naturally heard as ‘I don’t just believe she is guilty’, where ‘just’ makes it especially clear that the speaker is signalling that she has something more salient than mere belief, namely knowledge, not that she has something inconsistent with belief. Compare: ‘You did not just hurt him, you killed him’.

H.A. Prichard (1966) offers a defence of the incompatibility thesis which hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and on the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, Prichard gives us no good reason to grant that states of belief are never ones involving confidence. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley’s version, which deals with psychological certainty rather than belief, holds that knowledge can exist in the absence of confidence about the item known, although knowledge might also be accompanied by confidence. Woozley remarks that the test of whether I know something is ‘what I can do, where what I can do may include answering questions’. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, ‘I am unsure whether my answer is true; still, I know it is correct’. Woozley explains this tension using a distinction between conditions under which we are justified in making a claim, such as a claim to know something, and conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, it would nonetheless be inappropriate for me to claim to know it unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley’s defence of the separability thesis. In Radford’s view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example: Walter has forgotten that he learned some English history years earlier, and yet he is able to give several correct responses to questions such as ‘When did the Battle of Hastings occur?’ Since he has forgotten that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066, he would deny having the belief that the Battle of Hastings took place in 1066; a fortiori he would deny being sure, or having the right to be sure, that 1066 was the correct date. Radford would nonetheless insist that Walter does know when the Battle occurred, since clearly he remembered the correct date. Radford admits that it would be inappropriate for Walter to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is appropriate to claim knowledge: when we claim knowledge, we ought at least to believe that we have the knowledge we claim, or else our behaviour is ‘intentionally misleading’.

Those who agree with Radford’s defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Walter lacks beliefs about English history is plausible on this Cartesian picture, since Walter does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting the Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain’s (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Walter gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1973) takes a different tack against Radford. Armstrong will grant Radford the point that Walter knows that the Battle of Hastings took place in 1066; however, he suggests that Walter believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible, but no more than just possible, with the belief that such and such is not the case. What is more, Armstrong insists, Walter also believes that the Battle did occur in 1066. After all, had Walter been mistaught that the Battle occurred in 1066, and had he forgotten being ‘taught’ this and subsequently ‘guessed’ that it took place in 1066, we would surely describe the situation as one in which Walter’s false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford’s original case as one in which Walter’s true belief became unconscious but persisted long enough to cause his guess. Thus, while Walter consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

Armstrong’s response to Radford was to reject Radford’s claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him. If Armstrong is correct in suggesting that Walter believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Walter knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to liken the examinee case to examples of ignorance given in recent attacks on ‘externalist’ accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): for no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York, even though she has every reason to believe that the President is in Washington, D.C. In fact, Samantha is a completely reliable clairvoyant, and she arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha’s belief is completely irrational: she is not justified in thinking what she does. If so, then she does not know where the President is. But Radford’s examinee is little different. Suppose that Walter’s memory had been sufficiently powerful to produce the relevant belief, so that he does not lack the belief which Radford denies him. Even so, as Radford says, Walter has every reason to suppose that his response is merely guesswork, and so he has every reason to consider his belief false. His belief would be an irrational one, and therefore one about whose truth Walter would be ignorant.

The externalism/internalism distinction has been mainly applied to theories of epistemic justification: a theory is internalist if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, and externalist if it allows that some of those factors need not be so accessible. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any explicit explication. The distinction has also been applied in a closely related way to accounts of knowledge, and in a rather different way to accounts of belief and thought content.

Perhaps the clearest example of an internalist position would be a foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible.

Also, on this way of drawing the distinction, a hybrid view on which some of the factors required for justification must be cognitively accessible while others need not be, and in general will not be, would count as an externalist view. Notice, too, that a view which is externalist in relation to a strong version of internalism, by not requiring that the believer actually be aware of all justifying factors, could still be internalist in relation to a weaker version, by requiring that he at least be capable of becoming aware of them.

The most prominent recent externalist views have been versions of reliabilism, whose main requirement for justification is roughly that the belief be produced in a way or via a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena concerning natural kind terms, indexicals, and so forth, that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem, at least, to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment, e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, and so forth, not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content is dependent on external factors concerning the environment, then knowledge of content should depend on knowledge of these factors, which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist might insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else: but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

If the world is radically different from the way it appears, to the point that apparent epistemic vices are actually truth-conducive, presumably this should not make us retrospectively term such vices ‘virtues’, even if they are and have always been truth-conducive. It seems better, then, to take the epistemic virtues to be those qualities which a truth-desiring person would want to have. For even if, unbeknown to us, some wild sceptical possibility is realized, this would not affect our desires (it being, again, unknown). Such a characterization, moreover, would seem to fit the virtues in our catalogue. Almost by definition, the truth-desiring person would want to be epistemically conscientious. And, given what seem to be the conditions of human life and knowledge, the truth-desiring person will also want to have the previously cited virtues of impartiality and intellectual courage.

Are, though, truth and the avoidance of error rich enough desires for the epistemically virtuous? Arguably not. For one thing, the virtuous inquirer aims not so much at having true beliefs as at discovering truths, a very different notion. Perpetual reading of a good encyclopaedia will expand my bank of true beliefs without markedly increasing humankind’s basic stock of truths. For Aristotle, too, one notes that true belief is not, as such, even a concern: the concern is the discovery of scientific or philosophical truth. But, of course, the mere expansion of our bank of truths, even of scientific and philosophical truths, is not itself the goal of inquiry. Rather, one looks for new truths of an appropriate kind: rich, deep, explanatorily fertile, say. By this reckoning, then, the epistemically virtuous person seeks at least three related but separate ends: to discover new truths, to increase one’s explanatory understanding, and to have true rather than false beliefs.

Another important area of concern for epistemologists is the relation between epistemic virtue and epistemic justification. Obviously, an epistemically virtuous person’s beliefs must themselves, I take it, be virtuously formed. But is a virtuously formed belief automatically a justified one? I would hold that if a belief is virtuously formed, this fully justifies that person in having it; however, the belief itself may lack adequate justification, as the evidence for it may be, through no fault of this person, still inadequate. Different philosophers, however, appear to have different intuitions on this point.

Hegel’s theory of justification contains both ‘externalist’ and ‘coherentist’ elements. He recognizes that some justification is provided by percepts and beliefs being generated reliably by our interaction with the environment. But Hegel contends that full justification additionally requires a self-conscious, reflective comprehension of one’s beliefs and experiences which integrates them into a systematic conceptual scheme, one which provides an account of them that is both coherent and reflexively self-consistent.

Hegel contends that the corrigibility of conceptual categories is a social phenomenon. Our partial ignorance about the world can be revealed and corrected because one and the same claim or principle can be applied, asserted and assessed by different people in the same context, or by the same person in different contexts. Hegel’s theory of justification requires that an account be shown to be adequate to its domain and to be superior to its alternatives. In this regard, Hegel is a fallibilist, according to whom justification is provisional and ineluctably historical, since it occurs against the background of less adequate alternative views.

Meanwhile, one important difference between the naturalistic approach and more traditional ones becomes plain when the two are applied to sceptical questions. On the classical view, if we are to explain how knowledge is possible, it is illegitimate to make use of the resources of science; this would simply beg the question against the sceptic by making use of the very knowledge which he calls into question. Thus, Descartes’ attempt to answer the sceptic begins by rejecting all those beliefs about which any doubt is possible: Descartes must respond to the sceptic from a starting place which includes no beliefs at all. Naturalistic epistemologists, however, understand the demand to explain the possibility of knowledge differently. As Quine argues, sceptical questions arise from within science. It is precisely our success in understanding the world, and thus in seeing that appearance and reality may differ, that raises the sceptical question in the first place. We may thus legitimately use the resources of science to answer the question which science itself has raised. The question of how knowledge is possible should thus be construed as an empirical question: it is a question about how creatures such as we (given what our best current scientific theories tell us we are like) may come to have accurate beliefs about the world (given what our best current scientific theories tell us the world is like). Quine suggests that the Darwinian account of the origin of species gives a very general explanation of why it is that we should be well adapted to getting true beliefs about our environment, while an examination of human psychology will fill in the details of such an account. Although Quine himself does not suggest it, investigations in the sociology of knowledge are obviously relevant as well.

This approach to sceptical questions clearly makes them quite tractable, and its proponents see this, understandably, as an important advantage of the naturalistic approach. It is in part for this reason that current work in psychology and sociology is under such close scrutiny by many epistemologists. By contrast, detractors of the naturalistic approach argue that this way of dealing with sceptical questions simply bypasses the very questions with which philosophers have long dealt. Far from answering the traditional sceptical question, it is argued, the naturalistic approach merely changes the topic. Debates between naturalistic epistemologists and their critics thus frequently focus on whether this new way of doing epistemology adequately answers, transforms or simply ignores the questions which others see as central to epistemological inquiry. Some see the naturalistic approach as an attempt to abandon the philosophical study of knowledge entirely.

Our conscious states, according to Franz Brentano (1838-1917), are all objects of ‘inner perception’. Every such state is such that, for the person who is in it, it is evident to that person that he or she is in that state. Yet this inner perception is not a separate act of perception directed upon the conscious state, wherefore the doctrine does not lead to an infinite regress.

Brentano holds that there are two types of conscious state: those that are ‘physical’ and those that are ‘intentional’. A ‘physical’, or sensory, state is a sensation or sense-impression, a qualitative individual composed of parts that are spatially related to each other. ‘Intentional’ states, e.g., believing, considering, hoping, desiring, are characterized by the facts that (1) they are ‘directed upon objects’, (2) the objects upon which they are directed need not exist, e.g., we may fear things that do not exist, and (3) such states are not sensory. There is no sensation, no sensory individual, that can be identified with any particular intentional attitude.

Following Leibniz, Brentano distinguishes two types of certainty: the certainty we can have with respect to the existence of our conscious states, and the a priori certainty that may be directed upon necessary truths. These two types of certainty may be combined in a significant way. At a given moment I may be certain, on the basis of inner perception, that there is believing, desiring, hoping and fearing, and I may also be certain a priori that there cannot be believing, desiring, hoping and fearing unless there is a ‘substance’ that believes, desires, hopes and fears. In such a case, it will be certain for me, as I will perceive, that there is a substance that believes, desires, hopes and fears. It is also axiomatic, Brentano says, that if one is thus certain that a substance of a certain sort exists, then one is identical with that substance.

Brentano makes use of only two purely epistemic concepts: that of being ‘certain’, or ‘evident’, and that of being ‘probable’. If a given hypothesis is probable, in the epistemic sense, for a particular person, then that person can be certain that the hypothesis is probable for him. Making use of the principles of probability, one may calculate the probability that a given hypothesis has on one’s evidence-base.

Nonetheless, if our evidence-base is composed only of necessary truths and the facts of inner perception, then it is difficult to see how it could provide justification for any contingent truths other than those that pertain to states of consciousness. How could such an evidence-base even lend ‘probability’ to the hypothesis that there is a world of external physical things?

What, then, is the problem of the external world? Certainly it is not whether there is an external world; that much is taken for granted. Instead, the problem is an epistemological one which, in rough approximation, can be formulated by asking whether, and if so how, a person gains knowledge of the external world. Put this way, however, the problem seems to admit of an easy solution: there is knowledge of the external world, which persons acquire primarily by perceiving objects and events which make up the external world.

An epistemic argument against this easy solution concedes the point but holds that knowledge of objects in the external world seems to be dependent on other knowledge, and so would not qualify as immediate and non-inferential. It is claimed that I would not have perceptual knowledge that there is a brown and rectangular table before me unless I knew that something then appeared brown and rectangular. Hence, knowledge of the table is dependent upon knowledge of how it appears. Alternatively expressed, if there is knowledge of the table at all, it is indirect knowledge, secured only if the proposition about the table may be inferred from a proposition about appearances. If so, epistemological direct realism is false.

The significance of this emerges when one asks what evidence or consideration leads to the problem of the external world and to the rejection of epistemological direct realism. That is, the crucial question is whether any part of the argument from illusion really forces us to abandon perceptual direct realism. There is reason to think the answer is ‘no’. We may point out that a key premise in the relativity argument links how something appears with direct perception: the fact that an object appears a certain way is supposed to entail that one directly perceives something which is that way. Certainly we do not think that the proposition expressed by ‘The book appears worn and dusty and more than two hundred years old’ entails that the observer directly perceives something which is worn and dusty and more than two hundred years old (Chisholm, 1964). And there are countless other examples like this one.

Proponents of the argument from illusion might reply that the inference they favour works only for certain adjectives, specifically for adjectives referring to non-relational sensible qualities such as colour, taste, shape, and the like. Such a move, however, requires an argument which shows why the inference works in these restricted cases and fails in all others. No such argument has ever been provided, and it is difficult to see what it might be.

If the argument from illusion is defused, the major threat facing perceptual direct realism will have been removed, so that there will no longer be any real motivation for the problem of the external world. Of course, even if perceptual direct realism is reinstated, this does not settle whether the argument from illusion may suffice to refute other forms of perceptual realism; that problem might arise even for one who accepts perceptual direct realism. However, there is reason to be suspicious of the alleged dependence of knowledge of objects on knowledge of appearances. What is not clear is whether the dependence is ‘epistemic’ or ‘semantic’. It is semantic if, in order to understand what it is to see something blue, one must also understand what it is for something to look blue. Yet this may be true even when the belief that one is seeing something blue is not epistemically dependent on, or based upon, the belief that something looks blue. Merely claiming that there is a dependence relation does not discriminate between epistemic and semantic dependence. Moreover, there is reason to think the dependence is not epistemic: in general, observers rarely have beliefs about how objects appear, but this fact does not impugn their knowledge that they are seeing, e.g., blue objects.

The foregoing criticism of the case for the problem of the external world is narrow, in the sense that it focuses only on individual elements within the argument. A broader criticism targets the assumptions on which the argument seems to rest. Those assumptions are foundationalist in character: knowledge and justified belief are divided into the basic, immediate and non-inferential cases, and the non-basic, inferential knowledge and justified belief which is supported by the basic. However, though foundationalism was widely assumed when the problem of the external world was given currency by Descartes and the classical empiricists, it has been repeatedly challenged, and there are now in place well-worked alternative accounts of knowledge and justified belief, some of which seem as plausible as the most tenable version of foundationalism. So we have some good reason to suspect, contrary to what one might have initially thought, that the problem of the external world just does not arise, at least not in the forms in which it has usually been presented.

The possibility of asking and answering such questions is closely bound up with the requirement that an object be a unified and coherent segment of the perceived array, one that can be perceived as having certain properties and as standing in certain relations to other objects (such as the property of having a determinate shape). One way of putting the relevant distinction, derived ultimately from Alexius Meinong, is that the intentional attitudes we ordinarily call ‘perceiving’ and ‘remembering’ provide ‘presumptive evidence’, that is to say, prima facie evidence, for their intentional objects. For example, believing that one is looking at a group of people tends to justify the belief that there is a group of people that one is looking at. How, then, are we to distinguish merely ‘prima facie’ justification from the real thing? This type of solution would seem to call for principles that specify, by reference to further facts of inner perception, the conditions under which merely prima facie justification becomes real justification.

Those who speak of prima facie reasons may do so in either of two ways: (1) we have a prima facie duty to keep our promises if every action of promise-keeping is to that extent right, that is, if all actions are the better for being acts of promise-keeping; and (2) an action may be a prima facie duty, in virtue of some property it has, even though it is wrong overall, and so not a ‘duty proper’.

However, what is required is not simply a description of developmental progress, but an account of how one’s thoughts come to be articulated. Developmental considerations do circumscribe the form that such an account will take, but they cannot be conclusive until we have looked more closely at the bases on which the relevant contents are distinguished. They strongly suggest that the move from implicit to explicit understanding involves a developing ability to do more than give purely reactive manifestations of the relevant representational abilities.

Logical positivism was ‘positivist’ in its adherence to the doctrine that science is the only form of knowledge and that there is nothing in the universe beyond what can in principle be scientifically known. It was ‘logical’ in its dependence on developments in logic and mathematics in the early years of the twentieth century, which were taken to reveal how a priori knowledge of necessary truths is compatible with a thoroughgoing empiricism.

The exclusiveness of a scientific world-view was to be secured by showing that everything beyond the reach of science is strictly, or ‘cognitively’, meaningless, in the sense of being incapable of truth or falsity, and so not a possible object of cognition. This required a criterion of meaningfulness, and it was found in the idea of empirical verification. A sentence is said to be cognitively meaningful if and only if it can be verified or falsified in experience. This is not meant to require that the sentence be conclusively verified or falsified, since universal scientific laws or hypotheses (which are supposed to pass the test) are not logically deducible from any amount of actually observed evidence. The criterion is accordingly to be understood to require only verifiability or falsifiability, in the sense of empirical evidence which would count either for or against the truth of the sentence in question, without having to imply it logically. Verification or confirmation need not be something that can be carried out by the person who entertains the sentence or hypothesis in question, or even by anyone at all at the stage of intellectual and technological development achieved at the time it is entertained: a sentence is cognitively meaningful if and only if it is in principle empirically verifiable or falsifiable.

Anything which does not fulfil this criterion is declared literally meaningless. There is no significant ‘cognitive’ question as to its truth or falsity: it is not an appropriate object of enquiry. Moral and aesthetic and other ‘evaluative’ sentences are held to be neither confirmable nor disconfirmable on empirical grounds, and so are cognitively meaningless. They are, at best, expressions of feeling or preference which are neither true nor false. Whatever is cognitively meaningful and therefore factual is value-free. The positivists claimed that many of the sentences of traditional philosophy, especially those in what they called ‘metaphysics’, also lack cognitive meaning and say nothing that could be true or false. But they did not spend much time trying to show this in detail about the philosophy of the past. They were more concerned with developing a theory of meaning and of knowledge adequate to the understanding, and perhaps even the improvement, of science.

Nevertheless, our beliefs are not only in bodies but also in persons, or selves, which continue to exist through time, and this belief too can be explained only by the operation of certain ‘principles of the imagination’. We never directly perceive anything we can call ourselves: the most we can be aware of in ourselves are our constantly changing momentary perceptions, not the mind or self which has them. For Hume (1711-76), there is nothing that really binds the different perceptions together; we are led into the ‘fiction’ that they form a unity only because of the way in which the thought of such series of perceptions works upon the mind. ‘The mind is a kind of theatre, where several perceptions successively make their appearance . . . There is properly no simplicity in it at one time, nor identity in different [times], whatever natural propensity we may have to imagine that simplicity and identity. The comparison of the theatre must not mislead us. They are the successive perceptions only, that constitute the mind.’

Leibniz held, in opposition to Descartes, that adult humans can have experiences of which they are unaware: experiences which affect what they do, but which are not brought to self-consciousness. Yet there are creatures, such as animals and infants, which completely lack the ability to reflect on their experiences and to become aware of them as experiences of theirs. The unity of a subject’s experience, which stems from his capacity to recognize all his experience as his, was dubbed by Kant the transcendental unity of ‘apperception’, apperception being Leibniz’s term for inner awareness or self-consciousness, in contrast with ‘perception’ or outer awareness. This unity is transcendental rather than empirical: it is presupposed in experience and cannot be derived from it. Kant used the need for this unity as the basis of his attempted refutation of scepticism about the external world. He argued that my experiences could only be united in one self-consciousness if at least some of them were experiences of a law-governed world of objects in space. Outer experience is thus a necessary condition of inner awareness.

Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a ‘correctness condition’ for that judgement, a condition which is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker’s reasons for making judgements: a thinker’s visual perception can give him good reason for judging a particular content. For a concept, we may consider how the referent of the concept is fixed from its possession condition, together with the world. One proposal is that the referent of the concept is that object, or property, or function, which makes the practices of judgement and inference mentioned in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker’s previous judgements that makes it the case that he is employing one concept rather than another, this proposal would have another virtue: it would allow us to say how the correctness condition is determined for a judgement in which the concept is applied to newly encountered objects. The judgement is correct if the new object has the property which in fact makes the judgemental practices mentioned in the possession condition yield true judgements, or truth-preserving inferences.

What is more, innate ideas have been variously defined by philosophers either as ideas consciously present to the mind prior to sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at any particular time, e.g., as babies (the dispositional sense).

Understood in either way, they were invoked to account for our recognition of certain truths, such as those of mathematics, without recourse to experiential verification, or to justify certain moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include ‘murder is wrong’ or ‘God exists’.

One difficulty with the doctrine is that it is sometimes formulated as one about concepts or ideas which are held to be innate, and at other times as one about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: our idea of God, for example, is taken as a source for the meaning of the word ‘God’. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. However, this clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties, the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept has in recent decades been revitalized through its employment in Noam Chomsky’s influential account of the mind’s linguistic capabilities.

The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize the truth of propositions that cannot be justified solely on the basis of an appeal to sense experience. Thus Plato argued that, for example, recognition of mathematical truths could only be explained on the assumption of some form of recollection. Since there was no plausible post-natal source, the recollection must refer to a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas supported the view that there were important truths innate in human beings, and that it was the senses which hindered their proper apprehension.

The ascetic implications of the doctrine were important in Christian philosophy throughout the Middle Ages, and the doctrine featured powerfully in scholastic teaching until its displacement by Locke’s philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of God, for example, and our coming to recognize that God must necessarily exist are, Descartes held, logically independent of sense experience. In England the Cambridge Platonists, such as Henry More and Ralph Cudworth, lent the doctrine considerable support.

Locke’s rejection of innate ideas and his alternative empiricist account were powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.

The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing all necessary truths as analytic. Kant’s refinement of the classification of propositions, with the fourfold distinction analytic/synthetic and a priori/a posteriori, did nothing to encourage a return to the doctrine of innate ideas, which slipped from view. The doctrine may fruitfully be understood as the product of a confusion between explaining the genesis of ideas or concepts and giving a basis for regarding some propositions as necessarily true.

Nevertheless, according to Kant, our knowledge arises from two fundamentally different faculties of the mind, sensibility and understanding. He criticized his predecessors for running these faculties together: Leibniz for treating sensing as a confused mode of understanding, and Locke for treating understanding as an abstracted mode of sensing. Kant held that each of the faculties operates with its own distinctive type of mental representation. Concepts, the instruments of the understanding, are mental representations that apply potentially to many things in virtue of their possession of a common feature. Intuitions, the instruments of sensibility, are representations that refer to just one thing, and to that thing directly (a similar role is played in Russell’s philosophy by ‘acquaintance’). Through intuitions, Kant said, objects are given to us; through concepts they are thought.

‘Thoughts without content are empty’, he says in an often-quoted remark, and ‘intuitions without concepts are blind’. Exactly what Kant means by the remark is a debated question, answered in different ways by scholars who bring different elements of Kant’s text to bear on it. A minimal reading is that it is only propositionally structured knowledge that requires the collaboration of intuition and concept: this view allows that intuitions without concepts constitute some kind of non-judgemental awareness. A stronger reading is that it is reference or intentionality that depends on intuition and concept together, so that the ‘blindness’ of intuition without concept is its failure to refer to an object. A more radical view yet is that intuitions without concepts are indeterminate, a mere blur, perhaps nothing at all. This last interpretation, though admittedly suggested by some things Kant says, is at odds with his official view about the separation of the faculties.

‘Content’ has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content, while a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of non-linguistic representations than is known about characterizing linguistic representations. ‘Content’ is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have: a representation’s content is just whatever it is that underwrites its semantic evaluation.

According to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurately if we substitute for belief some closely related attitude; for instance, several philosophers would prefer to say that knowledge entails psychological certainty, or acceptance. Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief, or a facsimile, are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato in view of his claim that knowledge is infallible while belief or opinion is fallible (Republic). Nonetheless this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones cites linguistic evidence to back up the incompatibility thesis. He notes that people often say things like ‘I don’t just believe she is guilty; I know she is’, where ‘just’ makes it especially clear that the speaker is signalling that she has something more salient than mere belief, namely knowledge, not that she has something inconsistent with belief. Compare: ‘You didn’t just hurt him, you killed him’.

H.A. Prichard (1966) offers a defence of the incompatibility thesis which hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, Prichard gives us no good reason to grant that states of belief are never ones involving confidence. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley’s version, which deals with psychological certainty rather than belief, holds that knowledge can exist in the absence of confidence about the item known, although knowledge might also be accompanied by confidence. Woozley remarks that the test of whether I know something is ‘what I can do, where what I can do may include answering questions’. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge; it would be peculiar to say, ‘I am unsure whether my answer is true; still, I know it is correct’. Woozley explains this tension using a distinction between conditions under which we are justified in making a claim, such as a claim to know something, and conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley’s defence of the separability thesis. In Radford’s view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example: Walter has forgotten that he learned some English history years prior, and yet he is able to give several correct responses to questions such as ‘When did the Battle of Hastings occur?’ Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066, he would deny having the belief that the Battle of Hastings took place in 1066. A fortiori he would deny being sure, or having the right to be sure, that 1066 was the correct date. Radford would nonetheless insist that Walter knows when the Battle occurred, since clearly he remembered the correct date. Radford admits that it would be inappropriate for Walter to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge: when we claim knowledge, we ought, at least, to believe that we have the knowledge we claim, or else our behaviour is ‘intentionally misleading’.

Those who agree with Radford’s defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Walter lacks beliefs about English history is plausible on this Cartesian picture, since Walter does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting the Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain’s (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Walter gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1973) takes a different tack against Radford. Walter does know that the Battle of Hastings took place in 1066; Armstrong grants Radford that point. However, Armstrong suggests that Walter consciously believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible, but no more than just possible, with the belief that such and such is not the case. What is more, Armstrong insists, Walter also believes that the Battle did occur in 1066. After all, had Walter been mistaught that the Battle occurred in 1066, and had he forgotten being ‘taught’ this and subsequently ‘guessed’ that it took place in 1066, we would surely describe the situation as one in which Walter’s false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford’s original case as one in which Walter’s true belief became unconscious but persisted long enough to cause his guess. So while Walter consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. After all, then, Radford does not have a counterexample to the claim that knowledge entails belief.

The externalism/internalism distinction has been mainly applied to theories of epistemic justification: an account of justification is internalist if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, and externalist if it allows that at least some of those factors need not be so accessible. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any explicit explication. The distinction has also been applied in a closely related way to accounts of knowledge, and in a rather different way to accounts of belief and thought content.

Perhaps the clearest example of an internalist position would be a foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible.

Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist position. Obviously too, a view that was externalist in relation to stronger versions of internalism, by not requiring that the believer actually be aware of all the justifying factors, could still be internalist in relation to weaker versions, by requiring that he at least be capable of becoming aware of them.

The most prominent recent externalist views have been versions of reliabilism, whose main requirement for justification is roughly that the belief be produced in a way or via a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, and so forth, that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem, at least, to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment (e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, and so forth), and not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors, which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification: for if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts: that only internally accessible content can either be justified or justify anything else. But such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

If the world is radically different from the way it appears, to the point that apparent epistemic vices are actually truth-conducive, presumably this should not make us retrospectively term such vices ‘virtues’, even if they are and have always been truth-conducive. A better suggestion would simply make the epistemic virtues those qualities which a truth-desiring person would want to have. For even if, unbeknown to us, some wild sceptical possibility is realized, this would not affect our desires (it being, again, unknown). Such a characterization, moreover, would seem to fit the virtues in our catalogue. Almost by definition, the truth-desiring person would want to be epistemically conscientious. And, given what seem to be the conditions pertaining to human life and knowledge, the truth-desiring person will also want to have the previously cited virtues of impartiality and intellectual courage.

Are, though, truth and the avoidance of error rich enough desires for the epistemically virtuous? Arguably not. For one thing, the virtuous inquirer aims not so much at having true beliefs as at discovering truths, a very different notion. Perpetual reading of a good encyclopaedia will expand my bank of true beliefs without markedly increasing humankind’s basic stock of truths. For Aristotle, too, one notes that true belief is not, as such, even a concern: the concern is the discovery of scientific or philosophical truth. But, of course, the mere expansion of our bank of truths, even of scientific and philosophical truths, is not itself the whole goal. Rather, one looks for new truths of an appropriate kind: rich, deep, explanatorily fertile, say. By this reckoning, then, the epistemically virtuous person seeks at least three related but separable ends: to discover new truths, to increase one’s explanatory understanding, and to have true rather than false beliefs.

Another important area of concern for epistemologists is the relation between epistemic virtue and epistemic justification. Obviously, the beliefs of an epistemically virtuous person must, I take it, be virtuously formed. But is a virtuously formed belief automatically a justified one? I would hold that if a belief is virtuously formed, this fully justifies the person in having it. However, the belief itself may lack adequate justification, as the evidence for it may be, through no fault of the person, still inadequate. Different philosophers, however, appear to have different intuitions on this point.

Hegel’s theory of justification contains both ‘externalist’ and ‘coherentist’ elements. He recognizes that some justification is provided by percepts and beliefs being generated reliably by our interaction with the environment. But Hegel contends that full justification additionally requires a self-conscious, reflective comprehension of one’s beliefs and experiences which integrates them into a systematic conceptual scheme that provides an account of them which is both coherent and reflexively self-consistent.

Hegel contends that the corrigibility of conceptual categories is a social phenomenon. Our partial ignorance about the world can be revealed and corrected because one and the same claim or principle can be applied, asserted and assessed by different people in the same context or by the same person in different contexts. Hegel’s theory of justification requires that an account be shown to be adequate to its domain and to be superior to its alternatives. In this regard, Hegel is a fallibilist, according to whom justification is provisional and ineluctably historical, since it occurs against the background of less adequate alternative views.

Meanwhile, one important difference between the naturalistic approach and more traditional ones becomes plain when the two are applied to sceptical questions. On the classical view, if we are to explain how knowledge is possible, it is illegitimate to make use of the resources of science: this would simply beg the question against the sceptic by making use of the very knowledge which he calls into question. Thus, Descartes’ attempt to answer the sceptic begins by rejecting all those beliefs about which any doubt is possible; Descartes must respond to the sceptic from a starting place which includes no beliefs at all. Naturalistic epistemologists, however, understand the demand to explain the possibility of knowledge differently. As Quine argues, sceptical questions arise from within science. It is precisely our success in understanding the world, and thus in seeing that appearance and reality may differ, that raises the sceptical question in the first place. We may thus legitimately use the resources of science to answer the question which science itself has raised. The question of how knowledge is possible should thus be construed as an empirical question: it is a question about how creatures such as we are (given what our best current scientific theories tell us we are like) may come to have knowledge of what our best current scientific theories tell us the world is like. Quine suggests that the Darwinian account of the origin of species gives a very general explanation of why it is that we should be well adapted to getting true beliefs about our environment, while an examination of human psychology will fill in the details of such an account. Although Quine himself does not suggest it, investigations in the sociology of knowledge are obviously relevant as well.

This approach to sceptical questions clearly makes them quite tractable, and its proponents see this, understandably, as an important advantage of the naturalistic approach. It is in part for this reason that current work in psychology and sociology is under such scrutiny by many epistemologists. Detractors of the naturalistic approach, however, argue that this way of dealing with sceptical questions simply bypasses the very questions which philosophers have long dealt with. Far from answering the traditional sceptical question, it is argued, the naturalistic approach merely changes the topic. Debates between naturalistic epistemologists and their critics thus frequently focus on whether this new way of doing epistemology adequately answers, transforms or simply ignores the questions which others see as central to epistemological inquiry. Some see the naturalistic approach as an attempt to abandon the philosophical study of knowledge entirely.

Our conscious states, according to Franz Brentano (1838-1917), are all objects of ‘inner perception’. Every such state is such that, for the person who is in it, it is evident to that person that he or she is in that state. This does not mean, however, that each of our conscious states is the object of a separate act of perception, and so the doctrine does not lead to an infinite regress.

Brentano holds that there are two types of conscious state: those that are ‘physical’ and those that are ‘intentional’. A ‘physical’, or sensory, state is a sensation or sense-impression: a qualitative individual composed of parts that are spatially related to each other. ‘Intentional’ states, e.g., believing, considering, hoping and desiring, are characterized by the facts that (1) they are ‘directed upon’ objects, (2) the objects upon which they are directed need not exist (e.g., we may fear things that do not exist), and (3) such states are not sensory. There is no sensation, no sensory individual, that can be identified with any particular intentional attitude.

Following Leibniz, Brentano distinguishes two types of certainty: the certainty we can have with respect to the existence of our conscious states, and the a priori certainty that may be directed upon necessary truths. These two types of certainty may be combined in a significant way. At a given moment, I may be certain, on the basis of inner perception, that there is believing, desiring, hoping and fearing, and I may also be certain a priori that there cannot be believing, desiring, hoping and fearing unless there is a ‘substance’ that believes, desires, hopes and fears. In such a case, it will be certain for me that there is a substance that believes, desires, hopes and fears. It is also axiomatic, Brentano says, that if one is certain that a substance of a certain sort exists, then one is identical with that substance.

Brentano makes use of only two purely epistemic concepts: that of being ‘certain’, or ‘evident’, and that of being ‘probable’. If a given hypothesis is probable, in the epistemic sense, for a particular person, then that person can be certain that the hypothesis is probable for him. Making use of the principles of probability, one may calculate the probability that a given hypothesis has on one’s evidence-base.
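The calculation gestured at here can be illustrated, as a modern gloss rather than Brentano’s own formalism, by Bayes’ theorem, which fixes the probability of a hypothesis h on an evidence-base e:

```latex
% Bayes' theorem: the probability of hypothesis h on evidence e
P(h \mid e) \;=\; \frac{P(e \mid h)\, P(h)}{P(e)}
```

On this gloss, once the prior probability of the hypothesis and the likelihood of the evidence are fixed, the probability the hypothesis has for a person on his evidence-base follows by calculation, just as the text suggests.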

Nonetheless, if our evidence-base is composed only of necessary truths and the facts of inner perception, then it is difficult to see how it could provide justification for any contingent truths other than those that pertain to states of consciousness. How could such an evidence-base even lend ‘probability’ to the hypothesis that there is a world of external physical things?

The awareness generated by an introspective act can have varying degrees of complexity. It might be a simple knowledge of (mental) things, such as a particular perception-episode, or it might be the more complex knowledge of truths about one’s own mind. In this latter, full-blown judgement form, introspection is usually the self-ascription of psychological properties and, when linguistically expressed, results in statements like ‘I am watching the spider’ or ‘I am repulsed’.

In psychology this deliberate inward look becomes a scientific method when it is ‘directed toward answering questions of theoretical importance for the advancement of our systematic knowledge of the laws and conditions of mental processes’. In philosophy, introspection (sometimes also called ‘reflection’) remains simply that notice which mind takes of its own operations and has been used to serve the following important functions:

(1) Methodological: Thought experiments are a powerful tool in philosophical investigation. The Ontological Argument, for example, asks us to try to think of the perfect being as lacking existence, and Berkeley’s Master Argument challenges us to conceive of an unseen tree; conceptual results are then drawn from our failure or success. For such experiments to work, we must not only have (or fail to have) the relevant conceptions but also know that we have (or fail to have) them, presumably by introspection.

(2) Metaphysical: A philosophy of mind needs to take cognizance of introspection. One can argue for ‘ghostly’ mental entities, for ‘qualia’, or for ‘sense-data’ by claiming introspective awareness of them. First-person psychological reports can have special consequences for the nature of persons and personal identity: Hume, for example, was content to reject the notion of a soul-substance because he failed to find such a thing by ‘looking within’. Moreover, some philosophers argue for the existence of additional perspectival facts, such as the fact of ‘what it is like’ to be the person I am or to have an experience of such-and-such a kind. Introspection, as our access to such facts, becomes important when we consider whether any objective account of the world could be complete.

(3) Epistemological: Surprisingly, the most important use made of introspection has been in accounting for our knowledge of the outside world. According to a foundationalist theory of justification, an empirical belief is either basic and ‘self-justifying’ or justified in relation to basic beliefs. Basic beliefs, therefore, constitute the rock-bottom of all justification and knowledge. Now introspective awareness is said to have a unique epistemological status: in it, we are said to achieve the best possible epistemic position, and consequently introspective beliefs constitute the foundation of all justification.

Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth and justification, and these combine in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than some other belief? The same stimuli may produce various beliefs, and various beliefs may produce the same action. What gives the belief the content it has is the role it plays within a network of relations to other beliefs: its role in inference and implication, for example. I infer different things from believing that I am reading a page in a book than from other beliefs, just as I infer that belief from different things than I infer other beliefs from.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is those systematic relations that give the belief the content it has; they are the fundamental source of the content of beliefs. That is how coherence comes in: a belief has the content that it does because of the way in which it coheres within a system of beliefs. Weak coherence theories affirm that coherence is one determinant of the content of belief, whereas strong coherence theories affirm that coherence is the sole determinant of the content of belief.

Nonetheless, the concept of the given refers to the immediate apprehension of the contents of sense experience, as expressed in first-person, present-tense reports of appearances. Apprehension of the given is seen as immediate both in a causal sense, since it lacks the usual causal chain involved in perceiving real qualities of physical objects, and in an epistemic sense, since judgements expressing it are justified independently of all other beliefs and evidence. Some proponents of the idea of the ‘given’ maintain that its apprehension is absolutely certain: infallible, incorrigible and indubitable. It has been claimed also that a subject is omniscient with regard to the given: if a property appears, then the subject knows this.

Without some independent indication that some of the beliefs within a coherent system are true, coherence is no indication of truth: fairy stories can cohere. But our criteria for justification must indicate to us the probable truth of our beliefs. Hence, within any system of beliefs there must be some privileged class with which others must cohere to be justified. In the case of empirical knowledge, such privileged beliefs must represent the point of contact between subject and world: they must originate in perception of the world. When challenged, however, we justify our ordinary perceptual beliefs about physical properties by appeal to beliefs about appearances. The latter seem more suitable as foundational, since there is no class of more certain perceptual beliefs to which we appeal for their justification.

The argument that foundations must be certain was offered by Lewis (1946). He held that no proposition can be probable unless some are certain. If the probability of all propositions or beliefs were relative to evidence expressed in others, and if these relations were linear, then any regress would apparently have to terminate in propositions or beliefs that are certain. But Lewis shows neither that such relations must be linear nor that regresses cannot terminate in beliefs that are merely probable or justified in themselves without being certain or infallible.

Arguments against the idea of the given originate with Kant (1724-1804), who argues that percepts without concepts do not yet constitute any form of knowing. Being non-epistemic, they presumably cannot serve as epistemic foundations. Once we recognize that we must apply concepts of properties to appearances and formulate beliefs utilizing those concepts before the appearances can play any epistemic role, it becomes more plausible that such beliefs are fallible. The argument was developed by Wilfrid Sellars (1963), according to whom the idea of the given involves a confusion between sensing particulars (having sense impressions), which is non-epistemic, and having non-inferential knowledge of propositions referring to appearances. The former may be necessary for acquiring perceptual knowledge, but it is not itself a primitive kind of knowing. Its being non-epistemic renders it immune from error, but also unsuitable as an epistemological foundation. The latter, non-inferential perceptual knowledge, is fallible, requiring concepts acquired through trained responses to public physical objects.

Contemporary foundationalists deny the coherentist’s claim while eschewing the claim that foundations, in the form of reports about appearances, are infallible. They seek alternatives to the given as foundations. Although arguments against infallibility are sound, other objections to the idea of foundations are not. That concepts of objective properties are learned prior to concepts of appearances, for example, implies neither that claims about appearances are less certain than claims about objective properties, nor that the latter are prior in chains of justification. That there can be no knowledge prior to the acquisition and consistent application of concepts allows for propositions whose truth requires only consistent application of concepts, and this may be so for some claims about appearances. Coherentists would add, however, that such beliefs stand in need of justification themselves and so cannot be foundations.

Until very recently it could have been said that most approaches to the philosophy of science were ‘cognitive’. This includes ‘logical positivism’, as nearly all of those who wrote about the nature of science would have agreed that science ought to be ‘value-free’. This was a particular emphasis of the first positivists, as it would be of their twentieth-century successors. Science, so it is said, deals with ‘facts’, and facts and values are irreducibly distinct. Facts are objective: they are what we seek in our knowledge of the world. Values are subjective: they bear the mark of human interest; they are the radically individual products of feeling and desire. Value cannot, therefore, be inferred from fact, and fact cannot be influenced by value. There were philosophers, notably some in the Kantian tradition, who viewed the relation between fact and value rather differently. But against them stood the legacy of three centuries of largely empiricist reflection on the ‘new’ sciences ushered in by Galileo Galilei (1564-1642), the Italian scientist whose distinction belongs to the history of physics and astronomy rather than to natural philosophy.

The philosophical importance of Galileo’s science rests largely upon the following closely related achievements: (1) his stunningly successful arguments against Aristotelian science, (2) his proofs that mathematics is applicable to the real world, (3) his conceptually powerful use of experiments, both actual and imagined, (4) his treatment of causality, replacing appeal to hypothesized natural ends with a quest for efficient causes, and (5) his unwavering confidence in the new style of theorizing that would come to be known as ‘mechanical explanation’.

A century later, the maxim that scientific knowledge is ‘value-laden’ seems almost as entrenched as its opposite was earlier. It is supposed that the dichotomy between fact and value has been breached, and philosophers of science seem quite at home with the thought that science and value may be closely intertwined after all. What has happened to bring about such an apparently radical change? What are its implications for the objectivity of science, the prized characteristic that, from Plato’s time onwards, has been assumed to set off real knowledge (epistēmē) from mere opinion (doxa)? To answer these questions adequately, one would first have to know something of the reasons behind the decline of logical positivism, as well as of the diversity of the philosophies of science that have succeeded it.

More generally, the interdisciplinary field of cognitive science is burgeoning on several fronts. Contemporary philosophical reflection about the mind, which has been quite intensive, has been influenced by this empirical inquiry, to the extent that the boundary lines between them are blurred in places.

Nonetheless, the philosophy of mind at its core remains a branch of metaphysics, traditionally conceived. Philosophers continue to debate foundational issues in terms not radically different from those in vogue in previous eras. Many issues in the metaphysics of science hinge on the notion of ‘causation’. This notion is as important in science as it is in everyday thinking, and much scientific theorizing is concerned specifically to identify the ‘causes’ of various phenomena. However, there is little philosophical agreement on what it is to say that one event is the cause of another.

Modern discussion of causation starts with the Scottish philosopher, historian and essayist David Hume (1711-76), who argued that causation is simply a matter of constant conjunction. Hume denies that we have innate ideas, that the causal relation is observably anything other than ‘constant conjunction’, that there are observable necessary connections anywhere, and that there is either an empirical or a demonstrative proof for the assumptions that the future will resemble the past and that every event has a cause. He denies, too, that there is an irresolvable dispute between advocates of free-will and determinism, that extreme scepticism is coherent, and that we can find the experiential source of our ideas of self, substance or God.

According to Hume (1978), one event causes another if and only if events of the type to which the first event belongs regularly occur in conjunction with events of the type to which the second event belongs. This formulation, however, leaves a number of questions open. Firstly, there is the problem of distinguishing genuine ‘causal laws’ from ‘accidental regularities’. Not all regularities are sufficiently law-like to underpin causal relationships. Being a screw in my desk could well be constantly conjoined with being made of copper, without its being true that those screws are made of copper because they are in my desk. Secondly, the idea of constant conjunction does not give a ‘direction’ to causation. Causes need to be distinguished from effects. But knowing that A-type events are constantly conjoined with B-type events does not tell us which of ‘A’ and ‘B’ is the cause and which the effect, since constant conjunction is itself a symmetric relation. Thirdly, there is the problem of ‘probabilistic causation’. When we say that causes and effects are constantly conjoined, do we mean that the effects are always found with the causes, or is it enough that the causes make the effects probable?

Many philosophers of science during the past century have preferred to talk about ‘explanation’ rather than causation. According to the covering-law model of explanation, something is explained if it can be deduced from premises which include one or more laws. As applied to the explanation of particular events, this implies that one particular event can be explained if it is linked by a law to another particular event. However, while they are often treated as separate theories, the covering-law account of explanation is at bottom little more than a variant of Hume’s constant conjunction account of causation. This affinity shows up in the fact that the covering-law account faces essentially the same difficulties as Hume’s: (1) in appealing to deduction from ‘laws’, it needs to explain the difference between genuine laws and accidentally true regularities; (2) it seems to allow the explanation of causes by effects, as well as of effects by causes; after all, it is as easy to deduce the height of the flag-pole from the length of its shadow and the laws of optics as the other way round; (3) are the laws invoked in explanation required to be exceptionless and deterministic, or is it acceptable, say, to appeal to the merely probabilistic fact that smoking makes cancer more likely, in explaining why some particular person develops cancer?

Nevertheless, one of the central aims of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. By introducing ‘teleological considerations’, such an account views beliefs as states with a biological purpose and analyses their truth conditions specifically as those conditions that they are biologically supposed to covary with.

A teleological theory of representation needs to be supplemented with a philosophical account of biological purpose, generally a selectionist account, according to which an item ‘F’ has purpose ‘G’ if and only if it is now present as a result of past selection by some process which favoured items with ‘G’. So a given belief type will have the purpose of covarying with ‘P’, say, if and only if some mechanism has selected it because it has covaried with ‘P’ in the past.

Similarly, a teleological theory holds that ‘r’ represents ‘x’ if it is r’s function to indicate (i.e., covary with) ‘x’. Teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical theories of functions and a-historical theories. Historical theories individuate functional states (hence, contents) in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was ‘learned’ or the way it evolved. A historical theory might hold that the function of ‘r’ is to indicate ‘x’ only if the capacity to token ‘r’ was developed (selected, learned) because it indicates ‘x’. Thus, a state physically indistinguishable from ‘r’ (physical states being a-historical) but lacking r’s historical origins would not represent ‘x’ according to historical theories.

The American philosopher of mind Jerry Alan Fodor (1935- ) is known for his resolute ‘realism’ about the nature of mental functioning, taking the analogy between thought and computation seriously. Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of ‘holists’ such as the American philosopher Donald Davidson (1917-2003), or ‘instrumentalists’ about mental ascription, such as the British philosopher of logic and language Michael Anthony Eardley Dummett (1925- ). In recent years he has become a vocal critic of some of the aspirations of cognitive science.

Nonetheless, the teleological suggestion is continually pressed by questions about ‘causation’ and ‘content’, and a fundamental difficulty must be considered. Suppose that there is a causal path from A’s to ‘A’s and a causal path from B’s to ‘A’s, and that our problem is to find some difference between B-caused ‘A’s and A-caused ‘A’s in virtue of which the former but not the latter misrepresent. Perhaps the two paths differ in their counterfactual properties. In particular, although A’s and B’s both actually cause ‘A’s, perhaps only A’s would cause ‘A’s in, as one might say, ‘optimal circumstances’. We could then hold that a symbol expresses its ‘optimal property’, viz., the property that would causally control its tokening in optimal circumstances. Correspondingly, when the tokening of a symbol is causally controlled by properties other than its optimal property, the tokens that eventuate are ipso facto wild.

Suppose, now, that this story about ‘optimal circumstances’ is proposed as part of a naturalized semantics for mental representations. In that case it is, of course, essential that it be possible to specify the optimal circumstances for tokening a mental representation in terms that are not themselves either semantical or intentional. (It would not do, for example, to identify the optimal circumstances for tokening a symbol as those in which the tokens are true; that would be to assume precisely the sort of semantical notion that the theory is supposed to naturalize.) The suggestion ~ to put it briefly ~ is that appeals to ‘optimality’ should be buttressed by appeals to ‘teleology’: Optimal circumstances are the ones in which the mechanisms that mediate symbol tokening are functioning ‘as they are supposed to’. In the case of mental representations, these would paradigmatically be circumstances where the mechanisms of belief fixation are functioning as they are supposed to.

So, then: The teleology of the cognitive mechanisms determines the optimal conditions for belief fixation, and the optimal conditions for belief fixation determine the content of beliefs. So the story goes.

To put the objection in slightly different words: The teleology story perhaps strikes one as plausible in that it understands one normative notion ~ truth ~ in terms of another normative notion ~ optimality. But the appearance is spurious: There is no guarantee that the kind of optimality that teleology reconstructs has much to do with the kind of optimality that the explication of ‘truth’ requires. When mechanisms of repression are working ‘optimally’ ~ when they are working ‘as they are supposed to’ ~ what they deliver are likely to be ‘falsehoods’.

Once again, there is no obvious reason why conditions that are optimal for the tokening of one sort of mental symbol need be optimal for the tokening of other sorts. Perhaps the optimal conditions for fixing beliefs about very large objects are different from the optimal conditions for fixing beliefs about very small ones, which are different again from the optimal conditions for fixing beliefs about sights. But this raises the possibility that if we are to say which conditions are optimal for the fixation of a belief, we will have to know what the content of the belief is ~ what it is a belief about. Our explication of content would then require a notion of optimality, whose explication in turn requires a notion of content, and the resulting pile would clearly be unstable.

Functional role theories, for their part, hold that r’s representing ‘x’ is grounded in the functional role ‘r’ has in the representing system, i.e., in the relations imposed by specified cognitive processes between ‘r’ and other representations in the system’s repertoire. Functional role theories take their cue from such common-sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

That being said, nowhere is the new period of collaboration between philosophy and other disciplines more evident than in the new subject of cognitive science. Cognitive science has from its very beginning been ‘interdisciplinary’ in character, and is in effect the joint property of psychology, linguistics, philosophy, computer science and anthropology. There is, therefore, a great variety of different research projects within cognitive science, but its central area, its hard-core ideology, rests on the assumption that the mind is best viewed as analogous to a digital computer. The basic idea behind cognitive science is that recent developments in computer science and artificial intelligence have enormous importance for our conception of human beings. The basic inspiration for cognitive science went something like this: Human beings do information processing. Computers are designed precisely to do information processing. Therefore, one way to study human cognition ~ perhaps the best way to study it ~ is to study it as a matter of computational information processing. Some cognitive scientists think that the computer is just a metaphor for the human mind; others think that the mind is literally a computer program. But it is fair to say that without the computational model there would not have been a cognitive science as we now understand it.

The ‘Essay Concerning Human Understanding’ is the first modern systematic presentation of empiricist epistemology, and as such had important implications for the natural sciences and for philosophy of science generally. Like his predecessor Descartes, the English philosopher John Locke (1632-1704) began his account of knowledge from the conscious mind aware of ideas. Unlike Descartes, however, he was concerned not to build a system based on certainty, but to identify the mind’s scope and limits. The premise upon which Locke built his account, including his account of the natural sciences, is that the ideas which furnish the mind are all derived from experience. He thus totally rejected any kind of innate knowledge. In this he consciously opposed Descartes, who had argued that it is possible to come to knowledge of fundamental truths about the natural world through reason alone. Descartes (1596-1650) had argued that we can come to know the essential nature of both ‘mind’ and ‘matter’ by pure reason. Locke accepted Descartes’s criterion of clear and distinct ideas as the basis for knowledge, but denied any source for them other than experience. Information that came in via the five senses (ideas of sensation) and ideas engendered from inner experience (ideas of reflection) were the building blocks of the understanding.

Locke combined his commitment to ‘the new way of ideas’ with a wholehearted espousal of the ‘corpuscular philosophy’ of the Irish scientist Robert Boyle (1627-92). This, in essence, was an acceptance of a revised, more sophisticated account of matter and its properties that had been advocated by the ancient atomists and recently supported by Galileo (1564-1642) and Pierre Gassendi (1592-1655). Boyle argued from theory and experiment that there were powerful reasons to justify some kind of corpuscular account of matter and its properties. He called the latter qualities, which he distinguished as primary and secondary. The distinction between primary and secondary qualities may be reached by two rather different routes: Either from the nature or essence of matter, or from the nature and essence of experience, though in practice these have tended to run together. The former considerations make the distinction seem an a priori, or necessary, truth about the nature of matter, while the latter make it appear an empirical hypothesis. Locke, too, accepted this account, arguing that the ideas we have of the primary qualities of bodies resemble those qualities as they are in the object, whereas the ideas of the secondary qualities, such as colour, taste, and smell, do not resemble their causes in the object.

There is no strong connection between acceptance of the primary-secondary quality distinction and Locke’s empiricism: Descartes, too, had argued strongly for the distinction, it gained near-universal acceptance among natural philosophers, and Locke embraced it within his more comprehensive empirical philosophy. But Locke’s empiricism did have major implications for the natural sciences, as he well realized. His account begins with an analysis of experience. All ideas, he argues, are either simple or complex. Simple ideas are those like the red of a particular rose or the roundness of a snowball. Complex ideas, such as our ideas of the rose or the snowball, are combinations of simple ideas. We may create new complex ideas in our imagination ~ a parallelogram, for example. But simple ideas can never be created by us: We just have them or not, and characteristically they are caused, for example, by the impact on our senses of rays of light or vibrations of sound in the air coming from a particular physical object. Since we cannot create simple ideas, and they are determined by our experience, our knowledge is in a very strict and uncompromising way limited. Besides, our experiences are always of the particular, never of the general. It is this particular simple idea or that particular complex idea that we apprehend. We never in that sense apprehend a universal truth about the natural world, but only particular instances. It follows from this that all claims to generality about that world ~ for example, all claims to identify what were then beginning to be called the laws of nature ~ must to that extent go beyond our experience and thus be less than certain.

The Scottish philosopher, historian, and essayist David Hume (1711-76) gives his famous discussion of causation in both his major philosophical works, the ‘Treatise’ (1739) and the ‘Enquiry’ (1777). The discussion is couched in terms of the concept of causality rather than of laws. Causation, Hume contends, involves three ideas:

1. That there should be a regular concomitance between events of the type of the cause and those of the type of the effect.

2. That the cause event should be contiguous with the effect event.

3. That the cause event should necessitate the effect event.

Tenets (1) and (2) occasion no difficulty for Hume, since he believes that there are patterns of sensory impressions corresponding unproblematically to the ideas of regular concomitance and of contiguity. But the third requirement is deeply problematic, in that the idea of necessity that figures in it seems to have no sensory impression correlated with it. However carefully and attentively we scrutinize a causal process, we do not seem to observe anything that might be the observed correlate of the idea of necessity. We do not observe any kind of activity, power, or necessitation. All we ever observe is one event following another, which is logically independent of it. Nor is the necessity logical necessity, since, as Hume observes, one can jointly assert the existence of the cause and deny the existence of the effect, as specified in the causal statement or the law of nature, without contradiction. What, then, are we to make of the seemingly central notion of necessity that is deeply embedded in the very idea of causation, or lawfulness? To this query Hume gives an ingenious and telling answer. There is an impression corresponding to the idea of causal necessity, but it is a psychological phenomenon: Our expectation that an event similar to those we have already observed to be correlated with the cause-type of event will occur in this case too. Where does that impression come from? It is created, as a kind of mental habit, by the repeated experience of regular concomitance between events of the type of the effect and events of the type of the cause. All that corresponds to the idea of causation, then, is regular concomitance ~ and the law of nature asserts nothing but the existence of the regular concomitance.

At this point in our narrative, the question at once arises as to whether this factor of life in nature, thus interpreted, corresponds to anything that we observe in nature. All philosophy is an endeavour to obtain a self-consistent understanding of things observed. Thus, its development is guided in two ways: One is the demand for coherent self-consistency, the other is the elucidation of things observed. With what direct observations are we to conduct such comparisons? Should we turn to science? No. There is no way in which the scientific endeavour can detect the aliveness of things: Its methodology rules out the possibility of such a finding. On this point, the English mathematician and philosopher Alfred North Whitehead (1861-1947) comments that science can find no individual enjoyment in nature, just as science can find no creativity in nature; it finds mere rules of succession. These negations are true of natural science. They are inherent in its methodology. The reason for this blindness of physical science lies in the fact that such science deals with only half the evidence provided by human experience. It divides the seamless coat ~ or, to change the metaphor into a happier form, it examines the coat, which is superficial, and neglects the body, which is fundamental.

Whitehead claims that the methodology of science makes it blind to a fundamental aspect of reality, namely, the primacy of experience: It neglects half of the evidence. Working within Descartes’ dualistic frame of reference, with matter and mind as separate and incommensurate, science limits itself to the study of objectivised phenomena, neglecting the subject and the mental events that are his or her experience.

Both the adoption of the Cartesian paradigm and the neglect of mental events are reason enough to suspect ‘blindness’, but there is no need to rely on suspicions. This blindness is evident. Scientific discoveries, impressive as they are, are fundamentally superficial. Science can express regularities observed in nature, but it cannot explain the reasons for their occurrence. Consider, for example, Newton’s law of gravity. It shows that such apparently disparate phenomena as the falling of an apple and the revolution of the earth around the sun are aspects of the same regularity ~ gravity. According to this law the gravitational attraction between two objects decreases in proportion to the square of the distance between them. Why is that so? Newton could not provide an answer. Simpler still, why does space have three dimensions? Why is time one-dimensional? Whitehead notes, ‘None of these laws of nature gives the slightest evidence of necessity. They are [merely] the modes of procedure which within the scale of observation do in fact prevail’.
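
The inverse-square regularity just mentioned can be stated, though not explained, in a few lines of code. The following sketch (an illustrative calculation, not part of Whitehead's text) simply exhibits the ‘mode of procedure’ that science records:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2.
# The law records a regularity; it offers no reason why the exponent is 2.
G = 6.674e-11  # gravitational constant, N·m²/kg²

def gravitational_force(m1, m2, r):
    """Attractive force (newtons) between point masses m1, m2 (kg) at distance r (m)."""
    return G * m1 * m2 / r**2

f_near = gravitational_force(1.0, 1.0, 1.0)
f_far = gravitational_force(1.0, 1.0, 2.0)
# Doubling the distance reduces the attraction to a quarter.
print(f_near / f_far)  # 4.0
```

That the ratio is exactly four is, in Whitehead's terms, a mode of procedure that prevails; nothing in the calculation says why it must.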

This analysis reveals that the capacity of science to fathom the depths of reality is limited. For example, if reality is in fact made up of discrete units, and these units have the fundamental character of being ‘the pulsing throbs of experience’, then science may be in a position to discover the discreteness: But it has no access to the subjective side of nature since, as the Austrian physicist Erwin Schrödinger (1887-1961) points out, we ‘exclude the subject of cognizance from the domain of nature that we endeavour to understand’. It follows that in order to find ‘the elucidation of things observed’ in relation to the experiential or aliveness aspect, we cannot rely on science; we need to look elsewhere.

If, instead of relying on science, we rely on our immediate observation of nature and of ourselves, we find, first, that this [i.e., Descartes’] stark division between mentality and nature has no ground in our fundamental observation: We find ourselves living within nature. Secondly, we should conceive mental operations as among the factors which make up the constitution of nature. And thirdly, we should reject the notion of idle wheels in the process of nature: Every factor makes a difference, and that difference can only be expressed in terms of the individual character of that factor.

Whitehead proceeds to analyse our experiences in general, and our observations of nature in particular, and ends up with ‘mutual immanence’ as a central theme. This mutual immanence is obvious in the case of experience: I am a part of the universe, and, since I experience the universe, the experienced universe is part of me. Whitehead gives an example: ‘I am in the room, and the room is an item in my present experience. But my present experience is what I am now’. A generalization of this relationship to the case of any actual occasion yields the conclusion that ‘the world is included within the occasion in one sense, and the occasion is included in the world in another sense’. The idea that each actual occasion appropriates its universe follows naturally from such considerations.

The description of an actual entity as a distinct unit is, therefore, only one part of the story. The other, complementary part is this: The very nature of each and every actual entity is one of interdependence with all the other actual entities in the universe. Each and every actual entity is a process of prehending, or appropriating, all the other actual entities and creating one new entity out of them all, namely, itself.

There are two general strategies for distinguishing laws from accidentally true generalizations. The first stands by Hume’s idea that causal connections are mere constant conjunctions, and then seeks to explain why some constant conjunctions are better than others. That is, this first strategy accepts the principle that causation involves nothing more than certain events always happening together with certain others, and then seeks to explain why some such patterns ~ the ‘laws’ ~ matter more than others ~ the ‘accidents’. The second strategy, by contrast, rejects the Humean presupposition that causation involves nothing more than constant conjunction, and instead postulates a relationship of ‘necessitation’, a kind of ‘cement’, which links events that are connected by law, but not those events (like having a screw in my desk and being made of copper) that are only accidentally conjoined.

There are a number of versions of the first, Humean strategy. The most successful, originally proposed by the Cambridge mathematician and philosopher F.P. Ramsey (1903-30) and later revived by the American philosopher David Lewis (1941-2002), holds that laws are those true generalizations that can be fitted into an ideal system of knowledge. The thought is that the laws are those patterns that are explicable in terms of basic science, either as fundamental principles themselves or as consequences of those principles, while accidents, although true, have no such explanation. Thus, ‘All water at standard pressure boils at 100°C’ is a consequence of the laws governing molecular bonding, but the fact that ‘All the screws in my desk are copper’ is not part of the deductive structure of any satisfactory science. Ramsey neatly encapsulated this idea by saying that laws are ‘consequences of those propositions which we should take as axioms if we knew everything and organized it as simply as possible in a deductive system’.

Advocates of the alternative, non-Humean strategy object that the difference between laws and accidents is not a ‘linguistic’ matter of deductive systematization, but rather a ‘metaphysical’ contrast between the kinds of links they report. They argue that there is a link in nature between being at 100°C and boiling, but not between being ‘in my desk’ and being ‘made of copper’, and that this has nothing to do with how the description of this link may fit into theories. According to the forthright Australian D.M. Armstrong (1983), the most prominent defender of this view, the real difference between laws and accidents is simply that laws report relationships of natural ‘necessitation’, while accidents only report that two types of events happen to occur together.

Armstrong’s view may seem intuitively plausible, but it is arguable that the notion of necessitation simply restates the problem rather than solving it. Armstrong says that necessitation involves something more than constant conjunction: If two events are related by necessitation, then it follows that they are constantly conjoined; but two events can be constantly conjoined without being related by necessitation, as when the constant conjunction is just a matter of accident. So necessitation is a stronger relationship than constant conjunction. However, Armstrong and other defenders of this view say very little about what this extra strength amounts to, except that it distinguishes laws from accidents. Armstrong’s critics argue that a satisfactory account of laws ought to cast more light than this on the nature of laws.

Hume said that the earlier of two causally related events is always the cause, and the later the effect. However, there are a number of objections to using the earlier-later ‘arrow of time’ to analyse the directional ‘arrow of causation’. For a start, it seems possible in principle that some causes and effects could be simultaneous. What is more, the idea that time is directed from ‘earlier’ to ‘later’ itself stands in need of philosophical explanation ~ and one of the most popular explanations is that the direction of time itself depends on the direction of causation, that is, that ‘earlier’ is simply the direction in which causes lie, and ‘later’ the direction of effects. If we explain the direction of time in this way, then we will clearly need to find some account of the direction of causation which does not itself assume the direction of time.

A number of such accounts have been proposed. David Lewis (1979) has argued that the asymmetry of causation derives from an ‘asymmetry of over-determination’. The over-determination of present events by past events ~ consider a person who dies after simultaneously being shot and struck by lightning ~ is a very rare occurrence. By contrast, the multiple ‘over-determination’ of present events by future events is absolutely normal. This is because the future, unlike the past, will always contain multiple traces of any present event. To use Lewis’s example, when the president presses the red button in the White House, the future effects do not only include the dispatch of nuclear missiles, but also the fingerprint on the button, his trembling, the further depletion of his gin bottle, the recording of the button’s click on tape, the emission of light waves bearing the image of his action through the window, the passage of the warning signal along the wires, and so on, and so on.

Lewis relates this asymmetry of over-determination to the asymmetry of causation as follows. If we suppose the cause of a given effect to have been absent, then this implies the effect would have been absent too, since (apart from freak set-ups like the lightning-shooting case) there will not be any other causes left to ‘fix’ the effect. By contrast, if we suppose a given effect of some cause to have been absent, this does not imply the cause would have been absent, for there are still all the other traces left to ‘fix’ the cause. Lewis argues that these counterfactual considerations suffice to show why causes are different from effects.

Other philosophers appeal to a probabilistic variant of Lewis’s asymmetry. Following the philosopher of science and probability theorist Hans Reichenbach (1891-1953), they note that the different causes of any given type of effect are normally probabilistically independent of each other; by contrast, the different effects of any given type of cause are normally probabilistically correlated. For example, both obesity and high excitement can cause heart attacks, but this does not imply that fat people are more likely to get excited than thin ones. Yet the fact that both lung cancer and nicotine-stained fingers can result from smoking does imply that lung cancer is more likely among people with nicotine-stained fingers. So this account distinguishes effects from causes by the fact that the former, but not the latter, are probabilistically dependent on each other.
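
Reichenbach's point about the correlation of joint effects can be checked with a toy simulation; the probabilities below are invented purely for illustration and are not drawn from the text:

```python
import random

random.seed(1)
N = 100_000

# Common cause: smoking raises the probability of two distinct effects,
# nicotine-stained fingers and lung cancer, which do not cause each other.
stained, cancer = [], []
for _ in range(N):
    smokes = random.random() < 0.3
    stained.append(random.random() < (0.8 if smokes else 0.05))
    cancer.append(random.random() < (0.2 if smokes else 0.01))

def p_given(effect, condition):
    """Estimate P(effect | condition) from the sampled pairs."""
    hits = [e for e, c in zip(effect, condition) if c]
    return sum(hits) / len(hits)

p_if_stained = p_given(cancer, stained)
p_if_clean = p_given(cancer, [not s for s in stained])
# The two effects of the common cause are probabilistically correlated:
# cancer is more likely among the nicotine-stained, though the fingers
# themselves cause nothing.
print(p_if_stained > p_if_clean)  # True
```

Running the same simulation with two independent causes of a single effect (obesity and excitement, say) would show no such correlation between the causes, which is exactly the asymmetry Reichenbach exploits.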

However, there is another course of thought in philosophy of science: The tradition of ‘negative’ or ‘eliminative’ induction. From the English statesman and philosopher Francis Bacon (1561-1626), and in modern times from the philosopher of science Karl Raimund Popper (1902-1994), we have the idea of using logic to bring falsifying evidence to bear on hypotheses about what must universally be the case. Many thinkers accept in essence Popper’s solution to the problem of demarcating proper science from its imitators, namely that the former results in genuinely falsifiable theories whereas the latter do not. Falsifiability, indeed, lay behind many people’s objections to such ideologies as psychoanalysis and Marxism.

Hume was interested in the processes by which we acquire knowledge: The processes of perceiving and thinking, of feeling and reasoning. He recognized that much of what we claim to know derives from other people at secondhand, thirdhand, or worse. Moreover, our perceptions and judgements can be distorted by many factors ~ by what we are studying, as well as by the very act of study itself. The main reason, however, behind his emphasis on ‘probabilities and those other measures of evidence on which life and action entirely depend’ is this: All reasonings concerning ‘matters of fact’ are founded on the relation of cause and effect, and we can never infer the existence of one object from another unless they are connected together, whether mediately or immediately.

When we apparently observe a whole sequence, say of one ball hitting another, what exactly do we observe? And in the much commoner cases, when we wonder about the unobserved causes or effects of the events we observe, what precisely are we doing?

Hume recognized that a notion of ‘must’ or necessity is a peculiar feature of causal relations, inferences and principles, and he challenges us to explain and justify the notion. He argued that there is no observable feature of events, nothing like a physical bond, which can properly be labelled the ‘necessary connection’ between a given cause and its effect: Events simply are, they merely occur, and there is no ‘must’ or ‘ought’ about them. However, repeated experience of pairs of events sets up a habit of expectation in us, such that when one of the pair occurs we inescapably expect the other. This expectation makes us infer the unobserved cause or unobserved effect of the observed event, and we mistakenly project this mental inference onto the events themselves. There is no necessity observable in causal relations; all that can be observed is regular sequence. There is necessity in causal inferences, but only in the mind. Once we realize that causation is a relation between pairs of events, we also realize that often we are not present for the whole sequence which we want to divide into ‘cause’ and ‘effect’. Our understanding of the causal relation is thus intimately linked with the role of causal inference, because only causal inferences entitle us to ‘go beyond what is immediately present to the senses’. But now two very important assumptions emerge behind the causal inference: The assumption that like causes, in like circumstances, will always produce like effects, and the assumption that ‘the course of nature will continue uniformly the same’ ~ or, briefly, that the future will resemble the past. Unfortunately, this last assumption lacks either empirical or a priori proof; that is, it can be conclusively established neither by experience nor by thought alone.

Hume frequently endorsed a standard seventeenth-century view that all our ideas are ultimately traceable, by analysis, to sensory impressions of an internal or external kind. Accordingly, he claimed that all his theses are based on ‘experience’, understood as sensory awareness together with memory, since only experience establishes matters of fact. But is our belief that the future will resemble the past properly construed as a belief concerning only a matter of fact? As the English philosopher Bertrand Russell (1872-1970) remarked earlier this century, the real problem that Hume raises is whether future futures will resemble future pasts, in the way that past futures really did resemble past pasts. Hume declares that ‘if . . . the past may be no rule for the future, all experience becomes useless, and can give rise to no inference or conclusion’. And yet, he held, the supposition cannot stem from innate ideas, since there are no innate ideas on his view; nor can it stem from any abstract formal reasoning. For one thing, the future can surprise us, and no formal reasoning seems able to embrace such contingencies: For another, even animals and unthinking people conduct their lives as if they assume the future resembles the past: Dogs return for buried bones, children avoid a painful fire, and so forth. Hume is not deploring the fact that we have to conduct our lives on the basis of probabilities, and he is not saying that inductive reasoning could or should be avoided or rejected. Rather, he accepted inductive reasoning but tried to show that whereas formal reasoning of the kind associated with mathematics cannot establish or prove matters of fact, factual or inductive reasoning lacks the ‘necessity’ and ‘certainty’ associated with mathematics.
His position is therefore clear: Because ‘every effect is a distinct event from its cause’, only investigation can settle whether any two particular events are causally related. Causal inferences cannot be drawn with the force of logical necessity familiar to us from deduction; but, although they lack such force, they should not be discarded. In the context of causation, inductive inferences are inescapable and invaluable. What, then, makes ‘experience’ the standard of our future judgement? The answer is ‘custom’: It is a brute psychological fact, without which even animal life of a simple kind would be more or less impossible. ‘We are determined by custom to suppose the future conformable to the past’ (Hume, 1978); nevertheless, whenever we need to calculate likely events we must supplement and correct such custom by self-conscious reasoning.

Nonetheless, the causal theory of reference will fail once it is recognized that all representation must occur under some aspect, and that the extensionality of causal relations is inadequate to capture the aspectual character of reference. The only kind of causation that could be adequate to the task of reference is intentional causation, or mental causation; but the causal theory of reference cannot concede that reference is ultimately achieved by some mental device, since the whole approach behind the causal theory was to eliminate the traditional mentalism of theories of reference and meaning in favour of objective causal relations in the world. The causal theory, though at present by far the most influential theory of reference, will prove to be a failure for these reasons.

If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural states. Our concepts of mental states such as thinking, sensing, and feeling are, of course, different from our concepts of neural states, of whatever sort. But that is no problem for the identity theory. As J.J.C. Smart (1962), who first argued for the identity theory, emphasized, the requisite identities do not depend on our understanding concepts of mental states or the meanings of mental terms. For ‘a’ to be identical with ‘b’, ‘a’ and ‘b’ must have the same properties, but the terms ‘a’ and ‘b’ need not mean the same. The principle at work here is the indiscernibility of identicals: If ‘A’ is identical with ‘B’, then every property that ‘A’ has, ‘B’ has, and vice versa. This is sometimes known as Leibniz’s Law.
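
The indiscernibility of identicals can be written in second-order notation, quantifying over properties F:

```latex
\forall F \,\bigl( A = B \;\rightarrow\; (FA \leftrightarrow FB) \bigr)
```

The identity theorist relies on the left-to-right direction: if pain just is a certain neural state, the two must share every property.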

But a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. So disallowing dualism with respect to substances and their states simply leads to its reappearance at the level of the properties of those states.

There are two broad categories of mental property. Mental states such as thoughts and desires, often called ‘propositional attitudes’, have ‘content’ that can be described by ‘that’ clauses. For example, one can have a thought, or desire, that it will rain. These states are said to have intentional properties, or ‘intentionality’. Sensations, such as pains and sense impressions, lack intentional content, and have instead qualitative properties of various sorts.

The problem about mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irretrievably non-physical. And if mental states do actually have non-physical properties, the identity of mental states with physical states would not sustain a thoroughgoing mind-body materialism.

The Cartesian doctrine that the mental is in some way non-physical is so pervasive that even advocates of the identity theory have sometimes accepted it. The idea that the mental is non-physical underlies, for example, the insistence by some identity theorists that mental properties are really neutral as between being mental and physical. To be neutral in this way, a property would have to be neutral as to whether it is mental at all. Only if one thought that being mental meant being non-physical would one hold that defending materialism required showing that the ostensibly mental properties are neutral as regards whether or not they are mental.

But holding that mental properties are non-physical has a cost that is usually not noticed. A phenomenon is mental only if it has some distinctively mental property. So, strictly speaking, a materialist who claims that mental properties are non-physical must deny that any mental phenomena actually exist. This is the eliminative-materialist position advanced by the American philosopher and critic Richard Rorty (1979).

According to Rorty (1931- ), ‘mental’ and ‘physical’ are incompatible terms. Nothing can be both mental and physical, so mental states cannot be identical with bodily states. Rorty traces this incompatibility to our views about incorrigibility: we take reports of one’s own mental states to be incorrigible, but not reports of physical occurrences. But he also argues that we can imagine a people who describe themselves and each other using terms just like our mental vocabulary, except that those people do not take the reports made with that vocabulary to be incorrigible. Since Rorty takes a state to be a mental state only if one’s reports about it are taken to be incorrigible, his imaginary people do not ascribe mental states to themselves or each other. Nonetheless, the only difference between their language and ours is that we take as incorrigible certain reports which they do not. So their language has no less descriptive or explanatory power than ours. Rorty concludes that our mental vocabulary is idle, and that there are no distinctively mental phenomena.

This argument rests on building incorrigibility into the meaning of the term ‘mental’. If we do not, the way is open to interpret Rorty’s imaginary people as simply having a different theory of mind from ours, on which reports of one’s own mental states are corrigible. Their reports would thus be about mental states, as construed by their theory. Rorty’s thought experiment would then provide grounds to conclude not that our mental terminology is idle, but only that this alternative theory of mental phenomena is correct. His thought experiment would thus sustain the non-eliminativist view that mental states are bodily states. Whether Rorty’s argument supports his eliminativist conclusion or the standard identity theory, therefore, depends solely on whether or not one holds that the mental is in some way non-physical.

Paul M. Churchland (1981) advances a different argument for eliminative materialism. According to Churchland, the common-sense concepts of mental states contained in our present folk psychology are, from a scientific point of view, radically defective. But we can expect that eventually a more sophisticated theoretical account will replace those folk-psychological concepts, showing that mental phenomena, as described by current folk psychology, do not exist. Since that account would be integrated into the rest of science, we would have a thoroughgoing materialist treatment of all phenomena. This argument, unlike Rorty’s, does not rely on assuming that the mental is non-physical.

But even if current folk psychology is mistaken, that does not show that mental phenomena do not exist, only that they are not the way folk psychology describes them. We could conclude that they do not exist only if the folk-psychological claims that turn out to be mistaken actually define what it is for a phenomenon to be mental. Otherwise, the new theory would itself be about mental phenomena, and would help show that they are identical with physical phenomena. Churchland’s argument, like Rorty’s, thus depends on a special way of defining the mental, which we need not adopt. It is likely that any argument for eliminative materialism will require some such definition, without which the argument would instead support the identity theory.

Despite initial appearances, the distinctive properties of sensations may be neutral as between being mental and physical; in a phrase borrowed from the English philosopher and classicist Gilbert Ryle (1900-76), they are ‘topic-neutral’. My having a sensation of red consists in my being in a state that is similar, in respects that we need not specify, to something that occurs in me when I am in the presence of certain stimuli. Because the respect of similarity is not specified, the property is neither distinctively mental nor distinctively physical. But everything is similar to everything else in some respect or other. So leaving the respect of similarity unspecified makes this account too weak to capture the distinguishing properties of sensations.

A more sophisticated reply to the difficulty about mental properties is due independently to the Australian philosopher David Malet Armstrong (1926- ) and the American philosopher David Lewis (1941-2002), who argued that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will still be neutral as between being mental and physical, since anything can bear a causal relation to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.

This causal theory is appealing, but it is misguided to attempt to construe the distinctive properties of mental states as neutral between being mental and physical. To be neutral as regards being mental or physical is to be neither distinctively mental nor distinctively physical. But since thoughts and sensations are distinctively mental states, for a state to be a thought or a sensation is perforce for it to have some characteristically mental property. We inevitably lose the distinctively mental if we construe these properties as being neither mental nor physical.

Not only is the topic-neutral construal misguided: the problem it was designed to solve is equally so. That problem stemmed from the idea that the mental must have some non-physical aspect ~ if not at the level of people or their mental states, then at the level of the distinctively mental properties of those states. It should be mentioned, however, that properties can be more complicated. In the sentence ‘Walter is married to Julie’, we attribute to Walter the relational property of being married to Julie, unlike the simple property attributed in ‘Walter is bald’. Consider the sentence ‘Walter is bearded’. The word ‘Walter’ in this sentence is a bit of language ~ a name of some individual human being ~ and no one would be tempted to confuse the word with what it names. Consider the expression ‘is bearded’: this too is a bit of language ~ philosophers call it a ‘predicate’ ~ and it brings to our attention some property or feature which, if the sentence is true, is possessed by Walter. Understood in this way, a property is not itself linguistic, though it is expressed, or conveyed, by something that is, namely a predicate. It might be said that a property is a real feature of the world, and that it should be contrasted just as sharply with any predicate we use to express it as the name ‘Walter’ is contrasted with the person himself. Just what ontological status should be accorded to properties remains controversial. The question bears on the position known as ‘anomalous monism’, associated with the American philosopher Donald Davidson (1917-2003), who adopts a position that explicitly repudiates reductive physicalism yet purports to be a version of materialism. Davidson holds that although token mental events and states are identical with physical events and states, mental ‘types’ ~ i.e., kinds, or properties ~ are neither identical with, nor nomically co-extensive with, physical types.
His argument for this position relies largely on the contention that the correct assignment of mental and actional properties to a person is always a holistic matter, involving a global, temporally diachronic ‘intentional interpretation’ of the person. But as many philosophers have in effect pointed out, accommodating the claims of mentalistic explanation evidently requires more than just token mental/physical identities. Mentalistic explanation presupposes not merely that mental events are causes, but also that they have causal/explanatory relevance as mental ~ i.e., relevance insofar as they fall under mental kinds or types. The question, then, is whether Davidson’s position, which denies that there are strict psychological or psychophysical laws, can accommodate the causal/explanatory relevance of the mental qua mental, or whether it amounts to ‘epiphenomenalism’ with respect to mental properties.

But the idea that the mental is in some respect non-physical cannot be assumed without argument. Plainly, the distinctively mental properties of mental states are unlike any other properties we know about: only mental states have properties at all like the qualitative properties of sensations, or anything like the intentional properties of thoughts and desires. However, this does not show that mental properties are not physical properties, for not all physical properties are like the standard ones; mental properties might still be special kinds of physical properties. It is question-begging to assume otherwise. The doctrine that mental properties are non-physical is simply an expression of the Cartesian doctrine that the mental is automatically non-physical.

It is sometimes held that properties should count as physical properties only if they can be defined using the terms of physics. This is far too restrictive. Nobody would hold that to reduce biology to physics, for example, we must define all biological properties using only terms that occur in physics. And even putting ‘reduction’ aside, if certain biological properties could not be so defined, that would not mean that those properties were in any way non-physical. The relevant sense of ‘physical’ must be broad enough to include not only biological properties but also most common-sense, macroscopic properties. Bodily states are uncontroversially physical in the relevant way. So we can recast the identity theory as asserting that mental states are identical with bodily states.

In the course of reaching conclusions about the origin and limits of knowledge, Locke had occasion to concern himself with topics which are of philosophical interest in themselves. One of these is the question of identity, which includes, more specifically, the question of personal identity: what are the criteria by which a person at one time is numerically the same person as a person encountered at another time? Locke points out that in asking whether ‘this is what was here before’, it matters what kind of thing ‘this’ is meant to be. If ‘this’ is meant as a mass of matter, then it is what was here before so long as it consists of the same material particles; but if it is meant as a living body, then its consisting of the same particles does not matter and the case is different. ‘A colt grown up to a horse, sometimes fat, sometimes lean, is all the while the same horse, though . . . there may be a manifest change of the parts.’ So, when we think about personal identity, we need to be clear about a distinction between two things which ‘the ordinary way of speaking runs together’ ~ the idea of ‘man’ and the idea of ‘person’. As with any other animal, the identity of a man consists ‘in nothing but a participation of the same continued life, by constantly fleeting particles of matter, in succession vitally united to the same organized body’. The idea of a person, however, is not that of a living body of a certain kind. A person is a ‘thinking intelligent being, that has reason and reflection’, and such a being ‘will be the same self, as far as the same consciousness can extend to actions past or to come’.
Locke is at pains to argue that this continuity of self-consciousness does not necessarily involve the continuity of some immaterial substance, in the way that Descartes had held. For all we know, says Locke, consciousness and thought may be powers which can be possessed by ‘systems of matter fitly disposed’; and even if this is not so, the question of the identity of a person is not the same as the question of the identity of an ‘immaterial substance’. For just as the identity of a horse can be preserved through changes of matter, and depends not on the identity of a continued material substance but on its partaking of one continued life, so the identity of a person does not depend on the continuity of an immaterial substance. The unity of one’s continued consciousness does not depend on its being ‘annexed only to one individual substance, [and not] . . . continued in a succession of several substances’. For Locke, then, personal identity consists in an identity of consciousness, and not in the identity of some substance whose essence it is to be conscious.

To understand how causal mechanisms and connections of meaning bear on psychoanalysis, it will help to take a historical route and focus on the terms in which analytical philosophers of mind began to discuss psychoanalytic explanation seriously. These terms were provided by the long-standing and presently unconcluded debate over cause and meaning in psychoanalysis.

It is not hard to see why psychoanalysis should be viewed in terms of cause and meaning. On the one hand, Freud’s theories introduce a panoply of concepts which appear to characterize mental processes as mechanical and non-meaningful. Included are Freud’s neurological model of the mind, as outlined in his ‘Project for a Scientific Psychology’; more broadly, his ‘economic’ description of the mental as having properties of force or energy, e.g., as ‘cathecting’ objects; and his account of the mechanism of repression. So it would seem that psychoanalytic explanation employs terms logically at variance with those of ordinary, common-sense psychology, where mechanisms do not play a central role. But on the other hand, and equally striking, there is the fact that psychoanalysis proceeds through interpretation and engages in a relentless search for meaningful connections in mental life ~ something that even a superficial examination of ‘The Interpretation of Dreams’ or ‘The Psychopathology of Everyday Life’ cannot fail to impress upon one. Psychoanalytic interpretation adduces meaningful connections between disparate and often apparently dissociated mental and behavioural phenomena, directed by the goal of ‘thematic coherence’: that of giving mental life the sort of unity that we find in a work of art or a cogent narrative. In this respect, psychoanalysis would seem to adopt as its central plank the most salient feature of ordinary psychology, its insistence on relating actions to the reasons for them through contentful characterizations of each that make their connection seem rational, or intelligible: a goal that seems remote from anything found in the physical sciences.

The application to psychoanalysis of the perspective afforded by the cause-meaning debate can also be seen as a natural consequence of another factor, namely the semi-paradoxical nature of psychoanalysis’ explananda. With respect to all irrational phenomena, something like a paradox arises. Irrationality involves a failure of rational connectedness, and hence of meaningfulness; so, if it is to have an explanation of any kind, relations that are non-meaningful, i.e., causal, appear to be needed. And yet, as observed above, it would seem that, in offering explanations for irrationality ~ plugging the ‘gaps’ in consciousness ~ what psychoanalytic explanation hinges on is precisely the postulation of further, though non-apparent, connections of meaning.

For these two reasons, then ~ the logical heterogeneity of its explanations and the ambiguous status of its explananda ~ it may seem that an examination in terms of the concepts of cause and meaning will provide the key to a philosophical elucidation of psychoanalysis. The possible views of psychoanalytic explanation that may result from such an examination can be arranged along two dimensions. (1) Psychoanalytic explanation may be viewed, after reconstruction, as either causal and non-meaningful, or meaningful and non-causal, or as comprising both meaningful and causal elements in various combinations. (2) On each of these reconstructions, psychoanalytic explanation may then be viewed as either licensed or invalidated, depending on one’s view of the logical nature of psychology.

So, for instance, some philosophical discussions infer that psychoanalytic explanation is void, simply on the grounds that it is committed to causality in psychology. On another, opposed view, it is the virtue of psychoanalytic explanation that it imputes causal relations, since only causal relations can be relevant to explaining the failures of meaningful psychological connections. On yet another view, it is psychoanalysis’ commitment to meaning which is its great fault: it is held that the stories that psychoanalysis tries to tell do not really, on examination, explain successfully. And so on.

It is fair to say that the debates between these various positions fail to establish anything definite about psychoanalytic explanation. There are two reasons for this. First, there are several different strands in Freud’s writings, each of which may be drawn on, apparently conclusively, in support of each alternative reconstruction. Secondly, preoccupation with a wholly general problem in the philosophy of mind, that of cause and meaning, distracts attention from the distinguishing features of psychoanalytic explanation. At this point, and in order to prepare the way for a plausible reconstruction of psychoanalytic explanation, it is appropriate to take a step back and take a fresh look at the cause-meaning issue in the philosophy of psychoanalysis.

Suppose, first, that some sort of cause-meaning compatibilism ~ such as that of the American philosopher Donald Davidson (1917-2003) ~ holds for ordinary psychology. On this view, psychological explanation requires some sort of parallelism of causal and meaningful connections, grounded in the idea that psychological properties play causal roles determined by their content. Nothing in psychoanalytic explanation is inconsistent with this picture: after his abandonment of the early ‘Project’, Freud consistently viewed psychology as autonomous relative to neurophysiology, and at the same time as congruent with a broadly naturalistic world-view. ‘Naturalism’ is often used interchangeably with ‘physicalism’ and ‘materialism’, though each of these hints at specific doctrines. Thus, ‘physicalism’ suggests that, among the natural sciences, there is something especially fundamental about physics. And ‘materialism’ has connotations going back to eighteenth- and nineteenth-century views of the world as essentially made of material particles whose behaviour is fundamental for explaining everything else. Moreover, ‘naturalism’ with respect to some realm is the view that everything that exists in that realm, and all those events that take place in it, are empirically accessible features of the world. Sometimes naturalism is taken to mean that some realm can in principle be understood by appeal to the laws and theories of the natural sciences, but one must be careful here, since naturalism does not by itself imply anything about reduction. Historically, ‘natural’ contrasts with ‘supernatural’; but in the context of contemporary philosophy of mind, where debate centres on the possibility of explaining mental phenomena as part of the natural order, it is the non-natural rather than the supernatural that is the contrasting notion.
The naturalist holds that mental phenomena can be so explained, while the opponent of naturalism thinks otherwise, though it is not intended that opposition to naturalism commits one to anything supernatural. Nonetheless, one should not take naturalism about a realm as committing one to any sort of reductive explanation of that realm, whereas there are such commitments in the use of ‘physicalism’ and ‘materialism’.

If psychoanalytic explanation gives the impression that it imputes bare, meaning-free causality, this results from attending to only half the story, and from misunderstanding what psychoanalysis means when it talks of psychological mechanisms. The economic descriptions of mental processes that psychoanalysis provides are never replacements for, but themselves always presuppose, characterizations of mental processes in terms of meaning. Mechanisms in psychoanalytic contexts are simply processes whose operation cannot be reconstructed as instances of rational functioning (they are what we might by preference call mental activities, by contrast with actions). Psychoanalytic explanation’s postulation of mechanisms should not therefore be regarded as a regrettable and expugnable incursion of scientism into Freud’s thought, as is often claimed.

Suppose, alternatively, that hermeneuticists such as Habermas ~ who follow Dilthey in regarding psychoanalysis as an interpretative practice to which the concepts of the physical sciences are alien ~ are correct in thinking that connections of meaning are misrepresented through being described as causal. Again, this does not impact negatively on psychoanalytic explanation since, as just argued, psychoanalytic explanations nowhere impute meaning-free causation. Nothing is lost for psychoanalytic explanation if causation is excised from the psychological picture.

The conclusion must be that psychoanalytic explanation is at bottom indifferent to the general meaning-cause issue. The core of psychoanalysis consists in its tracing of meaningful connections, with no greater or lesser commitment to causality than is involved in ordinary psychology. (This helps to set the stage ~ pending appropriate clinical validation ~ for psychoanalysis to claim as much truth for its explanations as ordinary psychology.) The true key to psychoanalytic explanation is, rather, its attribution of special kinds of mental states, not recognized in ordinary psychology, whose relations to one another do not have the form of patterns of inference or practical reasoning.

In the light of this, it is easy to understand why some compatibilists and hermeneuticists assert that their own view of psychology is uniquely consistent with psychoanalytic explanation. Compatibilists are right to think that, in order to provide for psychoanalytic explanation, it is necessary to allow mental connections that are unlike the connections of reasons to the actions that they rationalize, or to the beliefs that they support; and that, in outlining such connections, psychoanalytic explanation must outstrip the resources of ordinary psychology, which does attempt to force as much as possible into the mould of practical reasoning. Hermeneuticists, for their part, are right to think that it would be futile to postulate connections which were nominally psychological but not characterized in terms of meaning, and that psychoanalytic explanation does not respond to the ‘paradox’ of irrationality by abandoning the search for meaningful connections.

Compatibilists are, however, wrong to think that non-rational but meaningful connections require the psychological order to be conceived as a causal order. The hermeneuticist is free to postulate psychological connections that are determined by meaning but not by rationality: it is coherent to suppose that there are connections of meaning that are not bona fide rational connections, without these being causal. Meaningfulness is a broader concept than rationality. (Sometimes this thought has been expressed, though not helpfully, by saying that Freud discovered the existence of ‘neurotic rationality’.) Although an assumption of rationality is doubtless necessary to make sense of behaviour in general, it does not need to be brought into play in making sense of each instance of behaviour. Hermeneuticists, in turn, are wrong to think that the compatibilist view of psychology as causal signals a confusion of meaning with causality, or that it must lead the compatibilist to deny that there is any qualitative difference between rational and irrational psychological connections.

All the same, the last two decades have seen extraordinary changes in the psychology of the sciences. ‘Cognitive psychology’, which focuses on higher mental processes like reasoning, decision making, problem solving and language processing, has become perhaps the dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour.

The relationship between physical behaviour and agential behaviour is controversial. On some views, all ‘actions’ are identical to physical changes in the subject’s body; however, some kinds of physical behaviour, such as ‘reflexes’, are uncontroversially not kinds of agential behaviour. On other views, a subject’s action must involve some physical change, but is not identical to it.

Both physical and agential behaviour can be understood in the widest sense. Anything a person can do ~ even calculating in his head, for instance ~ could be regarded as agential behaviour. Likewise, any physical change in a person’s body ~ even the firing of a certain neuron, for instance ~ could be regarded as physical behaviour.

Of course, to claim that the mind is ‘nothing over and above’ such-and-such kinds of behaviour, construed as either physical or agential behaviour in the widest sense, is not necessarily to be a behaviourist. The theory that the mind is a series of volitional acts ~ a view close to the idealist position of George Berkeley (1685-1753) ~ and the theory that the mind is a certain configuration of neuronal events, while both controversial, are not forms of behaviourism.

Standing alongside such accounts, and providing the background for anomalous monism, is ‘monism’: the view that there is only one kind of substance underlying all objects, changes and processes. It is generally used in contrast to ‘dualism’, though one can also think of it as denying what might be called ‘pluralism’ ~ a view, often associated with Aristotle, which claims that there are a number of substances. Against the background of modern science, monism is usually understood to be a form of ‘materialism’ or ‘physicalism’: that is, the fundamental properties of matter and energy as described by physics are counted the only properties there are.

The position in the philosophy of mind known as ‘anomalous monism’ has its historical origins in the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804), but it is universally identified with the American philosopher Donald Davidson (1917-2003), and it was he who coined the term. Davidson maintained that one can be a monist ~ indeed, a physicalist ~ about the fundamental nature of things and events, while also asserting that there can be no full ‘reduction’ of the mental to the physical. (This is sometimes expressed by saying that there can be an ontological, though not a conceptual, reduction.) Davidson thinks that complete knowledge of the brain and any related neurophysiological systems that support the mind’s activities would not in itself be knowledge of such things as belief, desire, experience and the rest of the mentalistic idiom. This is not because he thinks that the mind is somehow a separate kind of existence: anomalous monism is, after all, monism. Rather, it is because the nature of mental phenomena rules out a priori that there will be law-like regularities connecting mental phenomena and physical events in the brain; and, without such laws, there is no real hope of explaining the mental via the physical structure of the brain.

All in all, one central goal of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies explored in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts involved in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on its crucial concepts. If concepts of the simple (observational) sort were internal physical structures that had, in this sense, an information-carrying function ~ a function they acquired during learning ~ then instances of these structure types would have a content that (like a belief) could be either true or false. Any information-carrying structure carries all kinds of information: if, for example, it carries the information that ‘A’, it must also carry the information that ‘A or B’. The process of learning is supposed to be a process in which a single piece of this information is selected for special treatment, thereby becoming the semantic content ~ the meaning ~ of subsequent tokens of that structure type. Just as we conventionally give artefacts and instruments information-providing functions, thereby making their flashing lights, and so forth, representations of the conditions in the world in which we are interested, so learning converts neural states that carry information ~ ‘pointer readings’ in the head, so to speak ~ into structures that have the function of providing some vital piece of the information they carry. When this process occurs in the ordinary course of learning, the functions in question develop naturally. They do not, as do the functions of instruments and artefacts, depend on the intentions, beliefs and attitudes of users. We do not give brain structures these functions.
They acquire them by themselves, in some natural way, either (in the case of the senses) from their selectional history or (in the case of thought) from individual learning. The result is a network of internal representations that have (in different ways) the power of representation ~ of experience and belief.

It is important to understand that this approach to ‘thought’ and ‘belief’, the approach that conceives of them as forms of internal representation, is not a version of ‘functionalism’ ~ at least, not if this widely held theory is understood, as it often is, as a theory that identifies mental properties with functional properties. For functional properties have to do with the way something does, in fact, behave ~ with its syndrome of typical causes and effects. An informational model of belief, in order to account for misrepresentation, needs something more than a structure that provides information. It needs something that has that as its function ~ something that is supposed to provide information. As Sober (1985) comments, for an account of the mind we need functionalism with the function ~ the ‘teleological’ ~ put back into it.

Philosophers need not (and typically do not) assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts and explanatory strategies that scientists are using ~ accounts that are more explicit, systematic and philosophically sophisticated than the often rather rough-and-ready accounts offered by the scientists themselves.

Cognitive psychology is in many ways a curious and puzzling science. Many of the theories put forward by cognitive psychologists make use of a family of ‘intentional’ concepts ~ like believing that ‘p’, desiring that ‘q’, and representing ‘r’ ~ which do not appear in the physical or biological sciences, and these intentional concepts play a crucial role in many of the explanations offered by these theories.

In discussions of intentionality the paradigm cases are usually beliefs, or sometimes beliefs and desires; however, the biologically most basic forms of intentionality are in perception and in intentional action. These also have certain formal features which are not common to beliefs and desires. Consider a case of perceptual experience. Suppose that I see my hand in front of my face. What are the conditions of satisfaction? First, the perceptual experience of the hand in front of my face has as its condition of satisfaction that there be a hand in front of my face. Thus far, the condition of satisfaction is the same as that of the belief that there is a hand in front of my face. But with perceptual experience there is this difference: In order that the intentional content be satisfied, the fact that there is a hand in front of my face must cause the very experience whose intentional content is that there is a hand in front of my face. This has the consequence that perception has a special kind of condition of satisfaction that we might describe as ‘causally self-referential’. The full conditions of satisfaction of the perceptual experience are, first, that there be a hand in front of my face, and second, that the fact that there is a hand in front of my face is causing the very experience of whose conditions of satisfaction it forms a part. We can represent this in the form S(p), as follows:

Visual experience (that there is a hand in front of my face,
and the fact that there is a hand in front of my face
is causing this very experience.)

Furthermore, visual experiences have a kind of conscious immediacy not characteristic of beliefs and desires. A person can literally be said to have beliefs and desires while sound asleep. But one can only have visual experiences of a non-pathological kind when one is fully awake and conscious, because the visual experiences are themselves forms of consciousness.

People’s decisions and actions are explained by appeal to their beliefs and desires. Perceptual processes are said to result in mental states which represent (or sometimes misrepresent) one or another aspect of the cognitive agent’s environment. Other theorists have offered analogous accounts, differing in detail; perhaps the most crucial idea in all of this is the one about representations. There is perhaps a sense in which what happens at, say, the level of the retina constitutes, as a result of the processes of stimulation, some kind of representation of what produces that stimulation, and thus some kind of representation of the objects of perception. Or so it may seem, if one attempts to describe the relation between the structure and character of the object of perception and the structure and nature of the retinal processes. One might say that the nature of that relation is such as to provide information about the part of the world perceived, in the sense of ‘information’ presupposed when one says that the rings in the cross-section of a tree’s trunk provide information about its age. This is because there is an appropriate causal relation between the two things, one which makes it impossible for the correlation to be a matter of chance. Subsequent processing can then be thought of as carried out on what is provided in the representations in question.

However, if there are such representations, they are not representations for the perceiver. It is the thought that perception involves representations of that kind which produced the old, and now largely discredited, philosophical theories of perception which suggested that perception is a matter, primarily, of an apprehension of mental states of some kind, e.g., sense-data, which are representatives of perceptual objects, either by being caused by them or by being in some way constitutive of them. Also, if it is said that the idea of information so invoked indicates that there is a sense in which the processes of stimulation can be said to have content ~ a non-conceptual content, distinct from the content provided by the subsumption of what is perceived under concepts ~ it must be emphasised that that content is not the perceiver’s. What the information-processing story provides is, at best, a more adequate characterization than was previously available of the causal processes involved. That may be important, but more should not be claimed for it than there is. If in perception in a given case one can be said to have an experience as of an object of a certain shape and kind related to another object, it is because there is presupposed in that perception the possession of concepts of objects and, more particularly, a concept of space and of how objects occupy space.

Nonetheless, although cognitive psychologists occasionally say a bit about the nature of intentional concepts and the explanations that exploit them, their comments are rarely systematic or philosophically illuminating. Thus, it is hardly surprising that many philosophers have seen cognitive psychology as fertile ground for the sort of careful descriptive work that is done in the philosophy of biology and the philosophy of physics. The American philosopher of mind Jerry Alan Fodor’s (1935- ) The Language of Thought (1975) was a pioneering study in the genre. Philosophers have also done important and widely discussed work in what might be called the ‘descriptive philosophy of cognitive psychology’.

These philosophical accounts of cognitive theories and the concepts they invoke are generally much more explicit than the accounts provided by psychologists, and they inevitably smooth over some of the rough edges of scientists’ actual practice. But if the account they give of cognitive theories diverges significantly from the theories that psychologists actually produce, then the philosophers have just got it wrong. There is, however, a very different way in which philosophers have approached cognitive psychology. Rather than merely trying to characterize what cognitive psychology is actually doing, some philosophers try to say what it should and should not be doing. Their goal is not to explicate scientific practice, but to criticize and improve it. The most common target of this critical approach is the use of intentional concepts in cognitive psychology. Intentional notions have been criticized on various grounds. Two oft-cited considerations are that they fail to supervene on the physiology of the cognitive agent, and that they cannot be ‘naturalized’.

Perhaps the easiest way to make the point about supervenience is to use a thought experiment of the sort originally proposed by the American philosopher Hilary Putnam (1926- ). Suppose that in some distant corner of the universe there is a planet, Twin Earth, which is very similar to our own planet. On Twin Earth there is a person who is an atom-for-atom replica of J.F. Kennedy. Now President J.F. Kennedy, who lives on Earth, believes that the Rev. Martin Luther King Jr. was born in Tennessee. If you asked him, ‘Was the Rev. Martin Luther King Jr. born in Tennessee?’, in all probability he would answer yes. Twin-Kennedy would respond in the same way, but not because he believes that our Rev. Martin Luther King Jr. was born in Tennessee. His beliefs are about Twin-Luther, and Twin-Luther was certainly not born in Tennessee; thus, J.F. Kennedy’s belief is true while Twin-Kennedy’s is false. What all this is supposed to show is that two people can share all their physiological properties without sharing all their intentional properties. To turn this into a problem for cognitive psychology, two additional premises are needed. The first is that cognitive psychology attempts to explain behaviour by appeal to people’s intentional properties. The second is that psychological explanations should not appeal to properties that fail to supervene on an organism’s physiology. (Variations on this theme can be found in Fodor (1987).)

The thesis that the mental is supervenient on the physical ~ roughly, the claim that the mental character of a thing is wholly determined by its physical nature ~ has played a key role in the formulation of some influential positions on the ‘mind-body’ problem, in particular versions of non-reductive ‘physicalism’. It has figured in arguments about the mental, and has been used to devise solutions to some central problems about the mind ~ for example, the problem of mental causation.

The idea of supervenience first appeared in ethics: there could be no difference in a moral respect without a difference in some descriptive, or non-moral, respect. Evidently, the idea generalizes so as to apply to any two sets of properties (to secure greater generality it is more convenient to speak of properties than of predicates). The American philosopher Donald Davidson (1970) was perhaps the first to introduce supervenience into discussions of the mind-body problem, when he wrote ‘ . . . mental characteristics are in some sense dependent, or supervenient, on physical characteristics. Such supervenience might be taken to mean that there cannot be two events alike in all physical respects but differing in some mental respect, or that an object cannot alter in some mental respect without altering in some physical respect’. Following the British philosopher George Edward Moore (1873-1958) and the English moral philosopher Richard Mervyn Hare (1919-2003), from whom he avowedly borrowed the idea of supervenience, Davidson went on to assert that supervenience in this sense is consistent with the irreducibility of supervenient properties to their ‘subvenient’, or ‘base’, properties: ‘Dependence or supervenience of this kind does not entail reducibility through law or definition . . . ‘

Thus, three ideas have come to be closely associated with supervenience: (1) property covariation (if two things are indiscernible in base properties, they must be indiscernible in supervenient properties); (2) dependence (supervenient properties are dependent on, or determined by, their subvenient bases); and (3) non-reducibility (the property covariation and dependence involved in supervenience can obtain even if supervenient properties are not reducible to their base properties).
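The covariation idea in (1) is standardly made precise as ‘strong supervenience’. The following formulation is a sketch drawn from the literature rather than from the text above; A is the supervenient family of properties and B the base family:

```latex
% A strongly supervenes on B iff, necessarily, whenever anything has a
% property F in A, it has some property G in B such that, necessarily,
% whatever has G has F.
A \text{ strongly supervenes on } B \;\equiv\;
\Box\,\forall x\,\forall F{\in}A\,
\bigl[\, Fx \rightarrow \exists G{\in}B\,\bigl( Gx \,\wedge\, \Box\,\forall y\,( Gy \rightarrow Fy )\bigr) \,\bigr]
```

Global supervenience, by contrast, compares whole worlds: any two worlds indiscernible in their B-properties are indiscernible in their A-properties.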

Nonetheless, supervenience of the mental ~ in the form of strong supervenience, or at least global supervenience ~ is arguably a minimum commitment of physicalism. But can we think of the thesis of mind-body supervenience itself as a theory of the mind-body relation ~ that is, as a solution to the mind-body problem?

It would seem that any serious theory addressing the mind-body problem must say something illuminating about the nature of psychophysical dependence, or about why, contrary to common belief, there is no such dependence. Consider ethics again: the intuitionist will say that the supervenience, and the dependence, is a brute fact discerned through moral intuition, while the prescriptivist will attribute the supervenience to some form of consistency requirement on the language of evaluation and prescription. Distinct from both of these is mereological supervenience, namely the supervenience of the properties of a whole on the properties and relations of its parts. What all this shows is that there is no single type of dependence relation common to all cases of supervenience: supervenience holds in different cases for different reasons, and does not represent a type of dependence that can be put alongside causal dependence, meaning dependence, mereological dependence, and so forth.

There is, however, a promising strategy for turning the supervenience thesis into a more substantive theory of mind: to explicate mind-body supervenience as a special case of mereological supervenience ~ that is, the dependence of the properties of a whole on the properties and relations characterizing its proper parts. Mereological dependence does seem to be a special form of dependence that is metaphysically sui generis and highly important. If one takes this approach, one would have to explain psychological properties as macroproperties of a whole organism that covary, in appropriate ways, with its microproperties, i.e., the way its constituent organs, tissues, and so forth are organized and function. This more specific supervenience thesis may be a serious theory of the mind-body relation that can compete with the classic options in the field.

On this topic, as with many topics in philosophy, there is a distinction to be made between (1) certain vague, partially inchoate, pre-theoretic ideas and beliefs about the matter at hand, and (2) certain more precise, more explicit doctrines or theses that are taken to articulate or explicate those pre-theoretic ideas and beliefs. There are various potential ways of precisifying our pre-theoretic conception of a physicalist or materialist account of mentality, and the question of how best to do so is itself a matter for ongoing dialectical, philosophical inquiry.

The view concerns, in the first instance at least, the question of how we, as ordinary human beings, in fact go about ascribing beliefs to one another. The idea is that we do this on the basis of our knowledge of a common-sense theory of psychology. The theory is not held to consist in a collection of grandmotherly sayings, such as ‘once bitten, twice shy’. Rather, it consists in a body of generalizations relating psychological states to each other, to input from the environment, and to actions. Such generalizations might include the following:

(1) (x)(p) (If x fears that p, then x desires that not-p.)

(2) (x)(p) (If x hopes that p and x discovers that p, then x is pleased that p.)

(3) (x)(p)(q) (If x believes that p and x believes that if p, then q, then, barring confusion, distraction and so forth, x believes that q.)

(4) (x)(p)(q) (If x desires that p and x believes that if q, then p, and x is able to bring it about that q, then, barring conflicting desires or preferred strategies, x brings it about that q.)

All of these generalizations should be understood as containing ceteris paribus clauses. (1), for example, applies most of the time, but not invariably. Adventurous types often enjoy the adrenal thrill produced by fear. This leads them, on occasion, to desire the very state of affairs that frightens them. Analogously with (3): a subject who believes that ‘p’ and believes that if ‘p’, then ‘q’ would typically infer that ‘q’. But certain atypical circumstances may intervene: subjects may become confused or distracted, or they may find the prospect of ‘q’ so awful that they dare not allow themselves to believe it. The ceteris paribus nature of these generalizations is not usually considered to be problematic, since atypical circumstances are, of course, atypical, and the generalizations are applicable most of the time.

We apply this psychological theory to make inferences about people’s beliefs, desires and so forth. If, for example, we know that Julie believes that if she is to be at the airport at four, then she should get a taxi at half past two, and she believes that she is to be at the airport at four, then we will predict, using (3), that Julie will infer that she should get a taxi at half past two.
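The Julie prediction is simply an instantiation of generalization (3); spelled out, with the schematic letters filled in from the example:

```latex
% Instantiating (3), where
%   p = Julie is to be at the airport at four
%   q = Julie should get a taxi at half past two
\begin{align*}
&\text{Julie believes that } p\\
&\text{Julie believes that if } p\text{, then } q\\
&\therefore\ \text{(barring confusion, distraction and so forth) Julie believes that } q
\end{align*}
```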

The Theory-Theory, as it is called, is an empirical theory addressing the question of our actual knowledge of beliefs. Taken in its purest form, it addresses both first- and third-person knowledge: we know about our own beliefs and those of others in the same way, by application of common-sense psychological theory in both cases. However, it is not very plausible to hold that we always ~ or, indeed, usually ~ know our own beliefs by way of theoretical inference. Since it is an empirical theory concerning one of our cognitive abilities, the Theory-Theory is open to psychological scrutiny. Various issues arise concerning the hypothesized common-sense psychological theory: for example, we need to know whether it is known consciously or unconsciously. Research has revealed that three-year-old children are reasonably good at inferring the beliefs of others on the basis of actions, and at predicting actions on the basis of beliefs that others are known to possess. However, there is one area in which three-year-olds’ psychological reasoning differs markedly from adults’. Tests of one sort, ‘False Belief Tests’, reveal largely consistent results. Three-year-old subjects are witness to a scenario such as the following. A child, Billy, sees his mother place some biscuits in a biscuit tin. Billy then goes out to play, and, unseen by him, his mother removes the biscuits from the tin and places them in a jar, which is then hidden in a cupboard. When asked, ‘Where will Billy look for the biscuits?’, the majority of three-year-olds answer that Billy will look in the jar in the cupboard ~ where the biscuits actually are, rather than where Billy saw them being placed. On being asked, ‘Where does Billy think the biscuits are?’, they again tend to answer ‘in the jar in the cupboard’, rather than ‘in the tin’. Three-year-olds thus appear to have some difficulty attributing false beliefs to others in cases in which it would be natural for adults to do so.
However, it does not appear that three-year-olds lack the idea of false belief in general, nor that they struggle with attributing false beliefs in other kinds of situation. For example, they have little trouble distinguishing between dreams and play, on the one hand, and true beliefs or claims on the other. By the age of four and a half years, most children pass the False Belief Tests fairly consistently. There is as yet no generally accepted theory of why three-year-olds fare so badly with the false belief tests, nor of what this reveals about their conception of beliefs.

Recently some philosophers and psychologists have put forward what they take to be an alternative to the Theory-Theory: the Simulation Theory, on which we work out what others believe by simulating their situation in imagination. However, the challenge does not end there. We need also to make appropriate adjustments for differences between one’s own psychological states and those of the other, and it is implausible to think that simulation alone will provide what is needed in every such case.

The behavioural manifestations of beliefs, desires, and intentions are enormously varied. When we move away from perceptual beliefs, the links with behaviour become intricate and indirect: the expectations I form on the basis of a particular belief reflect the influence of numerous other opinions, and my actions are determined by the totality of my preferences and all those opinions which have a bearing upon them. The causal processes that produce my beliefs reflect my opinions about those processes, about their reliability and the interference to which they are subject. Thus, behaviour justifies the ascription of a particular belief only by helping to warrant a more inclusive interpretation of the overall cognitive position of the individual in question. Psychological description, like translation, is a ‘holistic’ business. And once this is taken into account, it is all the less likely that a common physical trait will be found which grounds all instances of the same belief. The ways in which all of our propositional attitudes interact in the production of behaviour reinforce the anomalous character of the mental and render any sort of reduction of the mental to the physical impossible. This is not meant as a practical procedure; but generalizing the point, so that interpretation and not merely translation is at issue, has made this notion central to accounts of the mind.

The Simulation Theory and the Theory-Theory are two, as many think competing, views of the nature of our common-sense, propositional-attitude explanations of action. For example, when we say that our neighbour cut down his apple tree because he believed that it was ruining his patio and did not want it ruined, we are offering a typical common-sense explanation of his action in terms of his beliefs and desires. But, even though wholly familiar, it is not clear what kind of explanation is at issue. On one view, the attribution of beliefs and desires is the application to actions of a theory which, in its informal way, functions very much like theoretical explanations in science. This is known as the ‘theory-theory’ of everyday psychological explanation. In contrast, it has been argued that our propositional-attitude attributions are not theoretical claims so much as reports of a kind of ‘simulation’. On such a ‘simulation theory’ of the matter, we decide what our neighbour will do (and thereby why he did what he did) by imagining ourselves in his position and deciding what we would do.

The Simulation Theorist should probably concede that simulations need to be backed up by independent means of discovering the psychological states of others. But they need not concede that these independent means take the form of a theory. Rather, they might suggest that we can get by with some rules of thumb, or with straightforward inductive reasoning of a general kind.

A second and related difficulty with the Simulation Theory concerns our capacity to attribute beliefs that are too alien to be easily simulated: beliefs of small children, or of psychotics, or bizarre beliefs deeply suppressed in the unconscious. The small child refuses to sleep in the dark: he is afraid that the Wicked Witch will steal him away. No matter how many adjustments we make, it may be hard for mature adults to get their own psychological processes, even in pretend play, to mimic the production of such a belief. For the Theory-Theory, alien beliefs are not particularly problematic: so long as they fit into the basic generalizations of the theory, they will be inferable from the evidence. Thus, the Theory-Theory can account better than the Simulation Theory for our ability to discover bizarre and alien beliefs.

The Theory-Theory and the Simulation Theory are not the only proposals about knowledge of belief. A third view has its origins in the Austrian philosopher Ludwig Wittgenstein (1889-1951). On this view, both the Theory-Theory and the Simulation Theory attribute too much psychologizing to our common-sense psychology. Knowledge of other minds is, according to this alternative picture, more observational in nature. Beliefs, desires and feelings are made manifest to us in the speech and other actions of those with whom we share a language and way of life. When someone says, ‘It’s going to rain’, and takes his umbrella from his bag, it is immediately clear to us that he believes it is going to rain. In order to gain this knowledge we neither theorize nor simulate: we just perceive. Of course, this is not straightforward visual perception of the sort that we use to see the umbrella. But it is like visual perception in that it provides immediate and non-inferential awareness of its objects. We might call this the ‘Observational Theory’.

The Observational Theory does not seem to accord very well with the fact that we frequently do have to indulge in a fair amount of psychologizing to find out what others believe. It is clear that any given action might be the upshot of any number of different psychological attitudes. This applies even in the simplest cases. For example, someone might say ‘It’s going to rain’ and take out his umbrella because his friend is suspended from a dark balloon near a beehive, with the intention of stealing honey: the idea is to make the bees believe that it is going to rain, and therefore believe that the balloon is a dark cloud, and therefore pay no attention to it, and so fail to notice the dangling friend. Given this sort of possibility, the observer would surely be rash immediately to judge that the agent believes that it is going to rain. Rather, they would need to determine ~ perhaps by theory, perhaps by simulation ~ which of the various clusters of mental states that might have led to the action actually did so. This would involve bringing in further knowledge of the agent, the background circumstances and so forth. It is hard to see how the sort of complex mental process involved in this kind of psychological reflection could be assimilated to any kind of observation.

The attributions of intentionality that depend on optimality or rationality are interpretations of the phenomena ~ a ‘heuristic overlay’ (1969), describing an inescapably idealized ‘real pattern’. Like such abstractions as centres of gravity and parallelograms of force, the beliefs and desires posited by the intentional stance have no independent and concrete existence, and since this is the case, there would be no deeper facts that could settle the issue if ~ most importantly ~ rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity. Willard Van Orman Quine (1908-2000), the most influential American philosopher of the latter half of the 20th century, held a thesis of the indeterminacy of radical translation that carries over into the thesis of the indeterminacy of radical interpretation of mental states and processes.

The fact that cases of radical indeterminacy, though possible in principle, are vanishingly unlikely ever to confront us offers little comfort: the idea remains deeply counter-intuitive to many philosophers, who have hankered for more ‘realistic’ doctrines. There are two different strands of ‘realism’ that this view attempts to undermine:

(1) Realism about the entities purportedly described by our everyday mentalistic discourse ~ what I dubbed folk-psychology ~ such as beliefs, desires, pains, the self.

(2) Realism about content itself ~ the idea that there have to be events or entities that really have intentionality (as opposed to events and entities that behave only as if they had intentionality).

As regards (1), consider such questions as what fatigue is, and which bodily states or events it is identical with. This is a confusion that calls for diplomacy, not philosophical discovery: the choice between an ‘eliminative materialism’ and an ‘identity theory’ of fatigue is not a matter of which ‘ism’ is right, but of which way of speaking is most apt to wean us from these misbegotten features of our conceptual scheme.

As for tenet (2), the attack has been more indirect. One can see the demand for content realism as an instance of a common philosophical mistake: philosophers oftentimes manoeuvre themselves into a position from which they can see only two alternatives: infinite regress versus some sort of ‘intrinsic’ foundation ~ a prime mover of one sort or another. For instance, it has seemed obvious that for some things to be valuable as means, other things must be intrinsically valuable ~ ends in themselves ~ otherwise we would be stuck with a vicious regress of things valuable only as means. Similarly, it has seemed obvious that although some intentionality is ‘derived’ (the ‘aboutness’ of the pencil marks composing a shopping list is derived from the intentions of the person whose list it is), unless some intentionality is ‘original’ and underived, there could be no derived intentionality.

There is always another alternative, namely a finite regress that peters out without marked foundations or thresholds or essences. Consider a parallel paradox: every mammal has a mammal for a mother ~ but this implies an infinite genealogy of mammals, which cannot be the case. The solution is not to search for an essence of mammalhood that would permit us in principle to identify the Prime Mammal, but rather to tolerate a finite regress that connects mammals to their non-mammalian ancestors by a sequence that can only be partitioned arbitrarily. The reality of today’s mammals is secure without foundations.

The best instance of this theme is the idea that the way to explain the miraculous-seeming powers of an intelligent intentional system is to decompose it into hierarchically structured teams of ever more stupid intentional systems, ultimately discharging all intelligence-debts in a fabric of stupid mechanisms. Lycan (1981) has called this view ‘homuncular functionalism’. One may be tempted to ask: are the subpersonal components ‘real’ intentional systems? At what point in the diminution of prowess, as we descend to simple neurons, does ‘real’ intentionality disappear? Don’t ask. The reasons for regarding an individual neuron (or a thermostat) as an intentional system are unimpressive, but not zero, and the security of our intentional attributions at the highest levels does not depend on identifying a lowest level of real intentionality. Another exploitation of the same idea is found in Elbow Room (1984): at what point in evolutionary history did real reason-appreciators, real selves, make their appearance? Don’t ask ~ for the same reason. Here is yet another, more fundamental version: at what point in the early days of evolution can we speak of genuine function, genuine selection-for, and not mere fortuitous preservation of entities that happen to have some self-replicative capacity? Don’t ask. Many of the more interesting and important features of our world have emerged, gradually, from a world that initially lacked them ~ function, intentionality, consciousness, morality, value ~ and it is a fool’s errand to try to identify a first or most-simple instance of the ‘real’ thing. For the same reason, it is a mistake to suppose that there must exist answers to all the questions our system of cognitive-content attribution permits us to ask. Tom says he has an older brother in Toronto and that he is an only child. What does he really believe? Could he really believe that he had a brother if he also believed he was an only child? What is the ‘real’ content of his mental state?
There is no reason to suppose there is a principled answer.

The most sweeping conclusion drawn from this theory of content is that the large and well-regarded literature on ‘propositional attitudes’ (especially the debates over wide versus narrow content) is largely a disciplinary artefact of no long-term importance whatever, except perhaps as history’s most slowly unwinding unintended reductio ad absurdum. For the most part, the disagreements explored in that literature cannot even be given an initial expression unless one takes on the assumption of strong realism about content, and its constant companion, the idea of a ‘language of thought’: a system of mental representation that is decomposable into elements rather like terms, and larger elements rather like sentences. The illusion that this is plausible, or even inevitable, is fostered by the philosophers’ normal tactic of working from examples of ‘believing-that-p’ that focus attention on mental states that are directly or indirectly language-infected, such as believing that the shortest spy is a spy, or believing that snow is white. (Do polar bears believe that snow is white? In the way we do?) There are such states ~ in language-using human beings ~ but they are not exemplary or foundational states of belief; needing a term for them, we may call them ‘opinions’. Opinions play a large, perhaps even decisive, role in our concept of a person, but they are not paradigms of the sort of cognitive element to which one can assign content in the first instance. If one starts, as one should, with the cognitive states and events occurring in non-human animals, and uses these as the foundation on which to build theories of human cognition, the language-infected states are more readily seen to be derived, less directly implicated in the explanation of behaviour, and the chief but illicit source of plausibility of the doctrine of a language of thought.
Postulating a language of thought is in any event a postponement of the central problems of content ascription, not a necessary first step.

We turn now to causal theories in epistemology: what makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what causes the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some of the proposed causal criteria for knowledge and justification are worth considering.

Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right sort of causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations: this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization. Proponents of this sort of criterion have accordingly usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.

For example, the forthright Australian materialist David Malet Armstrong (1973) proposed that a belief of the form ‘This (perceived) object is F’ is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is ‘F’; that is, the fact that the object is ‘F’ contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘x’ and perceived object ‘y’, if ‘x’ has those properties and believes that ‘y’ is ‘F’, then ‘y’ is ‘F’. Dretske (1981) offers a rather similar account in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is ‘F’.
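Armstrong’s condition can be put schematically as follows. The notation is introduced here only for illustration and is not Armstrong’s own: ‘P(x)’ abbreviates ‘x has the relevant law-governed properties of the believer’, and ‘B_x’ is a belief operator for subject ‘x’.

```latex
% Armstrong-style complete-reliability condition (illustrative notation)
\forall x\,\forall y\;\bigl[\,(P(x)\;\land\;B_x(Fy))\;\rightarrow\;Fy\,\bigr]
```

That is, as a matter of natural law, a subject with the properties P believes a perceived object to be F only if the object really is F.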

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief’s being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise ~ to think, say, that things that look brownish-tinted to you are really some other colour, and that brownish-tinted things look some other colour to you. If you fail to heed this reason you have for thinking that your colour perception is awry, and believe of a thing that looks brownish-tinted to you that it is brownish-tinted, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing’s being brownish-tinted in such a way as to be a completely reliable sign (or to carry the information) that the thing is brownish-tinted.

One could fend off this sort of counter-example by simply adding to the causal condition the requirement that the belief be justified. But this enriched condition would still be insufficient. Suppose, for example, that in an experiment you are given a drug that in nearly all people (but not in you, as it happens) causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug, but then says, ‘No, wait a minute, the pill you took was just a placebo’. Suppose further that this last thing the experimenter tells you is false. Her telling you this gives you justification for believing of a thing that looks brownish-tinted to you that it is brownish-tinted, but a fact about this justification that is unknown to you (that the experimenter’s last statement was false) makes it the case that your true belief is not knowledge, even though it satisfies Armstrong’s causal condition.

Goldman (1986) has proposed an importantly different sort of causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is both ‘globally’ and ‘locally’ reliable. Global reliability is a matter of whether the process’s propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counter-factual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.

Goldman requires global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but not for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counter-factual situation in which it is false.

The theory of relevant alternatives is best understood as an attempt to accommodate two opposing strands in our thinking about knowledge. The first is that knowledge is an absolute concept. On one interpretation, this means that the justification or evidence one must have in order to know a proposition ‘p’ must be sufficient to eliminate all the alternatives to ‘p’ (where an alternative to a proposition ‘p’ is a proposition incompatible with ‘p’).

The second strand is that we know many things; for knowledge requires only the elimination of the relevant alternatives. So the relevant alternatives view preserves both strands in our thinking about knowledge. Knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.

The relevant alternatives account of knowledge can be motivated by noting that other concepts exhibit the same logical structure. Two examples are the concept ‘flat’ and the concept ‘empty’. Both appear to be absolute concepts ~ a space is empty only if it does not contain anything, and a surface is flat only if it does not have any bumps. However, the absolute character of these concepts is relative to a standard. In the case of ‘flat’, there is a standard for what counts as a bump, and in the case of ‘empty’, there is a standard for what counts as a thing. We would not deny that a table is flat because a microscope reveals irregularities in its surface. Nor would we deny that a warehouse is empty because it contains particles of dust. To be flat is to be free of any relevant bumps. To be empty is to be devoid of all relevant things. Analogously, the relevant alternatives theory says that to know a proposition is to have evidence that eliminates all relevant alternatives.

Some philosophers have argued that the relevant alternatives theory of knowledge entails the falsity of the principle that the set of propositions known by ‘S’ is closed under known (by ‘S’) entailment, although others have disputed this. The principle affirms the following conditional, the closure principle:

If ‘S’ knows ‘p’ and ‘S’ knows that ‘p’ entails ‘q’, then ‘S’ knows ‘q’.

According to the theory of relevant alternatives, we can know a proposition ‘p’ without knowing that some (non-relevant) alternative to ‘p’ is false. But once an alternative ‘h’ to ‘p’ is incompatible with ‘p’, then ‘p’ will trivially entail not-h. So it will be possible to know some proposition without knowing another proposition trivially entailed by it. For example, we can know that we see a zebra without knowing that it is not the case that we see a cleverly disguised mule (on the assumption that ‘we see a cleverly disguised mule’ is not a relevant alternative). This involves a violation of the closure principle. This is an interesting consequence of the theory, because the closure principle seems to many to be quite intuitive. In fact, we can view sceptical arguments as employing the closure principle as a premise, along with the premise that we do not know that the alternatives raised by the sceptic are false. From these two premises it follows (on the assumption that we know that the propositions we believe entail the falsity of the sceptical alternatives) that we do not know the propositions we believe. For example, it follows from the closure principle and the fact that we do not know that we do not see a cleverly disguised mule, that we do not know that we see a zebra. We can view the relevant alternatives theory as replying to the sceptical arguments by denying the closure principle.
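The sceptical argument just described can be displayed as a short derivation. The symbols are introduced only for illustration: ‘K’ abbreviates ‘S knows that’, ‘z’ the proposition that we see a zebra, and ‘m’ the proposition that we see a cleverly disguised mule.

```latex
% Sceptical argument from closure (illustrative formalization)
\begin{align*}
&\text{1. } \bigl(Kz \land K(z \rightarrow \lnot m)\bigr) \rightarrow K\lnot m
    && \text{closure principle} \\
&\text{2. } K(z \rightarrow \lnot m)
    && \text{we know the entailment} \\
&\text{3. } \lnot K\lnot m
    && \text{sceptical premise} \\
&\text{4. } \therefore\ \lnot Kz
    && \text{from 1--3}
\end{align*}
```

The relevant alternatives theorist blocks the argument by rejecting step 1, the closure principle itself.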

What makes an alternative relevant? What standard do the alternatives raised by the sceptic fail to meet? These questions are notoriously difficult to answer with any degree of precision or generality. This difficulty has led critics to dismiss the theory as hopelessly obscure. The problem can be illustrated through an example. Suppose Smith sees a barn and believes that he does, on the basis of very good perceptual evidence. When is the alternative that Smith sees a papier-mâché replica relevant? If there are many such replicas in the immediate area, then this alternative is relevant. In these circumstances, Smith fails to know that he sees a barn unless he knows that it is not the case that he sees a barn replica. Where no such replicas exist, this alternative will not be relevant: Smith can know that he sees a barn without knowing that he does not see a barn replica.

This suggests that a criterion of relevance will be something like probability conditional on Smith’s evidence and certain features of the circumstances. But which circumstances in particular do we count? Consider a case where we want the result that the barn-replica alternative is clearly relevant, e.g., a case where there are numerous barn replicas in the area. Does the suggested criterion give us the result we wanted? The probability that Smith sees a barn replica, given his evidence and his location in an area where there are many barn replicas, is high. However, that same probability conditional on his evidence and his particular visual orientation toward a real barn is quite low. We want the probability to be conditional on features of the circumstances like the former but not on features of the circumstances like the latter. But how do we capture the difference in a general formulation?

How significant a problem is this for the theory of relevant alternatives? This depends on how we construe the theory. If the theory is supposed to provide us with an analysis of knowledge, then the lack of precise criteria of relevance surely constitutes a serious problem. However, if the theory is viewed instead as providing a response to sceptical arguments, it can be argued that the difficulty has little significance for the overall success of the theory.

What justifies the acceptance of a theory? Although particular versions of empiricism have met many criticisms, it remains attractive to look for an answer in broadly empiricist terms: in terms, that is, of support by the available evidence. How else could the objectivity of science be defended except by showing that its conclusions (and in particular its theoretical conclusions ~ the theories it presently accepts) are somehow legitimately based on agreed observational and experimental evidence? But, as is well known, theories in general pose a problem for empiricism.

Allow the empiricist the assumption that there are observational statements whose truth-values can be inter-subjectively agreed. Philosophers have tended to identify experiments with their observed results, and these with the testing of theory, assuming that observation provides an open window for the mind onto a world of natural facts and regularities, that the main problem for the scientist is to establish the uniqueness of a theoretical interpretation, and that experiments merely enable the production of (true) observation statements ~ shared, replicable observations being the basis for scientific consensus about an objective reality. Yet it is clear that most scientific claims are genuinely theoretical: neither themselves observational nor derivable deductively from observation statements (nor from inductive generalizations thereof). Accepting that there are phenomena to which we have more or less direct access, theories seem, at least when taken literally, to tell us about what is going on ‘underneath’ the observable, directly accessible phenomena in order to produce those phenomena. The accounts given by such theories of this trans-empirical reality, simply because it is trans-empirical, can never be established by data, nor even by the ‘natural’ inductive generalizations of our data. No amount of evidence about tracks in cloud chambers and the like can deductively establish that those tracks are produced by ‘trans-observational’ electrons.

One response would, of course, be to invoke some strict empiricist account of meaning, insisting that talk of electrons and the like is in fact just shorthand for talk of tracks in cloud chambers and the like. This account, however, has few, if any, current defenders. If it is rejected, the empiricist must acknowledge that, if we take any presently accepted theory, then there must be alternatives ~ different theories (indefinitely many of them) ~ which treat the evidence equally well, assuming that the only evidential criterion is the entailment of the correct observational results.

All the same, there is an easy general result as well: assuming that a theory is any deductively closed set of sentences; assuming, with the empiricist, that the language in which these sentences are expressed has two sorts of predicates (observational and theoretical); and assuming, finally, that the entailment of the evidence is the only constraint on empirical adequacy ~ then there are always indefinitely many different theories which are equally empirically adequate. Consider the restriction of ‘T’ to quantifier-free sentences expressed purely in the observational vocabulary; then any conservative extension of that restricted set of T’s consequences back into the full vocabulary is a ‘theory’ co-empirically adequate with ~ entailing the same singular observational statements as ~ ‘T’. Unless very special conditions apply (conditions which do not apply to any real scientific theory), some of the empirically equivalent theories will formally contradict ‘T’. (A similar straightforward demonstration works for the currently more fashionable account of theories as sets of models.)
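The notion of empirical equivalence at work here can be stated compactly. The notation is mine, introduced for illustration: ‘O’ is the set of singular observational sentences of the language, and ‘⊨’ is entailment.

```latex
% Empirical equivalence of theories (illustrative definition)
T \approx_{\mathrm{emp}} T' \;\iff\; \forall o \in O\;\bigl(\,T \vDash o \;\Leftrightarrow\; T' \vDash o\,\bigr)
```

The general result then says that for any real scientific theory T there are indefinitely many theories T′ with T ≈_emp T′, some of which formally contradict T in the theoretical vocabulary.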

How can an empiricist, who rejects the claim that two empirically equivalent theories are thereby fully equivalent, explain why the particular theory ‘T’ that is, as a matter of fact, accepted in science is preferred to these other possible theories with the same observational content? Obviously the answer must be: by bringing in further criteria beyond that of simply having the right observational consequences. Simplicity, coherence with other accepted theories, and unity are favourite contenders. There are notorious problems in formulating these criteria at all precisely: but suppose, for present purposes, that we have a strong enough intuitive grasp to operate usefully with them. What is the status of such further criteria?

The empiricist-instrumentalist position, recently adopted and sharply argued by van Fraassen, is that these further criteria are ‘pragmatic’ ~ that is, they involve essential reference to us as ‘theory-users’. We happen to prefer, for our own purposes, simple, coherent, unified theories ~ but this is only a reflection of our preferences. It would be a mistake to think of those features as supplying extra reasons to believe in the truth (or approximate truth) of the theory that has them. Van Fraassen’s account differs from some standard instrumentalist-empiricist accounts in recognizing the extra content of a theory (beyond its directly observational content) as genuinely declarative, as consisting of true-or-false assertions about the hidden structure of the world. His account accepts that the extra content can neither be eliminated by defining theoretical notions in observational terms, nor be properly regarded as only apparently declarative but in fact a mere codification schema. For van Fraassen, if a theory says that there are electrons, then the theory should be taken as meaning what it says ~ and this without any positivist reinterpretation of the meaning that might make ‘There are electrons’ mere shorthand for some complicated set of statements about tracks in cloud chambers or the like.

Consider the case of contradictory but empirically equivalent theories, such as the theory T1 that ‘there are electrons’ and the theory T2 that ‘all the observable phenomena are as if there are electrons, but there are not’. Van Fraassen’s account entails that each has a truth-value, at most one of which is ‘true’. Science may accept T1 rather than T2, but this need not mean that it is rational to believe that T1 is more likely to be true (or otherwise appropriately connected with nature). The only belief involved in the acceptance of a theory is belief in the theory’s empirical adequacy ~ a belief that T1 and T2 share. To accept the quantum theory, for example, entails believing that it ‘saves the phenomena’ ~ all the (relevant) phenomena, but only the phenomena. Theorists do ‘say more’ than can be checked empirically even in principle. What more they say may indeed be true, but acceptance of the theory does not involve belief in the truth of the ‘more’ that theorists say.

Preferences between theories that are empirically equivalent are accounted for, because acceptance involves more than belief: as well as this epistemic dimension, acceptance also has a pragmatic dimension. Simplicity, (relative) freedom from ad hoc assumptions, ‘unity’, and the like are genuine virtues that can supply good reasons to accept one theory rather than another; but they are pragmatic virtues, reflecting the way we happen to like to do science, rather than anything about the world. It is a mistake to think that they do more: on this view, the rationality of science and of scientific practice can be defended without belief in the truth (or approximate truth) of accepted theories. Here van Fraassen’s account conflicts with what many others see as very strong intuitions.

The most generally accepted account of the internalism/externalism distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer’s cognitive perspective, beyond his ken. Epistemologists often use the distinction between internalist and externalist theories of epistemic justification without any very precise explication of it.

The externalism/internalism distinction has been applied mainly to theories of epistemic justification. It has also been applied, in a closely related way, to accounts of knowledge, and, in a rather different way, to accounts of belief and thought content. The internalist requirement of cognitive accessibility can be interpreted in at least two ways: a strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focussing his attention appropriately, without the need for any change of position, new information, and so forth. Though the phrase ‘cognitively accessible’ suggests the weak interpretation, the main intuitive motivation for internalism ~ viz., the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true ~ would require the strong interpretation.

Perhaps the clearest example of an internalist position would be a ‘foundationalist’ view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements, or only the capacity to become aware of them, is required. Similarly, a ‘coherentist’ view could also be internalist, if both the beliefs or other states with which a justified belief is required to cohere and the coherence relations themselves are reflectively accessible.

It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of ‘reliabilism’, whose main requirement for justification is, roughly, that the belief be produced in a way, or via a process, that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

Two general lines of argument are commonly advanced in favour of justificatory externalism. The first starts from the allegedly common-sensical premise that knowledge can be unproblematically ascribed to relatively unsophisticated adults, to young children, and even to higher animals. It is then argued that such ascriptions would be untenable on the standard internalist accounts of epistemic justification (assuming that epistemic justification is a necessary condition for knowledge), since the beliefs and inferences involved in such accounts are too complicated and sophisticated to be plausibly ascribed to such subjects. Thus only an externalist view can make sense of such common-sense ascriptions, and this, on the presumption that common sense is correct, constitutes a strong argument in favour of externalism. An internalist may respond by challenging the initial premise, arguing that such ascriptions of knowledge are exaggerated, while perhaps at the same time claiming that the cognitive situation of at least some of the subjects in question is less restricted than the argument claims. A quite different response would be to reject the assumption that epistemic justification is a necessary condition for knowledge, perhaps by adopting an externalist account of knowledge rather than of justification, as discussed below.

The second general line of argument for externalism points out that internalist views have conspicuously failed to provide defensible, non-sceptical solutions to the classical problems of epistemology. In striking contrast, such problems are in general easily solvable on an externalist view. Thus, if we assume both that the various relevant forms of scepticism are false and that the failure of internalist views so far is unlikely to be remedied in the future, we have good reason to think that some externalist view is true. Obviously the cogency of this argument depends on the plausibility of the two assumptions just noted. An internalist can reply, first, that it is not obvious that internalist epistemology is doomed to failure; the explanation for the present lack of success may be the extreme difficulty of the problems in question. Secondly, it can be argued that most or even all of the appeal of the assumption that the various forms of scepticism are false depends essentially on the intuitive conviction that we have within our grasp reasons for thinking that the various beliefs questioned by the sceptic are true ~ a conviction that the proponent of this argument must of course reject.

The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true, or, at the very least, that such a reason be available to him. Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-examples to externalism. The first of these challenges the necessity of the externalist conditions for justification by appealing to examples of beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very non-standard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. Cases of this general sort can be constructed in which any of the standard externalist conditions, e.g., that the belief be the result of a reliable process, fail to be satisfied. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.

Perhaps the most interesting reply to this sort of counter-example, on behalf of reliabilism specifically, holds that the reliability of a cognitive process is to be assessed in ‘normal’ possible worlds, i.e., in possible worlds that are the way our world is common-sensically believed to be, rather than in the world which actually contains the belief being judged. Since the cognitive processes employed in the Cartesian demon case are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious further issue is whether there is an adequate rationale for this construal of reliabilism, so that the reply is not merely ad hoc.

The second, correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. Here the most widely discussed examples have to do with possible occult cognitive capacities like clairvoyance. Considering the point in application once again to reliabilism specifically, the claim is that a reliable clairvoyant who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and hence not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to ‘bite the bullet’ and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while still stopping far short of a fully internalist view. But while there is little doubt that such modified versions of externalism can indeed handle particular cases well enough to avoid clear intuitive implausibility, the issue is whether there will always be equally problematic cases that they cannot handle, and whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to rejecting.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, this further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the two premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the second can be (and normally will be) purely external. Here the internalist will respond that this hybrid view is of no help at all in meeting the objection that the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view obviously has to reject the justified true belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is a result of a reliable process (and, perhaps, further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept in epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the common-sense conviction that animals, young children and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction even exists) that such individuals are epistemically justified in their beliefs. It is also less vulnerable to internalist counter-examples of the sort discussed above, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms ‘internalism’ and ‘externalism’ has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual’s mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors. Here too a view that appeals to both internal and external elements is standardly classified as an externalist view.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, and so forth, that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment, e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, and so forth, not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of these factors, which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

To have a word or a picture, or any other object in one’s mind seems to be one thing, but to understand it is quite another. A major target of the later Ludwig Wittgenstein (1889-1951) is the suggestion that this understanding is achieved by a further presence, so that words might be understood if they are accompanied by ideas, for example. Wittgenstein insists that the extra presence merely raises the same kind of problem again. The better suggestion is that understanding is to be thought of as possession of a technique, or skill, and this is the point of the slogan that ‘meaning is use’. The idea is congenial to ‘pragmatism’ and hostile to ineffable and incommunicable understandings.

Meaning is whatever it is that makes what would otherwise be mere sounds and inscriptions into instruments of communication and understanding. The philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world. Contributions to this study include the theory of speech acts and the investigation of communication, the relationship between words and ideas, and that between words and the world.

The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by the German mathematician and philosopher of mathematics Gottlob Frege (1848-1925), then was developed in a distinctive way by the early Wittgenstein, and is a leading idea of the American philosopher Donald Herbert Davidson (1917-2003). The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it.

The conception of meaning as truth-conditions need not and should not be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentences in the language, and must have some idea of the significance of the various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions. It is this claim, and its attendant problems, which will be the concern of what follows.

The meaning of a complex expression is a function of the meanings of its constituents. This is indeed just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms, such as proper names, indexicals, and certain pronouns, this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates. For an extremely simple, but nevertheless structured, language, we can state the contributions various expressions make to truth-conditions as follows:

A1: The referent of ‘London’ is London.

A2: The referent of ‘Paris’ is Paris.

A3: Any sentence of the form ‘a is beautiful’ is true if and only if the referent of ‘a’ is beautiful.

A4: Any sentence of the form ‘a is larger than b’ is true if and only if the referent of ‘a’ is larger than the referent of ‘b’.

A5: Any sentence of the form ‘it is not the case that A’ is true if and only if it is not the case that ‘A’ is true.

A6: Any sentence of the form ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true.

The principles A1-A6 form a simple theory of truth for a fragment of English. From this theory it is possible to derive these consequences: that ‘Paris is beautiful’ is true if and only if Paris is beautiful (from A2 and A3); that ‘London is larger than Paris and it is not the case that London is beautiful’ is true if and only if London is larger than Paris and it is not the case that London is beautiful (from A1-A5); and in general, for any sentence ‘A’ of this simple language, we can derive something of the form ‘‘A’ is true if and only if A’.
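The compositional character of A1-A6 can be made vivid by implementing the fragment directly. The following is an illustrative sketch, not anything from the text: the model facts (which objects are beautiful, which are larger than which) are invented assumptions, and the string-matching parse is far too crude for real language, but each clause of the evaluator corresponds to one axiom.

```python
# A sketch of the fragment truth theory A1-A6 as a recursive evaluator.
# The 'world' below is an illustrative assumption, not part of the theory.

REFERENT = {'London': 'London', 'Paris': 'Paris'}   # A1, A2
BEAUTIFUL = {'Paris'}                               # model for A3
LARGER_THAN = {('London', 'Paris')}                 # model for A4

def true(sentence: str) -> bool:
    """Compute truth compositionally, one clause per axiom."""
    # A6: 'A and B' is true iff 'A' is true and 'B' is true.
    if ' and ' in sentence:
        left, right = sentence.split(' and ', 1)
        return true(left) and true(right)
    # A5: 'it is not the case that A' is true iff 'A' is not true.
    prefix = 'it is not the case that '
    if sentence.startswith(prefix):
        return not true(sentence[len(prefix):])
    # A4: 'a is larger than b'.
    if ' is larger than ' in sentence:
        a, b = sentence.split(' is larger than ')
        return (REFERENT[a], REFERENT[b]) in LARGER_THAN
    # A3: 'a is beautiful'.
    if sentence.endswith(' is beautiful'):
        a = sentence[:-len(' is beautiful')]
        return REFERENT[a] in BEAUTIFUL
    raise ValueError('not a sentence of the fragment')

print(true('Paris is beautiful'))  # prints True
print(true('London is larger than Paris and '
           'it is not the case that London is beautiful'))  # prints True
```

Running the evaluator on the two example sentences reproduces, in this invented model, the consequences derived above from A2-A3 and from A1-A5 respectively.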

Yet a theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. Consider the axiom: ‘London’ refers to the city in which there was a huge fire in 1666.

This is a true statement about the reference of ‘London’. It is a consequence of a theory which substitutes this axiom for A1 in our simple truth theory that ‘London is beautiful’ is true if and only if the city in which there was a huge fire in 1666 is beautiful. A subject can understand the name ‘London’ without knowing that last-mentioned truth condition, so the replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the constraints on the acceptability of axioms in a way which does not presuppose any prior, truth-conditional conception of meaning.

Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity. Second, the theorist must offer an account of what it is for a person’s language to be truly describable by a semantic theory containing a given semantic axiom.

Take the charge of triviality first. In more detail, it would run thus: since the content of a claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions. But this gives us no substantive account of understanding whatsoever; something other than grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the ‘redundancy theory of truth’, a theory also known as ‘minimalism’ or the ‘deflationary’ view of truth, which began with the German mathematician and philosopher of mathematics Gottlob Frege (1848-1925) and the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30). The essential claim is that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points that ‘it is true that p’ says no more nor less than ‘p’ (hence redundancy), and that in less direct contexts, such as ‘everything he said was true’ or ‘all logical consequences of truths are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said or the kinds of propositions that follow from true propositions. The latter claim, for example, can be rendered as ‘(∀p, q)((p & (p ➞ q)) ➞ q)’, where there is no use of a notion of truth.

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’ or ‘truth is a norm governing discourse’. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth. But, perhaps, we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that ‘p’, then ‘p’; discourse is to be regulated by the principle that it is wrong to assert ‘p’ when not-p.

To return to the charge of triviality: the minimal theory states that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition ‘p’, it is true that p if and only if p. Many different philosophical theories of truth accept the equivalence principle; the distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is widely accepted, both by opponents and supporters of truth conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth conditional account of meaning. If the claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by Ramsey, Ayer, the later Wittgenstein, Quine, Strawson, and Horwich, and, confusingly and inconsistently, by Frege himself.

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence. But in fact, it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as:

‘London is beautiful’ is true if and only if

London is beautiful

can be explained are precisely A1 and A3. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth-condition it does; but that is very implausible: it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’. The idea that facts about the reference of particular words can be explanatory of facts about the truth conditions of sentences containing them in no way requires any naturalistic or any other kind of reduction of the notion of reference. Nor is the idea incompatible with the plausible point that singular reference can be attributed at all only to something which is capable of combining with other expressions to form complete sentences. That still leaves room for facts about an expression’s having the particular reference it does to be partially explanatory of the particular truth condition possessed by a given sentence containing it. The minimal theory thus treats as definitional or stipulative something which is in fact open to explanation. What makes this explanation possible is that there is a general notion of truth which has, among the many links which hold it in place, systematic connections with the semantic values of subsentential expressions.

A second problem with the minimal theory is that it seems impossible to formulate it without at some point relying implicitly on features and principles involving truth which go beyond anything countenanced by the minimal theory. If the minimal theory treats truth as a predicate of anything linguistic, be it utterances, types-in-a-language, or whatever, then the equivalence schemata will not cover all cases, but only those in the theorist’s own language. Some account has to be given of truth for sentences of other languages. Speaking of the truth of language-independent propositions or thoughts will only postpone, not avoid, this issue, since at some point principles have to be stated associating these language-independent entities with sentences of particular languages. The natural response for the defender of the minimal theory is to say that if a sentence ‘S’ of a foreign language is best translated by our sentence ‘p’, then the foreign sentence ‘S’ is true if and only if ‘p’. Now the best translation of a sentence must preserve the concepts expressed in the sentence, and constraints involving a general notion of truth are pervasive in any plausible philosophical theory of concepts. It is, for example, a condition of adequacy on an individuating account of any concept that there exist what may be called a ‘Determination Theory’ for that account, that is, a specification of how the account contributes to fixing the semantic value of that concept. The notion of a concept’s semantic value is the notion of something which makes a certain contribution to the truth conditions of thoughts in which the concept occurs. But this is to presuppose, rather than to elucidate, a general notion of truth.

It is also plausible that there are general constraints on the form of such Determination Theories, constraints which involve truth and which are not derivable from the minimalist’s conception. Suppose that concepts are individuated by their possession conditions. As discussed further below, a possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment, including his perceptual, social and linguistic relations.

An alternative approach addresses the question by starting from the idea that a concept is individuated by the condition which must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept ‘and’ is individuated by this condition: it is the unique concept ‘C’ to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premises ‘A’ and ‘B’, ‘ACB’ can be inferred; and from any premise ‘ACB’, each of ‘A’ and ‘B’ can be inferred. A relatively observational concept such as ‘round’ can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those judgements that are. A statement which individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
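The inference forms just cited for ‘and’ are simply the standard introduction and elimination rules for conjunction. As an illustrative aside (not part of the original text), they can be checked in the Lean theorem prover, writing the concept ‘C’ as Lean’s ‘∧’:

```lean
-- The three forms that individuate the concept of conjunction:
-- from A and B infer ACB; from ACB infer A; from ACB infer B.
example (A B : Prop) (hA : A) (hB : B) : A ∧ B := ⟨hA, hB⟩  -- introduction
example (A B : Prop) (h : A ∧ B) : A := h.left              -- elimination, left
example (A B : Prop) (h : A ∧ B) : B := h.right             -- elimination, right
```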

A possession condition for a particular concept may actually make use of that concept. The possession condition for ‘and’ does not. We can also expect to use relatively observational concepts in specifying the kinds of experience which have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question as such within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account which was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker’s mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: it is not possible to master any one of the members of the family without mastering the others. Two of the families which plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of numerical quantifiers, ‘there are 0 so-and-so’s’, ‘there is 1 so-and-so’, . . . ; and the family consisting of the concepts ‘belief’ and ‘desire’. Such families have come to be known as ‘local holisms’. A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such condition involving the thinker, C1 and C2. For such possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated: the possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.

A possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker’s perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject’s environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued from intuitions about particular examples that even though the thinker’s non-environmental properties and relations remain constant, the conceptual content of his mental state can vary if the thinker’s social environment is varied. A possession condition which properly individuates such a concept must take into account the thinker’s social relations, in particular his linguistic relations.

Once again, some general principles involving truth can, as Horwich has emphasized, be derived from the equivalence schemata using minimal logical apparatus. Consider, for instance, the principle that ‘Paris is beautiful and London is beautiful’ is true if and only if ‘Paris is beautiful’ is true and ‘London is beautiful’ is true. But no logical manipulations of the equivalence schemata will allow the derivation of the general constraint governing possession conditions, truth and the assignment of semantic values. That constraint can of course be regarded as a further elaboration of the idea that truth is one of the aims of judgement.
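The derivation Horwich points to can be sketched as follows, writing T(⌜p⌝) informally for ‘‘p’ is true’ (the corner-quote notation is an expository device, not something used in the text above):

```latex
% Two instances of the equivalence schema:
%   T("Paris is beautiful")  <-> Paris is beautiful
%   T("London is beautiful") <-> London is beautiful
% plus the instance for the conjunction itself:
%   T("Paris is beautiful and London is beautiful")
%     <-> (Paris is beautiful and London is beautiful).
% Chaining these by propositional logic alone gives:
\[
T(\ulcorner p \text{ and } q \urcorner)
  \;\leftrightarrow\; (p \wedge q)
  \;\leftrightarrow\; \bigl(T(\ulcorner p \urcorner) \wedge T(\ulcorner q \urcorner)\bigr)
\]
```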

Turn now to the other question: what is it for a person’s language to be correctly describable by a semantic theory containing a particular axiom, such as the above axiom A6 for conjunction? This question may be addressed at two depths of generality. At the shallower level, the question may take for granted the person’s possession of the concept of conjunction, and be concerned with what has to be true for the axiom to describe his language correctly. At a deeper level, an answer should not sidestep the issue of what it is to possess the concept. The answers to both questions are of great interest.

When a person means conjunction by ‘and’, he is not necessarily capable of formulating the axiom A6 explicitly. Even if he can formulate it, his ability to formulate it is not the causal basis of his capacity to hear sentences containing the word ‘and’ as meaning something involving conjunction. Nor is it the causal basis of his capacity to mean something involving conjunction by sentences he utters containing the word ‘and’. Is it then right to regard a truth theory as part of an unconscious psychological computation, and to regard understanding a sentence as involving a particular way of deriving a theorem from a truth theory at some level of unconscious processing? One problem with this is that it is quite implausible that everyone who speaks the same language has to use the same algorithms for computing the meaning of a sentence. In the work of the past thirteen years, particularly that of Davies and Evans, a conception has evolved according to which an axiom like A6 is true of a person’s language only if there is a common component in the explanation of his understanding of each sentence containing the word ‘and’, a common component which explains why each such sentence is understood as meaning something involving conjunction. This conception can also be elaborated in computational terms: for the axiom A6 to be true of a person’s language is for the unconscious mechanisms which produce understanding to draw on the information that a sentence of the form ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true. Many different algorithms may equally draw on this information. The psychological reality of a semantic theory is thus, in Marr’s (1982) classification, something intermediate between his level one, the function computed, and his level two, the algorithm by which it is computed. This conception of the psychological reality of a semantic theory can also be applied to syntactic and phonological theories.

Theories in semantics, syntax and phonology are not themselves required to specify the particular algorithm which the language user employs. The identification of the particular computational methods employed is a task for psychology. But semantic, syntactic and phonological theories are answerable to psychological data, and are potentially refutable by them, for these linguistic theories do make commitments to the information drawn upon by mechanisms in the language user.

This answer to the question of what it is for an axiom to be true of a person’s language clearly takes for granted the person’s possession of the concept expressed by the word treated by the axiom. In the example of axiom A6, the information drawn upon is that sentences of the form ‘A and B’ are true if and only if ‘A’ is true and ‘B’ is true. This informational content employs, as it has to if it is to be adequate, the concept of conjunction used in stating the meaning of sentences containing ‘and’. So the computational answer we have returned needs further elaboration if we do not want to take for granted possession of the concepts expressed in the language. It is at this point that the theory of linguistic understanding has to draw upon a theory of the conditions for possessing a given concept. It is plausible that the concept of conjunction is individuated by the following condition for a thinker to have possession of it:

The concept ‘and’ is that concept ‘C’ to possess which a thinker must meet the following conditions: he finds inferences of the following forms compelling, does not find them compelling as a result of any reasoning, and finds them compelling because they are of these forms:



pCq        pCq        p   q
___        ___        _____
 p          q          pCq



Here ‘p’ and ‘q’ range over complete propositional thoughts, not sentences. When axiom A6 is true of a person’s language, there is a global dovetailing between this possession condition for the concept of conjunction and certain of his practices involving the word ‘and’. For the case of conjunction, the dovetailing involves at least this:

If the possession condition for conjunction entails that a thinker who possesses the concept of conjunction must be willing to make certain transitions involving the thought p&q, and if the thinker’s sentence ‘A’ means that p and his sentence ‘B’ means that q, then the thinker must be willing to make the corresponding linguistic transitions involving the sentence ‘A and B’.

This is only part of what is involved in the required dovetailing. Given what we have already said about the uniform explanation of the understanding of the various occurrences of a given word, we should also add that there is a uniform (unconscious, computational) explanation of the language user’s willingness to make the corresponding transitions involving the sentence ‘A and B’.

This dovetailing account returns an answer to the deeper question because neither the possession condition for conjunction, nor the dovetailing condition which builds upon that possession condition, takes for granted the thinker’s possession of the concept expressed by ‘and’. The dovetailing account for conjunction is an instance of a more general schema, applicable to any concept. The case of conjunction is, of course, exceptionally simple in several respects. Possession conditions for other concepts will speak not just of inferential transitions, but of certain conditions in which beliefs involving the concept in question are accepted or rejected, and the corresponding dovetailing conditions will inherit these features. The dovetailing account has also to be underpinned by a general rationale linking contributions to truth conditions with the particular possession conditions proposed for concepts. It is part of the task of the theory of concepts to supply this in developing Determination Theories for particular concepts.

In some cases, a relatively clear account is possible of how a concept can feature in thoughts which may be true though unverifiable. The possession condition for the quantificational concept ‘all natural numbers’ can in outline run thus: this quantifier is that concept Cx . . . x . . . to possess which the thinker has to find any inference of the form



CxFx
____
 Fn



compelling, where ‘n’ is a concept of a natural number, and does not have to find anything else essentially containing Cx . . . x . . . compelling. The straightforward Determination Theory for this possession condition is one on which such a thought CxFx is true if and only if all natural numbers are ‘F’. That all natural numbers are ‘F’ is a condition which can hold without our being able to establish that it holds. So an axiom of a truth theory which dovetails with this possession condition for universal quantification over the natural numbers will be a component of a realist, non-verificationist theory of truth conditions.
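As an illustrative aside (not from the text), the single inference form above, from CxFx to Fn, is simply universal elimination over the natural numbers, and can be checked in Lean; the choice of 7 below is an arbitrary example of a numerical concept ‘n’:

```lean
-- From the thought CxFx ("all natural numbers are F") one may infer Fn
-- for any particular natural number n; here n is 7, an arbitrary choice.
example (F : Nat → Prop) (h : ∀ n : Nat, F n) : F 7 := h 7
```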

Finally, this response to the deeper question allows us to answer two challenges to the conception of meaning as truth-conditions. First, there was the question left hanging earlier, of how the theorist of truth-conditions is to say what makes one axiom of a semantic theory correct rather than another, when the two axioms assign the same semantic values, but do so by means of different concepts. Since the different concepts will have different possession conditions, the dovetailing accounts, at the deeper level, of what it is for each axiom to be correct for a person’s language will be different accounts. Second, there is the challenge repeatedly made by minimalist theories of truth, to the effect that the theorist of meaning as truth-conditions should give some non-circular account of what it is to understand a sentence, or to be capable of understanding all sentences containing a given constituent. For each expression in a sentence, the corresponding dovetailing account, together with the possession condition, supplies a non-circular account of what it is to understand that expression. The combined accounts for each of the expressions which comprise a given sentence together constitute a non-circular account of what it is to understand the complete sentence. Taken together, they allow the theorist of meaning as truth-conditions fully to meet the challenge.





The first Greek philosophers were interested in theoretical science. They lived in the Ionia region of western Asia Minor and learned from earlier Middle Eastern thinkers, especially those from Babylonia. The Greek philosophers Thales and Anaximander, who lived in the 6th century BC, reached the revolutionary conclusion that the physical world was governed by laws of nature, not by the whims of the gods. Pythagoras, who also lived in the 6th century BC, taught that numbers explained the world and started the study of mathematics in Greece. These philosophers called the universe cosmos, meaning “a beautiful thing,” because it had order based on scientific rules, not mythology. The philosophers therefore put their trust in logic. Their insistence that people produce evidence for their beliefs opened the way to modern science and philosophy.

Philosophers called Sophists upset many people in the 5th century BC by teaching relativism, the belief that there is no universal truth or right and wrong. The most famous Sophist was Protagoras, who said, “Man is the measure of all things.” Socrates (469–399 BC) insisted that the Sophists were wrong and that well-informed people would never do wrong on purpose. His pupil Plato (428–347 BC) became Greece's most famous philosopher. Plato’s complicated works argued that universal truths did exist and that the human soul mattered far more than the body. Plato founded an academy in Athens that remained in operation until AD 529. His pupil Aristotle (384–322 BC) turned away from theoretical philosophy to teach about practical ethics, self-control, logic, and science. Alexander the Great (whom Aristotle once tutored) sent him information on plants and animals encountered on the march to India. Aristotle's works became so influential that they determined the course of Western scientific thought until modern times.

Hellenistic philosophers concentrated on ethics, helping people achieve tranquillity in a period of change when things seemed out of their control. In the 3rd century BC, Epicurus taught that people should not be afraid because everything, including our bodies, consists of microscopic atoms that dissolve painlessly at death. Zeno of Citium, who also lived in the 3rd century BC, founded Stoicism, which taught that life was ruled by fate but that people should still live morally to be in harmony with nature.

The Golden Age of Greek science came in the Hellenistic period, with the greatest advances in mathematics. The geometry theories published by Euclid about 300 BC still endure. Archimedes (287–212 BC) calculated the value of pi (the ratio of the circumference of a circle to its diameter) and invented fluid mechanics. Aristarchus, early in the 3rd century BC, argued that the earth revolved around the sun, while Eratosthenes accurately calculated the circumference of the earth. Also in the 3rd century BC, Ctesibius invented machines operated by air and water pressure; Hero later built a rotating sphere powered by steam. These inventions did not lead to practical uses because the technology did not yet exist to produce the pipes, fittings, and screws needed to build powerful machines. Military technology vaulted ahead with the invention of huge catapults and wheeled towers to batter down city walls. Finally, medical scientists made many discoveries, such as the significance of the pulse and the nervous system.
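The two most celebrated calculations mentioned above can be sketched briefly. The specific figures below are the commonly reported reconstructions, not given in the text itself: Eratosthenes observed that at noon on the summer solstice the sun stood directly overhead at Syene but cast shadows at an angle of about 7.2° at Alexandria, roughly 5,000 stades to the north; Archimedes bounded pi by inscribing and circumscribing 96-sided polygons about a circle.

```latex
% Eratosthenes: the 7.2-degree shadow angle is 1/50 of a full circle,
% so the Alexandria-Syene distance is 1/50 of the earth's circumference C.
\frac{7.2^\circ}{360^\circ} = \frac{1}{50},
\qquad
C \approx 50 \times 5000 \text{ stades} = 250{,}000 \text{ stades}.

% Archimedes' bounds on pi from the 96-sided polygons:
3\tfrac{10}{71} < \pi < 3\tfrac{1}{7}
```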

The temple of Athena Nike is part of the Acropolis in the city of Athens in Greece. Built around 420 BC, it is an excellent example of a classical temple, with Ionic columns and a frieze around the top.

Greek sculpture and architecture originally followed Egyptian and Middle Eastern models. Statues of the Archaic Period stood stiffly, staring forward, and temples were rectangular boxes on platforms with columns. Later architecture retained this basic plan, although buildings became much bigger. The style of sculpture and pottery, however, changed dramatically over time.

Sculpture was always painted in bright colours, but over time its poses became more lively and lifelike. By the Classical period, Greeks were carving statues in motion and in more relaxed stances. Their spirited movement and calm expressions suggested the era's confident energy. Statues of gods could be 12 m (40 ft) high and covered with gold and ivory, such as Phidias's Athena in the Parthenon temple at Athens. The female nude became popular. Praxiteles's naked Aphrodite of Cnidus became so renowned that the king of Bithynia offered to pay off the city's entire public debt if he could have the statue. Cnidus refused.

Hellenistic artists began showing emotion in their statues. A 3rd-century BC sculpture from Pergamum showed a defeated Gaul escaping slavery by stabbing himself after having killed his wife. New subjects departed from traditional notions of beauty by representing drunkards, battered boxers, and elderly people with wrinkles.

Greeks painted pottery and turned an everyday item into art. Mycenaean vases featured lively designs of sea creatures and dizzying whorls. Dark Age potters stopped drawing animals, using only geometric patterns. Artists of the Archaic Age, inspired by Middle Eastern pots, reintroduced beasts and people on Greek vases. From then on, vase painters portrayed mythological and everyday scenes with increasing realism. When they switched in the late 6th century BC from black on red painting to red on black, they could add tiny details that made their pictures come alive.

Greek large-scale architecture began with the Minoan and Mycenaean palaces. These multistory buildings had many rooms arranged around courtyards. Balconies provided space for viewing festivals in the open areas below. Architects in the later city-states designed public structures, such as stoas, government buildings, and temples. Stoas were sheltered walkways placed around the agora to provide shade for conversation. Temples were the largest buildings in the city-state. Athens's Parthenon became Greece's most famous building for its size, many columns, and elaborate sculptural decoration. Hellenistic kings outdid the Athenians by erecting huge temples. The temple of Artemis at Ephesus is one of the Seven Wonders of the World.

Greek literature began in the Mycenaean Period as stories told aloud. Mycenaeans used their script, Linear B, only for accounting. Fighting from 1200 to 1000 BC destroyed Greek knowledge of writing, until they adopted an alphabet from Phoenicia in the 8th century BC to record the exciting poetry of Homer. His epics The Iliad and The Odyssey became Greece's most famous literature. The epics told about the Trojan War and the suffering it caused its heroes and its victims. People loved the stories for their fabulous descriptions of action and for their lessons about the effects of anger and mercy. Hesiod, a poet of the 8th century BC, also became a lasting favourite with his long stories of how the world began and how justice was the proper guide for life in business and farming. Somewhat later, lyric poets spun short tales of passion and emotion that people loved to sing.

Great literary innovations in drama were produced in Athens in the 5th century BC. Aeschylus, Sophocles, and Euripides were the most famous authors of tragedies. They based their plays on myths that presented moral issues, especially the danger of hubris (arrogant overconfidence). Their plots often involved fierce conflicts in families or dangerous interactions between gods and humans. The story of Oedipus, who unknowingly killed his father and married his mother, was one of the most famous tragedies. Plays were performed outdoors at festivals honouring the god Dionysus in a competition sponsored by the city-state. Thousands of people packed the theatre. Each author presented three tragedies, followed by a semicomic play featuring satyrs (mythical half-man, half-animal beings). Actors wore colourful costumes and masks; a chorus danced and sang as part of each play.

Comedies also were performed in these competitions. These plays displayed remarkable freedom of speech in criticizing public policy and making fun of politicians. Their plots could be fantastic, for example having a character fly up on a dung beetle to ask the gods for peace. Their language featured jokes, puns, and obscenities. The most famous comic playwright was Aristophanes, who wrote some comedies with powerful women as main characters. Greek comedy in the 4th century BC changed from political commentary to social satire. Authors such as Menander produced comedies that provided insights into human weaknesses and the complications of everyday life.

Greeks began writing about history in the 5th century BC. Herodotus and Thucydides wrote long works that stressed eyewitness evidence, the multiple causes of events, and judgments about people's motives. Thucydides, followed by Aristotle, developed political science by analysing how states operated. Hellenistic Greek writers made history more personal and began composing biographies.

The enduring legacy of ancient Greece lies in the brilliance of its ideas and the depth of its literature and art. The greatest ancient evidence of their value is that the Romans, who conquered the Greeks in war, were themselves overcome by admiration for Greek cultural achievements. The first Roman literature, for example, was Homer's Odyssey translated into Latin. Greek art, architecture, philosophy, and religion also inspired Roman artists and thinkers, who used them as starting points for developing their own style of work. All educated Romans learned to read and speak Greek and studied Greek models in rhetoric. Stoicism became the most popular Roman philosophy of life.

Arab philosophers, mathematicians, and scientists who became the leading thinkers of medieval times studied the works of Aristotle and other Greek sources intensely. During the European Renaissance from the 14th to the 16th centuries, people from many walks of life read Greek literature and history. Writing in the late 16th and early 17th centuries, English playwright William Shakespeare based dramas on ancient Greek biographies. Modern playwrights still find inspiration for new works in Athenian drama. Many modern public buildings, such as the United States Supreme Court in Washington, DC, imitate Greek temple architecture. Although the founders of the United States rejected Athenian democracy as too direct and radical, they enshrined democratic equality as a basic principle. Ancient Greeks proved that democracy could be the foundation of a stable government. Pride in the cultural accomplishments of ancient Greece contributed to a feeling of ethnic unity when the modern nation of Greece was carved out of the Ottoman Empire. That pride still characterizes modern Greece and makes it a fierce defender of the Hellenic heritage.

Reliance on logic, allegiance to democratic principles, unceasing curiosity about what lies beneath the surface of things, a healthy respect for the dangers of arrogant overconfidence, and a love of beauty in stories and art remain incredibly important components of Western civilization. Ancient Greece contributed all of these things.

Philosophy is a rational and critical inquiry into basic principles. Philosophy is often divided into four main branches: metaphysics, the investigation of ultimate reality; epistemology, the study of the origins, validity, and limits of knowledge; ethics, the study of the nature of morality and judgment; and aesthetics, the study of the nature of beauty in the fine arts.

The School of Athens (1510–1511) by Italian Renaissance painter Raphael adorns a room in the Vatican Palace. The artist depicts several philosophers of classical antiquity and portrays each with a distinctive gesture, conveying complex ideas in simple images. In the centre of the composition, Plato and Aristotle dominate the scene. Plato points upward to the world of ideas, where he believes knowledge lies, whereas Aristotle holds his forearm parallel to the earth, stressing observation of the world around us as the source of understanding. In addition, Raphael draws comparisons with his illustrious contemporaries, giving Plato the face of the Renaissance genius Leonardo da Vinci, and Heraclitus, who rests his elbow on a large marble block, the face of the Renaissance sculptor Michelangelo. Euclid, bending down at the right, resembles the Renaissance architect Bramante. Raphael paints his own portrait on the young man in a black beret at the far right. In accordance with Renaissance ideas, artists belong to the ranks of the learned and the fine arts have the stature and merit of the written word.

As used originally by the ancient Greeks, the term philosophy meant the pursuit of knowledge for its own sake. Philosophy comprised all areas of speculative thought and included the arts, sciences, and religion. As special methods and principles were developed in the various areas of knowledge, each area acquired its own philosophical aspect, giving rise to the philosophy of art, of science, and of religion. The term philosophy is often used popularly to mean a set of basic values and attitudes toward life, nature, and society, as in the phrase “philosophy of life.” Because the lines of distinction between the various areas of knowledge are flexible and subject to change, the definition of the term philosophy remains a subject of controversy.

Western philosophy from Greek antiquity to the present is surveyed in the remainder of this article.

Western philosophy is generally considered to have begun in ancient Greece as speculation about the underlying nature of the physical world. In its earliest form, it was indistinguishable from natural science. The writings of the earliest philosophers no longer exist, except for a few fragments cited by Aristotle in the 4th century BC and by other writers of later times.

The first philosopher of historical record was Thales, who lived in the 6th century BC in Miletus, a metropolis on the Ionian coast of Asia Minor. Thales, who was revered by later generations as one of the Seven Wise Men of Greece, was interested in astronomical, physical, and meteorological phenomena. His scientific investigations led him to speculate that all natural phenomena are different forms of one fundamental substance, which he believed to be water because he thought evaporation and condensation to be universal processes. Anaximander, a disciple of Thales, maintained that the first principle from which all things evolve is an intangible, invisible, infinite substance that he called apeiron, “the boundless.” This substance, he maintained, is eternal and indestructible. Out of its ceaseless motion the more familiar substances, such as warmth, cold, earth, air, and fire, continuously evolve, generating in turn the various objects and organisms that make up the recognizable world.

The third great Ionian philosopher of the 6th century BC, Anaximenes, returned to Thales’s assumption that the primary substance is something familiar and material, but he claimed it to be air rather than water. He believed that the changes things undergo could be explained in terms of rarefaction (thinning) and condensation of air. Thus Anaximenes was the first philosopher to explain differences in quality in terms of differences in size or quantity, a method fundamental to physical science.

Overall, the Ionian school made the initial radical step from mythological to scientific explanation of natural phenomena. It discovered the important scientific principles of the permanence of substance, the natural evolution of the world, and the reduction of quality to quantity.

The 6th-century-BC Greek mathematician and philosopher Pythagoras was not only an influential thinker, but also a complex personality whose doctrines addressed the spiritual as well as the scientific. The following is a collection of short excerpts from studies of Pythagorean teachings and from anecdotes about Pythagoras written by later Greek thinkers, such as the philosopher Aristotle, the historians Herodotus and Diodorus Siculus, and the biographer Diogenes Laërtius.

About 530 BC at Croton (now Crotona), in southern Italy, the philosopher Pythagoras founded a school of philosophy that was more religious and mystical than the Ionian school. It fused the ancient mythological view of the world with the developing interest in scientific explanation. The system of philosophy that became known as Pythagoreanism combined ethical, supernatural, and mathematical beliefs with many ascetic rules, such as obedience and silence and simplicity of dress and possessions. The Pythagoreans taught and practiced a way of life based on the belief that the soul is a prisoner of the body, is released from the body at death, and migrates into a succession of different kinds of animals before reincarnation into a human being. For this reason Pythagoras taught his followers not to eat meat. Pythagoras maintained that the highest purpose of humans should be to purify their souls by cultivating intellectual virtues, refraining from sensual pleasures, and practicing special religious rituals. The Pythagoreans, having discovered the mathematical laws of musical pitch, inferred that planetary motions produce a “music of the spheres,” and developed a “therapy through music” to bring humanity in harmony with the celestial spheres. They identified science with mathematics, maintaining that all things are made up of numbers and geometrical figures. They made important contributions to mathematics, musical theory, and astronomy.
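The “mathematical laws of musical pitch” can be stated concisely. On the standard account (a summary, not spelled out in the text above), the Pythagoreans found that the consonant musical intervals correspond to simple whole-number ratios of vibrating string lengths:

```latex
% Ratios of string lengths producing the basic consonances:
\text{octave} = 2:1, \qquad
\text{perfect fifth} = 3:2, \qquad
\text{perfect fourth} = 4:3
```

It was this discovery, that audible harmony rests on small integers, that encouraged the doctrine that all things are made up of numbers.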

Heraclitus of Ephesus, who was active around 500 BC, continued the search of the Ionians for a primary substance, which he claimed to be fire. He noticed that heat produces changes in matter, and thus anticipated the modern theory of energy. Heraclitus maintained that all things are in a state of continuous flux, that stability is an illusion, and that only change and the law of change, or Logos, are real. The Logos doctrine of Heraclitus, which identified the laws of nature with a divine mind, developed into the pantheistic theology of Stoicism. (Pantheism is the belief that God and material substance are one, and that divinity is present in all things.)

In the 5th century BC, Parmenides founded a school of philosophy at Elea, a Greek colony on the Italian peninsula. Parmenides took a position opposite from that of Heraclitus on the relation between stability and change. Parmenides maintained that the universe, or the state of being, is an indivisible, unchanging, spherical entity and that all reference to change or diversity is self-contradictory. According to Parmenides, all that exists has no beginning and has no end and is not subject to change over time. Nothing, he claimed, can be truly asserted except that “being is.” Zeno of Elea, a disciple of Parmenides, tried to prove the unity of being by arguing that the belief in the reality of change, diversity, and motion leads to logical paradoxes. The paradoxes of Zeno became famous intellectual puzzles that philosophers and logicians of all subsequent ages have tried to solve. The concern of the Eleatics with the problem of logical consistency laid the basis for the development of the science of logic.
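None of Zeno's arguments is quoted in the text; as an illustration, the best-known of them, the “dichotomy,” can be sketched as follows. To traverse any distance, a runner must first cover half of it, then half of the remainder, and so on, so the single motion decomposes into infinitely many sub-tasks:

```latex
% The dichotomy: the distances to be covered form an infinite series,
% yet modern analysis assigns it a finite sum.
\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
  \;=\; \sum_{n=1}^{\infty} \frac{1}{2^{n}} \;=\; 1
```

Zeno took the impossibility of completing infinitely many acts to show that motion is illusory; the modern theory of convergent series dissolves the arithmetic side of the puzzle, though its philosophical side is still debated.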

The speculation about the physical world begun by the Ionians was continued in the 5th century BC by Empedocles and Anaxagoras, who developed a philosophy replacing the Ionian assumption of a single primary substance with an assumption of a plurality of such substances. Empedocles maintained that all things are composed of four irreducible elements: air, water, earth, and fire, which are alternately combined and separated by two opposite forces, love and strife. By that process the world evolves from chaos to form and back to chaos again, in an eternal cycle. Empedocles regarded the eternal cycle as the proper object of religious worship and criticized the popular belief in personal deities, but he failed to explain the way in which the familiar objects of experience could develop out of elements that are totally different from them. Anaxagoras therefore suggested that all things are composed of very small particles, or “seeds,” which exist in infinite variety. To explain the way in which these particles combine to form the objects that constitute the familiar world, Anaxagoras developed a theory of cosmic evolution. He maintained that the active principle of this evolutionary process is a world mind that separates and combines the particles. His concept of elemental particles led to the development of an atomic theory of matter.

It was a natural step from pluralism to atomism, the theory that all matter is composed of tiny, indivisible particles differing only in simple physical properties such as size, shape, and weight. This step was taken in the 4th century BC by Leucippus and his more famous associate Democritus, who is generally credited with the first systematic formulation of an atomic theory of matter. The fundamental assumption of Democritus’s atomic theory is that matter is not infinitely divisible but is composed of numerous indivisible particles that are too small for human senses to detect. His conception of nature was thoroughly materialistic (focussed on physical aspects of matter), explaining all natural phenomena in terms of the number, shape, and size of atoms. He thus reduced the sensory qualities of things, such as warmth, cold, taste, and odour, to quantitative differences among atoms, that is, to differences measurable in amount or size. The higher forms of existence, such as plant and animal life and even human thought, were explained by Democritus in these purely physical terms. He applied his theory to psychology, physiology, theory of knowledge, ethics, and politics, thus presenting the first comprehensive statement of deterministic materialism, a theory claiming that all aspects of existence rigidly follow, or are determined by, physical laws.

Toward the end of the 5th century BC, a group of travelling teachers called Sophists became famous throughout Greece. The Sophists played an important role in developing the Greek city-states from agrarian monarchies into commercial democracies. As Greek industry and commerce expanded, a class of newly rich, economically powerful merchants began to wield political power. Lacking the education of the aristocrats, they sought to prepare themselves for politics and commerce by paying the Sophists for instruction in public speaking, legal argument, and general culture. Although the best of the Sophists made valuable contributions to Greek thought, the group as a whole acquired a reputation for deceit, insincerity, and demagoguery. Thus the word sophistry has come to signify these moral faults.

The famous maxim of Protagoras, one of the leading Sophists, that “man is the measure of all things,” is typical of the philosophical attitude of the Sophist school. Protagoras claimed that individuals have the right to judge all matters for themselves. He denied the existence of an objective (demonstrable and impartial) knowledge, arguing instead that truth is subjective in the sense that different things are true for different people and there is no way to prove that one person’s beliefs are objectively correct and another’s are incorrect. Protagoras asserted that natural science and theology are of little or no value because they have no impact on daily life, and he concluded that ethical rules need be followed only when it is to one’s practical advantage to do so.

Socrates was a Greek philosopher and teacher who lived in Athens, Greece, in the 400s BC. He profoundly altered Western philosophical thought through his influence on his most famous pupil, Plato, who passed on Socrates’s teachings in his writings known as dialogues. Socrates taught that every person has full knowledge of ultimate truth contained within the soul and needs only to be spurred to conscious reflection in order to become aware of it. His criticism of injustice in Athenian society led to his prosecution and a death sentence for allegedly corrupting the youth of Athens.

Perhaps the greatest philosophical personality in history was Socrates, who lived from 469 to 399 BC. Socrates left no written work and is known through the writings of his students, especially those of his most famous pupil, Plato. Socrates maintained a philosophical dialogue with his students until he was condemned to death and took his own life. Unlike the Sophists, Socrates refused to accept payment for his teachings, maintaining that he had no positive knowledge to offer except the awareness of the need for more knowledge. He concluded that, in matters of morality, it is best to seek out genuine knowledge by exposing false pretensions. Ignorance is the only source of evil, he argued, so it is improper to act out of ignorance or to accept moral instruction from those who have not proven their own wisdom. Instead of relying blindly on authority, we should unceasingly question our own beliefs and the beliefs of others in order to seek out genuine wisdom.

Greek philosopher Socrates chose to die rather than cease teaching his philosophy, declaring that “no evil can happen to a good man, either in life or after death.” In 399 BC Socrates was accused and convicted of impiety and moral corruption of the youth of Athens, Greece. At his trial, he presented a justification of his life. The substance of his speech was recorded by Greek philosopher Plato, a disciple of Socrates, in Plato’s Apology.

Socrates taught that every person has full knowledge of ultimate truth contained within the soul and needs only to be spurred to conscious reflection to become aware of it. In Plato’s dialogue ‘Meno’, for example, Socrates guides an untutored slave to the formulation of the Pythagorean theorem, thus demonstrating that such knowledge is innate in the soul, rather than learned from experience. The philosopher’s task, Socrates believed, was to provoke people into thinking for themselves, rather than to teach them anything they did not already know. His contribution to the history of thought was not a systematic doctrine but a method of thinking and a way of life. He stressed the need for analytical examination of the grounds of one’s beliefs, for clear definitions of basic concepts, and for a rational and critical approach to ethical problems.
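The geometric result the slave is led to in the Meno is, strictly, a special case of the theorem: doubling a square. The square built on the diagonal of a given square has exactly twice its area. In modern notation (not, of course, in Plato):

```latex
% A square of side s has a diagonal d satisfying the Pythagorean relation
%   d^2 = s^2 + s^2,
% so the square constructed on that diagonal has area
d^{2} = 2s^{2}
```

The slave arrives at this not by measurement but by counting the half-squares into which the diagonals cut the figure, which is what lets Socrates claim the knowledge was recollected rather than taught.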

Plato, one of the most famous philosophers of ancient Greece, was the first to use the term philosophy, which means “love of wisdom.” Born around 428 BC, Plato investigated a wide range of topics. Chief among his ideas was the theory of forms, which proposed that objects in the physical world merely resemble perfect forms in the ideal world, and that only these perfect forms can be the object of true knowledge. The goal of the philosopher, according to Plato, is to know the perfect forms and to instruct others in that knowledge.

Plato, who lived from about 428 to 347 BC, was a more systematic and positive thinker than Socrates, but his writings, particularly the earlier dialogues, can be regarded as a continuation and elaboration of Socratic insights. Like Socrates, Plato regarded ethics as the highest branch of knowledge; he stressed the intellectual basis of virtue, identifying virtue with wisdom. This view led to the so-called Socratic paradox that, as Socrates asserts in the Protagoras, “no man does evil voluntarily.” (Aristotle later noticed that such a conclusion allows no place for moral responsibility.) Plato also explored the fundamental problems of natural science, political theory, metaphysics, theology, and theory of knowledge, and developed ideas that became permanent elements in Western thought.

Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato’s expression of ideas in the form of dialogues—the dialectical method, used most famously by his teacher Socrates—has led to difficulties in interpreting some of the finer points of his thoughts. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.

The basis of Plato’s philosophy is his theory of Ideas, also known as the doctrine of Forms. The theory of Ideas, which is expressed in many of his dialogues, particularly the Republic and the Parmenides, divides existence into two realms, an “intelligible realm” of perfect, eternal, and invisible Ideas, or Forms, and a “sensible realm” of concrete, familiar objects. Trees, stones, human bodies, and other objects that can be known through the senses are for Plato unreal, shadowy, and imperfect copies of the Ideas of tree, stone, and the human body. He was led to this apparently bizarre conclusion by his high standard of knowledge, which required that all genuine objects of knowledge be described without contradiction. Because all objects perceived by the senses undergo change, an assertion made about such objects at one time will not be true at a later time. According to Plato, these objects are therefore not completely real. Thus, beliefs derived from experience of such objects are vague and unreliable, whereas the principles of mathematics and philosophy, discovered by inner meditation on the Ideas, constitute the only knowledge worthy of the name. In the Republic, Plato described humanity as imprisoned in a cave and mistaking shadows on the wall for reality; he regarded the philosopher as the person who penetrates the world outside the cave of ignorance and achieves a vision of the true reality, the realm of Ideas. Plato’s concept of the Absolute Idea of the Good, which is the highest Form and includes all others, has been a main source of pantheistic and mystical religious doctrines in Western culture.

What is the nature of knowledge? And of ignorance? The 4th-century-BC Greek philosopher Plato used the myth, or allegory, of the cave to illustrate the difference between genuine knowledge and opinion or belief. This distinction is at the heart of one of Plato’s most important works, The Republic. In the first part of the myth of the cave, excerpted here, Plato constructs a dialogue in which he considers the difficult transition from belief based on appearances to true understanding founded in reality.

Plato’s theory of Ideas and his rationalistic view of knowledge formed the foundation for his ethical and social idealism. The realm of eternal Ideas provides the standards or ideals according to which all objects and actions should be judged. The philosophical person, who refrains from sensual pleasures and searches instead for knowledge of abstract principles, finds in these ideals the basis for personal behaviour and social institutions. Personal virtue consists in a harmonious relation among the three parts of the soul: reason, emotion, and desire. Social justice likewise consists in harmony among the classes of society. The ideal state of a sound mind in a sound body requires that the intellect control the desires and passions, as the ideal state of society requires that the wisest individuals rule the pleasure-seeking masses. Truth, beauty, and justice coincide in the Idea of the Good, according to Plato; therefore, art that expresses moral values is the best art. In his rather conservative social program, Plato supported the censorship of art forms that he believed corrupted the young and promoted social injustice.

A student of ancient Greek philosopher Plato, Aristotle shared his teacher’s reverence for human knowledge but revised many of Plato’s ideas by emphasizing methods rooted in observation and experience. Aristotle surveyed and systematized nearly all the extant branches of knowledge and provided the first ordered accounts of biology, psychology, physics, and literary theory. In addition, Aristotle invented the field known as formal logic, pioneered zoology, and addressed virtually every major philosophical problem known during his time. Known to medieval intellectuals as simply “the Philosopher,” Aristotle is possibly the greatest thinker in Western history and, historically, perhaps the single greatest influence on Western intellectual development.

Aristotle, who began study at Plato’s Academy at age 17 in 367 BC, was the most illustrious pupil of Plato, and ranks with his teacher among the most profound and influential thinkers of the Western world. After studying for many years at Plato’s Academy, Aristotle became the tutor of Alexander the Great. He later returned to Athens to found the Lyceum, a school that, like Plato’s Academy, remained for centuries one of the great centres of learning in Greece. In his lectures at the Lyceum, Aristotle defined the basic concepts and principles of many of the sciences, such as logic, biology, physics, and psychology. In founding the science of logic, he developed the theory of deductive inference, a process for drawing conclusions from accepted premises by means of logical reasoning. His theory is exemplified by the syllogism (a deductive argument having two premises and a conclusion) and by a set of rules for scientific method.

In his metaphysical theory, Aristotle criticized Plato’s theory of Forms. Aristotle argued that forms could not exist by themselves but existed only in particular things, which are composed of both form and matter. He understood substances as matter organized by a particular form. Humans, for example, are composed of flesh and blood arranged to shape arms, legs, and the other parts of the body.

Nature, for Aristotle, is an organic system of things whose forms make it possible to arrange them into classes comprising species and genera. Each species, he believed, has a form, purpose, and mode of development in terms of which it can be defined. The aim of science is to define the essential forms, purposes, and modes of development of all species and to arrange them in their natural order in accordance with their complexities of form, the main levels being the inanimate, the vegetative, the animal, and the rational. The soul, for Aristotle, is the form of the body, and humans, whose rational souls are a higher form than the souls of other terrestrial species, are the highest species of perishable things. The heavenly bodies, composed of an imperishable substance, or ether, and moved eternally in perfect circular motion by God, are still higher in the order of nature. This hierarchical classification of nature was adopted by many Christian, Jewish, and Muslim theologians in the Middle Ages as a view of nature consistent with their religious beliefs.

Aristotle’s political and ethical philosophy similarly developed out of a critical examination of Plato’s principles. The standards of personal and social behaviour, according to Aristotle, must be found in the scientific study of the natural tendencies of individuals and societies rather than in a heavenly or abstract realm of pure forms. Less insistent therefore than Plato on a rigorous conformity to absolute principles, Aristotle regarded ethical rules as practical guides to a happy and well-rounded life. His emphasis on happiness, as the active fulfilment of natural capacities, expressed the attitude toward life held by cultivated Greeks of his time. In political theory, Aristotle agreed with Plato that a monarchy ruled by a wise king would be the ideal political structure, but he also recognized that societies differ in their needs and traditions and believed that a limited democracy is usually the best compromise. In his theory of knowledge, Aristotle rejected the Platonic doctrine that knowledge is innate and insisted that it can be acquired only by generalization from experience. He interpreted art as a means of pleasure and intellectual enlightenment rather than an instrument of moral education. His analysis of Greek tragedy has served as a model of literary criticism.

From the 4th century BC to the rise of Christian philosophy in the 4th century AD, Epicureanism, Stoicism, Skepticism, and Neoplatonism were the main philosophical schools in the Western world. Interest in natural science declined steadily during this period, and these schools concerned themselves mainly with ethics and religion. This was also a period of intense intercultural contact, and Western philosophers were influenced by ideas from Buddhism in India, Zoroastrianism in Persia, and Judaism in Palestine.

Greek philosopher Epicurus was a prolific author and creator of an ethical philosophy based upon the achievement of pleasure and happiness. However, he viewed pleasure as the absence of pain and removal of the fear of death. This bust of Epicurus, a Roman copy of a Greek original, is in the Palazzo Nuovo in Rome, Italy.

In 306 BC Epicurus founded a philosophical school in Athens. Because his followers met in the garden of his home, they became known as philosophers of the garden. Epicurus adopted the atomistic physics of Democritus, but he allowed for an element of chance in the physical world by assuming that the atoms sometimes swerve in unpredictable ways, thus providing a physical basis for a belief in free will. The overall aim of Epicurus’s philosophy was to promote happiness by removing the fear of death. He maintained that natural science is important only if it can be applied in making practical decisions that help humans achieve the maximum amount of pleasure, which he identified with gentle motion and the absence of pain. The teachings of Epicurus are preserved mainly in the philosophical poem De Rerum Natura (On the Nature of Things) written by the Roman poet Lucretius in the 1st century BC. Lucretius contributed greatly to the popularity of Epicureanism in Rome.

Emperor Marcus Aurelius ruled the Roman Empire from 161 to 180. His reign was marked by epidemics and frequent wars along the empire’s frontiers. A champion of the poor, Marcus Aurelius reduced the tax burden while founding schools, hospitals, and orphanages. A Stoic, Marcus Aurelius believed that a moral life leads to tranquillity and that moderation and acceptance improve the quality of one’s life.

The Stoic school, founded in Athens about 310 BC by Zeno of Citium, developed out of the earlier movement of the Cynics, who rejected social institutions and material (worldly) values. Stoicism became the most influential school of the Greco-Roman world, producing such remarkable writers and personalities as the Greek slave and philosopher Epictetus in the 1st century AD and the 2nd-century Roman emperor Marcus Aurelius, who was noted for his wisdom and nobility of character. The Stoics taught that one can achieve freedom and tranquillity only by becoming insensitive to material comforts and external fortune and by dedicating oneself to a life of virtue and wisdom. They followed Heraclitus in believing the primary substance to be fire and in worshipping the Logos, which they identified with the energy, law, reason, and providence (divine guidance) found throughout nature. The Stoics argued that nature was a system designed by the divinities and believed that humans should strive to live in accordance with nature. The Stoic doctrine that each person is part of God and that all people form a universal family helped break down national, social, and racial barriers and prepare the way for the spread of Christianity. The Stoic doctrine of natural law, which makes human nature the standard for evaluating laws and social institutions, had an important influence on Roman and later Western law.

Roman emperor and philosopher Marcus Aurelius (AD 121-180) recorded principles of Stoic philosophy in his work, Meditations, which is essentially a notebook of jottings, covering a wide range of subjects. These extracts demonstrate the influence of Stoicism, the predominant philosophy of the time, with its emphasis on the virtues of wisdom, courage, justice, and temperance, which free the soul from passion and desire.

The school of Skepticism, which continued the Sophist criticisms of objective knowledge, dominated Plato’s Academy in the 3rd century BC. The Skeptics discovered, as had Zeno of Elea, that logic is a powerful critical device, capable of destroying any positive philosophical view, and they used it skilfully. Their fundamental assumption was that humanity cannot attain knowledge or wisdom concerning reality, and they therefore challenged the claims of scientists and philosophers to investigate the nature of reality. Like Socrates, the Skeptics insisted that wisdom consisted in awareness of the extent of one’s own ignorance. The Skeptics concluded that the way to happiness lies in a complete suspension of judgment. They believed that suspending judgment about the things of which one has no true knowledge creates tranquillity and fulfilment. As an extreme example of this attitude, it is said that Pyrrho, one of the most noted Skeptics, refused to change direction when approaching the edge of a cliff and had to be diverted by his students to save his life.

During the 1st century AD the Jewish-Hellenistic philosopher Philo of Alexandria combined Greek philosophy, particularly Platonic and Pythagorean ideas, with Judaism in a comprehensive system that anticipated Neoplatonism and Jewish, Christian, and Muslim mysticism. Philo insisted that the nature of God so far transcended (surpassed) human understanding and experience as to be indescribable; he described the natural world as a series of stages of descent from God, terminating in matter as the source of evil. He advocated a religious state, or theocracy, and was one of the first to interpret the Old Testament for the Gentiles.

Neoplatonism, one of the most influential philosophical and religious schools and an important rival of Christianity, was founded in the 3rd century AD by Ammonius Saccus and his more famous disciple Plotinus. Plotinus based his ideas on the mystical and poetic writings of Plato, the Pythagoreans, and Philo. The main function of philosophy, for him, is to prepare individuals for the experience of ecstasy, in which they become one with God. God, or the One, is beyond rational understanding and is the source of all reality. The universe emanates from the One by a mysterious process of overflowing of divine energy in successive levels. The highest levels form the trinity of the One; the Logos, which contains the Platonic Forms; and the World Soul, which gives rise to human souls and natural forces. The farther things emanate from the One, according to Plotinus, the more imperfect and evil they are and the closer they approach the limit of pure matter. The highest goal of life is to purify oneself of dependence on bodily comforts and, through philosophical meditation, to prepare oneself for an ecstatic reunion with the One. Neoplatonism exerted a strong influence on medieval thought.

During the decline of Greco-Roman civilization, Western philosophers turned their attention from the scientific investigation of nature and the search for worldly happiness to the problem of salvation in another and better world. By the 3rd century AD, Christianity had spread to the more educated classes of the Roman Empire. The religious teachings of the Gospels were combined by the Fathers of the Church with many of the philosophical concepts of the Greek and Roman schools. Of particular importance were the First Council of Nicaea in 325 and the Council of Ephesus in 431, which drew upon metaphysical ideas of Aristotle and Plotinus to establish important Christian doctrines about the divinity of Jesus and the nature of the Trinity.

Saint Augustine, born in what is now Souk-Ahras, Algeria, in AD 354, brought a systematic method of philosophy to Christian theology. Augustine taught rhetoric in the ancient cities of Carthage, Rome, and Milan before his Christian baptism in 387. His discussions of the knowledge of truth and of the existence of God drew from the Bible and from the philosophers of ancient Greece. A vigorous advocate of Roman Catholicism, Augustine developed many of his doctrines while attempting to resolve theological conflicts with Donatism and Pelagianism, two heretical Christian movements.

The process of reconciling the Greek emphasis on reason with the emphasis on religious emotion in the teachings of Christ and the apostles found eloquent expression in the writings of Saint Augustine during the late 4th and early 5th centuries. He developed a system of thought that, through subsequent amendments and elaborations, eventually became the authoritative doctrine of Christianity. Largely as a result of his influence, Christian thought was Platonic in spirit until the 13th century, when Aristotelian philosophy became dominant. Augustine argued that religious faith and philosophical understanding are complementary rather than opposed and that one must “believe in order to understand and understand in order to believe.” Like the Neoplatonists, he considered the soul a higher form of existence than the body and taught that knowledge consists in the contemplation of Platonic ideas as abstract notions apart from sensory experience and anything physical or material.

Saint Augustine, an influential theologian and writer in the Western Church, wrote The City of God in the 5th century. In the following excerpt from the final book, or chapter, of this work, Augustine addressed a number of theological issues, including free will and the resurrection of the faithful. He asserted that God did not deprive people of their free will even when they turned to sin because it was preferable to “bring good out of evil than to prevent the evil from coming into existence.” Augustine believed that the human body would rise after death, transformed into “the newness of the spiritual body” and in paradise these new beings would “rest and see, see and love, love and praise.”

Platonic philosophy was combined with the Christian concept of a personal God who created the world and predestined (determined in advance) its course, and with the doctrine of the fall of humanity, requiring the divine incarnation in Christ. Augustine attempted to provide rational understanding of the relation between divine predestination and human freedom, the existence of evil in a world created by a perfect and all-powerful God, and the nature of the Trinity. Late in his life Augustine came to a pessimistic view about original sin, grace, and predestination: the ultimate fates of humans, he decided, are predetermined by God in the sense that some people are granted divine grace to enter heaven and others are not, and human actions and choices cannot explain the fates of individuals. This view was influential throughout the Middle Ages and became even more important during the Reformation of the 16th century when it inspired the doctrine of predestination put forth by Protestant theologian John Calvin.
