January 20, 2010

James's theory of truth reflects his teleological conception of cognition: a true belief is one that is compatible with our existing system of beliefs and that leads us to satisfactory interaction with the world.


Even so, to believe a proposition is to hold it to be true, and the philosophical problem is to characterize the state of believing. Is it, for example, a simple disposition to behaviour? Or a more complex state that resists identification with any such disposition? Are verbal skills essential to belief, and if so, what is to be said about prelinguistic infants or non-linguistic animals? An evolutionary approach asks how the cognitive success of possessing the capacity to believe things relates to success in practice. Further topics include discovering whether belief differs from other varieties of assent, such as acceptance; discovering whether belief is an all-or-nothing matter, or to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills.

Peirce's famous pragmatist principle is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid: if we believe this, we expect that it would turn litmus paper red, that is, we expect an action of ours to have certain experimental results. The pragmatist principle holds that a list of the conditional expectations of this kind that we associate with applications of a concept provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: clarification by the pragmatist principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing. Peirce was the founding figure of American pragmatism, and perhaps its best expression is found in his essay 'How to Make Our Ideas Clear' (1878), in which he proposes the famous dictum: 'The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by the truth, and the object represented in this opinion is the real.' He also made pioneering investigations into the logic of relations and of the truth-functions, and independently discovered the quantifier slightly later than Frege. His work on probability and induction includes versions of the frequency theory of probability and the first suggestion of a vindication of the process of induction. Surprisingly, Peirce's scientific outlook and opposition to rationalism coexisted with admiration for Duns Scotus (1266-1308), a Franciscan philosopher and theologian who locates freedom in our ability to turn from desire toward justice. Scotus's characteristic distinctions have been admired by thinkers as different as Peirce and Heidegger; he was dubbed the 'doctor subtilis', and the word 'dunce' (derived from 'Dunsman') reflects the low esteem into which scholasticism later fell among humanists and reformers.

Most important is the pragmatist principle itself. C.S. Peirce, the founder of American pragmatism, was concerned with the nature of language and how it relates to thought. From what account of reality did he develop his theory of semiotics as a method of philosophy? How exactly does language relate to thought? Can there be complex, conceptual thought without language? These issues operate on our thinking, and attempts to draw out their implications bear on questions about meaning, ontology, truth and knowledge, though different thinkers have quite different views of what those implications are. The topic of linguistic meaning was grounded in these earlier insistences of twentieth-century philosophy and in the developments that followed them, and the resulting bewildering heterogeneity is the situation of philosophy in the early twenty-first century. The very nature of philosophy is itself radically disputed: analytic, continental, postmodern, critical, feminist and non-Western are all prefixes that give a different meaning when joined to 'philosophy'. The variety of thriving schools, the number of professional philosophers, the proliferation of publications, and the developments of technology in aiding research all manifest a radically different situation from that of one hundred years ago. Sharing some common sources with C.I. Lewis, the German-born philosopher Rudolf Carnap (1891-1970) articulated a doctrine of linguistic frameworks that was radically relativistic in its implications. Carnap was influenced by the Kantian idea of the constitution of knowledge: that our knowledge is in some sense the end result of a cognitive process. He also shared Lewis's pragmatism and valued the practical application of knowledge. As an empiricist, however, he was heavily influenced by the development of modern science, regarding scientific knowledge as the paradigm of knowledge, and he was motivated by a desire to be rid of pseudo-knowledge such as traditional metaphysics and theology. These influences remained constant as his work moved through various distinct stages and as he moved to live in America. In 1950 he published a paper entitled 'Empiricism, Semantics and Ontology', in which he articulated his views about linguistic frameworks.

A true belief, again, is one fated to be agreed upon by all who investigate: if I believe that it is really the case that p, then I expect that anyone who inquired into the matter to a satisfactory measure would arrive at the belief that p. It is not part of the theory that the experimental consequences of our actions should be specified in a narrowly empiricist vocabulary - Peirce insisted that perceptual judgements are abounding in latency. Nor is it his view that the collected conditionals that clarify a concept are all analytic. In addition, in later writings he argues that the pragmatist principle could only be made plausible to someone who accepted metaphysical realism: it requires that 'would-bes' are objective and, of course, real.

If realism itself can be given a fairly quick clarification, it is more difficult to chart the various forms of opposition to it. Some opponents deny that the entities posited by the relevant discourse exist; others grant that they exist, but deny that they exist independently. The standard example of the latter is idealism, which holds that reality is somehow mind-correlative or mind-co-ordinated - that the real objects comprising the external world are not independent of cognizing minds, but exist only as in some way correlative to mental operations. The doctrine of idealism centres on the conception that reality as we understand it is meaningful and reflects the workings of mind; and it construes this as meaning that the inquiring mind itself makes a formative contribution, not merely to our understanding of the nature of the real, but even to the character we attribute to it.

The term 'real' is most straightforwardly used when qualifying another term: a real x may be contrasted with a fake x, a failed x, a near x, and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to its existence by some doctrine or theory that we accept. The central error in thinking of reality as the totality of existence is to think of the unreal as a separate domain of things, perhaps unfairly denied the benefits of existence.

Nothingness, the non-existence of all things, becomes puzzling as the product of a logical confusion: treating the term 'nothing' as itself a referring expression, a name of something that does not exist, instead of as a quantifier. The important point is that this treatment stops us thinking of 'nothing' as a kind of name. Formally, a quantifier binds a variable, turning an open sentence with n distinct free variables into one with n - 1 (an individual variable letter counts as one variable, although it may recur several times in a formula). (Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain.) The confusion leads the unsuspecting to think that a sentence such as 'Nothing is all around us' talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate 'is all around us' has application. The feelings that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothing are not properly the experience of anything, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. It has been said that the difference between existentialist and analytic philosophy on this point is that the former is afraid of Nothing, whereas the latter thinks that there is nothing to be afraid of.
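The quantificational point can be made explicit. The following is a schematic sketch in standard first-order notation (the notation is an editorial addition, not the text's own):

```latex
% 'Nothing is all around us' does not refer to a thing called Nothing;
% it denies that the predicate A ('is all around us') has application:
\neg \exists x\, A(x)
% equivalently, in universal form:
\forall x\, \neg A(x)
% Binding: a quantifier turns an open sentence in n free variables
% into one in n - 1. E.g. R(x, y) has two free variables, while
\exists x\, R(x, y)
% has one free variable (y), and
\forall y\, \exists x\, R(x, y)
% is a closed sentence with none.
```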

A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other problems arise over conceptualizing empty space and time.

The standard opposition is between those who affirm and those who deny the real existence of some kind of thing or some kind of fact. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (1925), and borrowed from the intuitionistic critique of classical mathematics, is that the unrestricted use of the principle of bivalence is the trademark of realism. However, this has to overcome counterexamples both ways: although Aquinas was a moral realist, he held that moral reality was not sufficiently structured to make every moral claim true or false; and Kant believed that he could use the law of bivalence quite happily in mathematics, precisely because mathematics deals only with our own constructions. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things - surrounding objects really exist and are independent of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the orthodox opposition to realism has come from philosophers such as Goodman, who are impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.
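Dummett's criterion can be stated schematically. The sketch below (standard logical notation, an editorial addition) separates the semantic principle of bivalence from its proof-theoretic counterpart, the law of excluded middle, whose unrestricted use the intuitionist likewise refuses:

```latex
% Bivalence (semantic principle): every statement p is determinately
% true or false, independently of our means of deciding which:
V(p) = \mathbf{T} \quad \text{or} \quad V(p) = \mathbf{F}
% Excluded middle (logical law endorsed by the classical logician):
\vdash p \lor \neg p
% The intuitionist withholds the unrestricted law: for an undecided
% mathematical statement p, neither p nor \neg p need be assertible,
% and on Dummett's suggestion this refusal is the mark of anti-realism
% about the discourse in question.
```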

The modern treatment of existence in the theory of quantification is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is itself an operator on a predicate, indicating that the property it expresses has instances. Existence is therefore treated as a second-order property, or a property of properties. In this it is like number, for when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem is nevertheless created by sentences like 'This exists', where some particular thing is indicated: such a sentence seems to express a contingent truth (for this might not have existed), yet no other predicate is involved. 'This exists' is therefore unlike 'Tame tigers exist', where a property is said to have an instance, for the word 'this' does not pick out a property, but only an individual.
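Frege's dictum can be illustrated in modern quantificational notation (an illustrative sketch, not Frege's own symbolism):

```latex
% 'Tame tigers exist': the concept 'tame tiger' has instances.
\exists x\,(\mathrm{Tiger}(x) \wedge \mathrm{Tame}(x))
% Affirmation of existence as denial of the number nought:
% 'Fs exist' says that the number of Fs is not zero.
\exists x\, F(x) \;\leftrightarrow\; \neg\,(\text{the number of } F\text{s} = 0)
% Existence here is second-order: a property of the concept F,
% not a first-order predicate E(a) attached to individuals.
```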

Possible worlds seem plausibly able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution of exemplification of properties.

The philosophical puzzle over the unreal, as belonging to the domain of Being, suggests that there is little that can be said within the philosopher's study, and it is not apparent that there can be such a subject as being by itself. Nevertheless, the concept has had a central place in philosophy from Parmenides to Heidegger. The essential question - why is there something and not nothing? - prompts logical reflection on what it is for a universal to have an instance, and a long history of attempts to explain contingent existence by reference to a necessary ground.

In the tradition since Plato, this ground becomes a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or God, but whose relation with the everyday world remains shrouded. The celebrated argument for the existence of God was first propounded by Anselm in his Proslogion. The argument defines God as 'something than which nothing greater can be conceived'. God then exists in our understanding, since we understand this concept. However, if He existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we can conceive of something greater than that than which nothing greater can be conceived, which is contradictory. Therefore, God cannot exist only in the understanding, but exists in reality.
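Anselm's reductio can be laid out step by step. The following schematic reconstruction, with G abbreviating 'that than which nothing greater can be conceived', is an editorial summary rather than Anselm's own formulation:

```latex
\begin{enumerate}
  \item $G$ exists in the understanding (for we understand the concept).
  \item Suppose, for reductio, that $G$ exists in the understanding alone.
  \item A being that exists in reality is greater than one that exists
        in the understanding alone.
  \item Then something greater than $G$ can be conceived, namely $G$
        existing in reality, contradicting the definition of $G$.
  \item Therefore $G$ exists in reality.
\end{enumerate}
```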

An influential argument (or family of arguments) for the existence of God finds its premise in the claim that all natural things are dependent for their existence on something else. The totality of dependent things thereby demands a non-dependent, or necessarily existent, being, which is God. Like the argument to design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.

Its main problem, nonetheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely arises again. So if God is to end the regress of questions, He must stand apart: He must exist of necessity, and must not be an entity of which the same kinds of questions can be raised. The other problem with the argument is that of attributing concern and care to the deity, that is, of connecting the necessarily existent being it derives with human values and aspirations.

The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the unconverted, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving not that because our idea of God is that of id quo maius cogitari nequit, therefore God exists, but that because this is our idea of God, we stand committed to belief in His existence. His existence is a metaphysical point, or absolute presupposition, of certain forms of thought.

In the 20th century, modal versions of the ontological argument were propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version defines something as unsurpassably great if it exists and is perfect in every possible world. It is then conceded that it is at least possible that an unsurpassably great being exists; this means that there is a possible world in which such a being exists. However, if it exists in one world, it exists in all (for the fact that such a being exists in one world entails that it exists and is perfect in every world), so it exists necessarily. The correct response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is much more dangerous than it looks, since in the modal logic involved, from 'possibly necessarily p' we may derive 'necessarily p'. A symmetrical proof starting from the assumption that it is possible that such a being does not exist would derive that it is impossible that it exists.
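The modal step the passage warns about can be displayed explicitly. In the modal logic usually assumed for these arguments (S5), 'possibly necessarily p' yields 'necessarily p'; the following sketch is an editorial addition, not drawn from the text:

```latex
% Let p = 'an unsurpassably great being exists', where greatness is
% taken to entail necessity: p \rightarrow \Box p (as a necessary truth).
%
% The concession:           \Diamond p, \text{ and hence } \Diamond \Box p
% Characteristic S5 step:   \Diamond \Box p \rightarrow \Box p
% Conclusion:               \Box p, \text{ and so } p.
%
% Symmetrically, conceding \Diamond \neg p gives \neg \Box p in S5,
% and with p \rightarrow \Box p, contraposition yields \neg p.
```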

The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that, as a result of the omission, the same outcome occurs. Thus, suppose that I wish you dead. If I act to bring about your death, I am a murderer; but if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine, not a murderer. Critics reply that omissions can be as deliberate and immoral as actions: if I am responsible for your food and fail to feed you, my omission is surely a killing. Doing nothing can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and, depending on the context, may be a way of deceiving, betraying, or killing. Nonetheless, criminal law finds it convenient to distinguish discontinuing an intervention, which is permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that bears this general moral weight.

On Aquinas's account, then, it is in some sense possible for a new body to be reactivated: it is not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. For Aquinas, a person has no privileged self-understanding: we understand ourselves as we do everything else, by way of sense experience and abstraction, and knowing the principles of our own lives is an achievement, not a given. Difficulty at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth, since it is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable 'myth of the given'. The special way that we each have of knowing our own thoughts, intentions, and sensations has been denied by the many philosophers of behaviourist and functionalist tendencies who have found it important to argue that the way I know my own mind is much the same as the way I know yours, e.g., by seeing what I say when asked. Others, however, point out that reporting the results of introspection is a particular and legitimate kind of behavioural access that deserves notice in any account of human psychology.

The philosophy of history is reflection upon the nature of history, or of historical thinking. The term was used in the 18th century, e.g., by the French man of letters and philosopher Voltaire, to mean critical historical thinking, as opposed to the mere collection and repetition of stories about the past. In Hegelian usage, however, it came to mean universal or world history. The Enlightenment confidence in science, reason, and understanding gave history a progressive moral thread, and under the influence of the German philosopher Gottfried Herder (1744-1803), a spreader of Romanticism, and of Immanuel Kant, this idea was taken further: the philosophy of history becomes the detecting of a grand system, the unfolding of the evolution of human nature as witnessed in successive stages (the progress of rationality or of Spirit). This essentially speculative philosophy of history is given an extra Kantian twist in the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engines of historical change. The idea is readily intelligible once the world of nature and the world of thought become identified. The work of Herder, Kant, Fichte and Schelling is synthesized by Hegel: history has a plot, namely the moral development of man, which he equates with freedom within the state; this in turn is the development of thought, a logical development in which the various necessary moments in the life of the concept are successively achieved and improved upon. Hegel's method is at its most successful when the object is the history of ideas, where the evolution of thinking may march in step with logical oppositions and their resolution in successive systems of thought.

With the revolutionary communist Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95), there emerges a rather different kind of story, based upon Hegel's progressive structure but placing the achievement of the goal of history in a future in which the political conditions for freedom come to exist, and with economic and political forces rather than reason in the engine room. Although speculations about the direction of history continued to be written, by the late 19th century large-scale speculation of this kind was being replaced by concern with the nature of historical understanding, and in particular with a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic and historian Wilhelm Dilthey, it is important to show that the human sciences, such as history, are objective and legitimate, even though they are in some way different from the enquiry of the scientist. Since the subject-matter is the past thought and actions of human beings, what is needed is an ability to relive that past thought, knowing the deliberations of past agents as if they were the historian's own. The most influential British writer on this theme was the philosopher and historian R.G. Collingwood (1889-1943), whose The Idea of History (1946) contains an extensive defence of the Verstehen approach: understanding others is not gained by the tacit use of a theory enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation, knowing the deliberations of past agents as if they were one's own, and thereby understanding what they experienced and thought. The form of historical explanation, and the claim that general laws have no place, or at most a minor place, in the human sciences, is also prominent in this tradition.

A contrasting view, the 'theory-theory', holds that our everyday attribution of intentions, beliefs and meanings to other persons proceeds via tacit use of a theory that enables one to construct such interpretations as explanations of their doings. The view is commonly held along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending on which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, and as liable to be overturned by newer and better theories. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the non-existence of a medium in which this theory can be couched, since the child learns simultaneously the minds of others and the meaning of terms in its native language.

Verstehen, as aforementioned, is the German term for the understanding we have of human activities. In the Verstehen tradition these are understood from within, by means opposed to knowing something through objective observation, or through placing it in a network of scientific regularities. On this view, our understanding of others is not gained by the tacit use of a theory enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation in their shoes, or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed.

The exact difference is controversial: one approach holds that understanding is achieved by re-living in oneself, through a process of empathy, the mental life of the person to be understood. But other, less subjective suggestions are also found. The question of whether there is a method distinct from that of science to be used in human contexts, and so whether Verstehen is necessarily the method of the social as opposed to the natural sciences, is still open.

In the theory of knowledge, Aquinas holds the Aristotelian doctrine that knowing entails some similarity between the knower and what is known: a human being's corporeal nature therefore requires that knowledge start with sense perception. The same limitation does not apply to beings higher in the hierarchy of being, such as the angels.

In the domain of theology Aquinas deploys the distinction emphasized by Eriugena between knowing that God is and knowing what God is, and offers five arguments for the former: (1) motion is only explicable if there exists an unmoved first mover; (2) the chain of efficient causes demands a first cause; (3) the contingent character of existing things in the world demands a different order of existence, in other words something that has necessary existence; (4) the gradation of value in things in the world requires the existence of something that is most valuable, or perfect; and (5) the orderly character of events points to a final cause, or end, to which all things are directed, and the existence of this end demands a being that ordained it. All the arguments are physico-theological: standing between reason and faith, Aquinas lays out proofs of the existence of God.

He readily recognizes that there are doctrines, such as the Incarnation and the nature of the Trinity, known only through revelation, and whose acceptance is more a matter of moral will. God's essence is identified with His existence, as pure actuality. God is simple, containing no potentiality. We cannot, however, obtain knowledge of what God is (His quiddity), and must remain content with descriptions that apply to Him partly by way of analogy: God reveals Himself, but is not revealed.

An immediate problem in ethics is posed by the English philosopher Philippa Foot in her 'The Problem of Abortion and the Doctrine of the Double Effect' (1967). Suppose a runaway trolley is approaching a fork in the track, one branch of which is under repair. One person is working on one branch and five on the other, and the trolley will kill anyone working on the branch it enters. Clearly, to most minds, the driver should steer for the less populated branch. But now suppose that, left to itself, the trolley will enter the branch with the five workers, and that you, a bystander, can intervene, altering the points so that it veers onto the other. Is it right, or obligatory, or even permissible for you to do this, when your intervention makes you responsible for the death of the one person? After all, whom have you wronged if you leave the trolley to go its own way? The situation is a prototype of others in which utilitarian reasoning seems to lead to one course of action, while considerations of integrity or principle may deny it.

Describing events that merely happen does not of itself permit us to talk of rationality and intention, which are the categories we apply when we conceive of them as actions. We think of ourselves not only passively, as creatures to which things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the nature of the will and free will. Other problems in the theory of action include drawing the distinction between an action and its consequence, and describing the structure involved when we do one thing by doing another. Even locating and dating an action can be problematic: someone shoots someone on one day and in one place, and the victim dies on another day and in another place. Where and when did the murderous act take place?

Causation, furthermore, is not clearly a relation between events alone. Kant cites the example of a cannonball stationed at rest upon a cushion, and thereby causing the cushion to have the shape it does, to suggest that states of affairs, or objects, or facts may also be causally related. The central problem is to understand the element of necessitation, or determination of the future, that causation appears to involve, as the Scottish philosopher, historian, and essayist David Hume saw. This problem belongs to metaphysics, that part of philosophy which investigates the fundamental structure of the world and the fundamental kinds of things that exist; terms like object, fact, property, relation, and category are technical terms used to make sense of these most basic features of reality.

How then are we to conceive of causal connection? The relation seems not to be perceptible, for all that perception gives us (Hume argues) is knowledge of the patterns into which events actually fall, rather than any acquaintance with the connections determining the pattern. It is, however, clear that our conception of everyday objects is largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider and deeper dependable patterns, it seems incapable of bringing us any nearer to the 'must' of causal necessitation. Quite apart from the general problem of forming any conception of what causation is, there are particular puzzles: How are we to understand the causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or is it dispensable?

The problem of free will, nonetheless, is to reconcile our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event 'C', there will be some antecedent state of nature 'N' and a law of nature 'L', such that given 'L', 'N' will be followed by 'C'. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state 'N' and the laws. If determinism is universal, these states and laws are in turn fixed by still earlier events, tracing back to events for which I am clearly not responsible (events before my birth, for example). So no events can be voluntary or free, where that means that they come about purely because of my willing them when I could have done otherwise. If determinism is true, then there will be antecedent states and laws already determining such events: how then can I truly be said to be their author, or be responsible for them?
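The schematic definition of determinism just given can be set out in logical notation. This is a standard formalization, not one supplied by the text; the arrow must be read as nomological (law-governed) consequence, not as the bare material conditional:

```latex
\text{Determinism:}\qquad
\forall C \;\exists N \;\exists L \;\bigl[\, L \wedge N \Rightarrow C \,\bigr]
```

where 'C' ranges over events, 'N' over antecedent states of nature, and 'L' over laws of nature. The regress in the argument then consists in noting that each state 'N' is itself an event, and so falls under the same schema with a still earlier antecedent state.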

Reactions to this problem are commonly classified as follows. (1) Hard determinism accepts the conflict and denies that you have real freedom or responsibility. (2) Soft determinism, or compatibilism, asserts that everything you should want from a notion of freedom is quite compatible with determinism. In particular, even if your actions are caused, it can often be true of you that you could have done otherwise if you had chosen, and this may be enough to render you liable to be held responsible (the fact that previous events caused you to fix upon one among the alternatives as the one to be taken is, on this view, irrelevant). (3) Libertarianism holds that while compatibilism is only an evasion, there is a more substantive, real notion of freedom that can yet be preserved in the face of determinism (or of indeterminism). In Kant, while the empirical or phenomenal self is determined and not free, the noumenal or rational self is capable of rational, free action. But since the noumenal self exists outside the categories of space and time, this freedom seems to be of doubtful value. Other libertarian avenues include suggesting that the problem is badly framed, for instance because the definition of determinism breaks down, or suggesting that there are two independent but consistent ways of looking at an agent, the scientific and the humanistic, and that it is only through confusing them that the problem seems urgent. None of these avenues has gained general popularity, but it is an error to confuse determinism with fatalism.

The dilemma of determinism supposes, first, that if an action is the end of a causal chain stretching back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.

The dilemma then adds that if an action is not the end of such a chain, then some of its causes occur at random, in that no antecedent events brought them about, and in that case nobody is responsible for the action's ever occurring. So, whether or not determinism is true, responsibility is shown to be illusory.

Still, to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will, or akrasia, bad.

A volition is a mental act of willing or trying, whose presence is sometimes supposed to make the difference between intentional or voluntary action and mere behaviour. The view that there is no sharp division between philosophy and science, and that philosophical theories are continuous with scientific ones, is called naturalism; on this view scepticism is tackled using scientific means. Willard Quine (1908-2000), the most influential American philosopher of the latter half of the 20th century, holds that this is not question-begging, because the sceptical challenge itself arises from within scientific knowledge. For example, it is precisely because the sceptic has knowledge of visual distortion from optics that he can raise the problem of the possibility of deception. The sceptical question is not mistaken, according to Quine; it is rather that the sceptical rejection of knowledge is an overreaction: we can explain how perception operates, and can explain the phenomenon of deception as well. One response to this view is that Quine has changed the topic of epistemology. By citing scientific (psychological) evidence against the sceptic, Quine gives a descriptive account of the acquisition of knowledge, but ignores the normative question of whether such beliefs are justified or truth-conducive; he has therefore changed the subject. Quine replies by showing that normative issues can and do arise within this naturalized context. His conception, on which there is no genuine philosophy independent of scientific knowledge, and the different ways of resisting the sceptic's setting the agenda for epistemology, have been significant for the practice of contemporary epistemology.

Contemporary epistemology shares much of this agenda. Are there basic, non-inferentially justified beliefs, as the foundationalist claims, or is justification holistic and systematic, as the coherentist claims? There is also the internalist-externalist debate. Internalism holds that in order to know, one has to know that one knows: a man's judgement cannot be better than the information on which it is based, and the reasons in virtue of which a belief is justified must be accessible in principle to the subject holding that belief. A related thought, associated with pragmatism, is that what we believe may be determined not by the evidence alone, but by the utility of the resulting state of mind; on this ground one might uphold a belief in free will, or a belief in God, since such states of mind have beneficial effects on the believer. That doctrine, it should be said, caused outrage from the beginning. The compatibilist, as noted above, accepts that our actions are caused, but insists that it can still be true that you could have done otherwise if you had chosen, and that this may be enough to render you responsible, the fact that previous events caused you to choose as you did being irrelevant on this view. In Kant, while the empirical or phenomenal self is determined and not free, the noumenal self is free; other responses hold that the definition of determinism breaks down, postulate a special category of caused acts or volitions, or suggest that there are two independent but consistent ways of looking at an agent, the scientific and the humanistic, with only their confusion making the problem seem urgent. None of these avenues has gained general popularity, but it is an error to confuse determinism with fatalism.

Awareness of how knowledge develops makes salient the fact that there are always other ways of talking about the world: all our beliefs are in principle revisable, and none stands absolutely, since there are always alternative possible theories compatible with the same basic evidence. A further divide lies between those who think that knowledge can be naturalized and those who do not: between those who hold that the evaluative notions used in epistemology can be explained in terms of the kinds of concepts used in factual scientific discourse, and those who insist on a special normative realm of language that is theoretically different from them.

Foundationalist theories of justification argue that there are basic beliefs that are justified non-inferentially, both in ethics and in epistemology. A belief is justified if it stands up to some kind of critical reflection or scrutiny: a person holding it is then exempt from criticism on account of it. A popular line of thought in epistemology is that only a belief can justify another belief; the implication that neither experience nor the world plays a role in justifying beliefs leads quickly to coherentism.

When a belief is justified, what justifies it is usually another belief, or set of beliefs. But there cannot be an infinite regress of beliefs, the inferential chain cannot circle back on itself without vicious circularity, and it cannot stop in an unjustified belief. So not all beliefs can be inferentially justified. The foundationalist argues that there are special basic beliefs that are self-justifying in some sense or other - for example, primitive perceptual beliefs that require no further beliefs in order to be justified. Higher-level beliefs are inferentially justified by means of the basic beliefs. Thus foundationalism is characterized by two claims: (1) there exist non-inferentially justified, basic beliefs; and (2) higher-level beliefs are inferentially justified by relating them to basic beliefs.
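The structure of the regress argument can be made vivid with a small sketch. The belief names and the graph representation below are hypothetical illustrations, not anything supplied by the text; the point is only that a justification chain must terminate in a basic belief, without circling back on itself:

```python
# A toy model of the regress argument: each belief is either "basic"
# (non-inferentially justified) or cites other beliefs as its justifiers.
# Foundationalism requires every justification chain to terminate in a
# basic belief, with no circles and no infinite regress.

def is_justified(belief, justifiers, basic, _seen=None):
    """Return True iff every chain from `belief` ends in a basic belief."""
    if _seen is None:
        _seen = set()
    if belief in basic:
        return True                      # chain terminates: non-inferential
    if belief in _seen:
        return False                     # the chain circles back: vicious
    if not justifiers.get(belief):
        return False                     # chain stops in an unjustified belief
    _seen = _seen | {belief}
    return all(is_justified(b, justifiers, basic, _seen)
               for b in justifiers[belief])

# Hypothetical example beliefs:
basic = {"I seem to see a red patch"}
justifiers = {
    "There is a tomato here": ["I seem to see a red patch"],
    "Someone has been cooking": ["There is a tomato here"],
    "A circular belief": ["Another circular belief"],
    "Another circular belief": ["A circular belief"],
}

print(is_justified("Someone has been cooking", justifiers, basic))  # True
print(is_justified("A circular belief", justifiers, basic))         # False
```

The chain that reaches a basic perceptual belief counts as justified; the pair of beliefs that only justify each other does not, which is the foundationalist's complaint against circularity.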

A category is a fundamental classification of the kinds of entities recognized in a way of thinking. One problem is that the classical scheme of categories accords better with an atomistic philosophy than with modern physical thinking, which finds no categorical basis underlying notions like that of a charge, a field, or a probability wave, which fundamentally characterize things and are apparently themselves dispositional. In Kantian ethics the operative contrast is between hypothetical and categorical imperatives. A hypothetical imperative is an action-guiding use of language whose force is given only by some antecedent desire or project: 'If you want to look wise, stay quiet.' The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, it does not bind. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclination. It could be expressed as, for example, 'Tell the truth (regardless of whether you want to or not).' The distinction is not, however, always marked by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.

In the Grundlegung zur Metaphysik der Sitten (1785), Kant discussed several forms of the categorical imperative: (1) the formula of universal law: act only on that maxim through which you can at the same time will that it should become a universal law; (2) the formula of the law of nature: act as if the maxim of your action were to become through your will a universal law of nature; (3) the formula of the end-in-itself: act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end; and (4) the formula of autonomy: regard the will of every rational being as a will that makes universal law. Rationality, more generally, commends beliefs, actions, and processes as appropriate: in the case of beliefs this means likely to be true, or at least likely to be true from within the subject's point of view; cognitive processes are rational insofar as they provide likely means to an end, though whether the ends themselves can be assessed as rational is a harder question. The problem of free will, once more, is to reconcile our everyday consciousness of ourselves as agents with the best view of what science tells us that we are.

A central task in the study of Kant's ethics is to understand these expressions of the inescapable, binding requirements of the categorical imperative, and to understand whether they are equivalent at some deep level. Kant's own application of the notions is not always convincing. One cause of confusion is relating Kant's ethics to theories such as expressivism: for Kant, it is easy to see that the imperative cannot be the expression of a sentiment, but must derive from something unconditional or necessary, such as the voice of reason. The imperative is the standard mood of sentences used to issue requests and commands; because the need to issue commands is as basic as the need to communicate information, animal signalling systems may often be interpreted either way. Understanding the relationship between commands and other action-guiding uses of language, such as ethical discourse, remains a pressing task; the ethical theory of prescriptivism in fact equates the two functions. A further question is whether there is an imperative logic. 'Hump that bale' seems to follow from 'Tote that barge and hump that bale', just as 'It's raining' follows from 'It's windy and it's raining'. But it is harder to say how other forms behave: does 'Shut the window' follow from 'Shut the door or shut the window', for example? The usual way to develop an imperative logic is to work in terms of the possibility of satisfying one command without satisfying another, thereby turning it into a variant of ordinary deductive logic.
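The satisfaction-based approach can be sketched concretely. The encoding below is an assumed toy model (worlds as sets of performed actions; a command represented by the set of worlds satisfying it), not a worked-out imperative logic: one command entails another just when it cannot be satisfied without satisfying the other.

```python
# Sketch of imperative logic via satisfaction conditions: a "world" is the
# set of actions performed, a command is a predicate on worlds, and command
# A entails command B iff every world satisfying A also satisfies B.
from itertools import combinations

ACTIONS = ["tote_barge", "hump_bale", "shut_door", "shut_window"]

def worlds():
    """All possible combinations of performed actions."""
    return [frozenset(c) for r in range(len(ACTIONS) + 1)
            for c in combinations(ACTIONS, r)]

def sat(command):
    """The set of worlds in which a command is satisfied."""
    return {w for w in worlds() if command(w)}

def entails(a, b):
    """a entails b iff a cannot be satisfied without satisfying b."""
    return sat(a) <= sat(b)

tote_and_hump  = lambda w: "tote_barge" in w and "hump_bale" in w
hump           = lambda w: "hump_bale" in w
door_or_window = lambda w: "shut_door" in w or "shut_window" in w
window         = lambda w: "shut_window" in w

print(entails(tote_and_hump, hump))      # True: both done means bale humped
print(entails(door_or_window, window))   # False: shutting only the door suffices
```

The conjunction example goes through, as the text says; the disjunctive command does not entail either disjunct, since a world in which only the door is shut satisfies the disjunction without satisfying 'Shut the window'.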

A different notion of objectivity, moreover, requires the idea of inter-subjectivity, unlike the absolute conception of reality. Briefly, the problem is that the absolute conception of reality leaves itself open to massive sceptical challenge: if such a de-humanized picture of reality is the goal of enquiry, how could we ever reach it? Given the inescapability of human subjectivity, we seem driven to the melancholy conclusion that we will never really have knowledge of reality; to reject that sceptical conclusion, one would have to reject the conception of objectivity underlying it. Nonetheless, it was once thought that philosophy could help the pursuit of the absolute conception of reality by supplying epistemological foundations for it. After many failed attempts at this, other philosophers took on the more modest task of clarifying the meaning and methods of the primary investigators (the scientists), philosophy coming into its own when sorting out the more subjective aspects of the human realm: ethics, aesthetics, politics. Finally, what is distinctive of the investigation of the absolute conception is its disinterestedness, its cool objectivity, its demonstrable success in achieving results. It is pure theory - the acquisition of a true account of reality. While these results may be put to use in technology, the goal of enquiry is truth itself, with no utilitarian end in view. The human striving for knowledge gets its fullest realization in the scientific effort to flesh out this absolute conception of reality.

The pre-Kantian position, moreover, holds that there is still a point to doing ontology, and still an account to be given of the basic structures by which the world is revealed to us. Kant's anti-realism seems to derive from rejecting necessity in reality: the American philosopher Hilary Putnam (1926- ) endorses the view that necessity is relative to a description, so that there is necessity only relative to language, not to reality. But even if we accept this (and there are in fact good reasons not to), it still doesn't yield ontological relativism. It says only that the world is contingent - nothing yet about the relative nature of that contingent world.

Idealism comes in several varieties. They include subjective idealism, better called immaterialism, associated with the Irish idealist George Berkeley, according to which to exist is to be perceived; transcendental idealism; and absolute idealism. Idealism is opposed to naturalistic beliefs on which mind is not set apart from the rest of the universe, but is itself a product of natural processes.

The pre-Kantian position - that the world has a definite, fixed, absolute nature that is not constituted by thought - has traditionally been called realism. When challenged by new anti-realist philosophies, it became an important issue to try to fix exactly what was meant by terms such as realism, anti-realism, and idealism. For the metaphysical realist there is a calibrated joint between words and objects in reality: the metaphysical realist has to show that there is a single relation - the correct one - between concepts and mind-independent objects in reality. The American philosopher Hilary Putnam (1926- ) holds that only a magic theory of reference, with perhaps noetic rays connecting concepts and objects, could yield the unique connection required; instead, reference makes sense only in the context of our use of signs for certain purposes. Before Kant there had been philosophers called idealists - for example, in different kinds of neo-Platonic or Berkeleyan philosophy. In these systems there is a denial of material reality in favour of mind; but the kind of mind in question, usually the divine mind, guaranteed the absolute objectivity of reality. Kant's idealism differs from these earlier idealisms in blocking any such guarantee: the mind in question is the human mind, and what is unthinkable by us, or by any rational being, is not knowable at all. So Kant's version of idealism results in a form of metaphysical agnosticism. Later philosophers reject the Kantian view in a different way: they argue that it has changed the dialogue about the relation of mind to reality by undermining the assumption that mind and reality are two separate entities requiring linkage. The philosophy of mind seeks to answer such questions as: Is mind distinct from matter?
Can we define what it is to be conscious, and can we give principled reasons for deciding whether other creatures are conscious, or whether machines might be made so that they are conscious? What are thinking, feeling, experiencing, remembering? Is it useful to divide the functions of the mind up, separating memory from intelligence, or rationality from sentiment, or do mental functions form an integrated whole? The dominant philosophies of mind in the current western tradition include varieties of physicalism and functionalism. In the philosophy of mind, functionalism is the modern successor to behaviourism. Its early advocates were the American philosophers Hilary Putnam and Sellars, and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. Functionalism is often motivated by comparison with a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or the realization of the program the machine is running. The principal advantages of functionalism include its fit with the way we know of mental states, both in ourselves and in others, namely via their effects on behaviour and on other mental states. As with behaviourism, critics charge that structurally complicated items that do not bear mental states might nevertheless imitate the functions that are cited; according to this criticism, functionalism is too generous, and would count too many things as having minds.
It is also queried whether functionalism sees mental similarities only where there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be variably realized in causal architectures, just as they can be in different neurophysiological states.

On homuncular functionalism, an intelligent system, or mind, may fruitfully be thought of as the result of a number of sub-systems performing simpler tasks in coordination with each other. The sub-systems may be envisioned as homunculi, or small and relatively simple agents. The archetype is a digital computer, where a battery of switches capable of only one response (on or off) can make up a machine that can play chess, write dictionaries, and so on.
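The point about switches can be illustrated with a minimal sketch: a one-bit adder wired entirely from two-state NAND units. This is a standard textbook construction offered here only as a toy illustration of how units with a single on/off response compose into arithmetic:

```python
# Each "switch" has only two responses (0 or 1), yet wired together the
# switches compute: a one-bit full adder built from NAND gates alone.

def nand(a, b):
    """The elementary two-state switch: off only when both inputs are on."""
    return 0 if (a and b) else 1

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, carry_in):
    """Add three one-bit inputs, returning (sum_bit, carry_out)."""
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = or_(and_(a, b), and_(partial, carry_in))
    return total, carry_out

print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = 3, i.e. binary 11
```

Chaining such adders gives addition on numbers of any width; the homuncular picture generalizes the same idea, with each sub-system as simple-minded as a gate and intelligence emerging only from their coordination.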

Physicalism, moreover, is the view that the real world is nothing more than the physical world. The doctrine may, but need not, include the view that everything that can truly be said can be said in the language of physics. Physicalism is opposed to ontologies including abstract objects, such as possibilities, universals, or numbers, and to mental events and states, insofar as any of these are thought of as independent of physical things, events, and states. While the doctrine is widely adopted, the precise way of dealing with such difficult cases is not settled. Nor is it entirely clear how capacious a physical ontology can allow itself to be, for while physics does not talk in terms of many everyday objects and events, such as chairs, tables, money, or colours, it ought to be consistent with a physicalist ideology to allow that such things exist.

Some philosophers believe that the vagueness of what counts as physical, and of what counts as inclusion in a physical ontology, makes the doctrine vacuous. Others believe that it forms a substantive metaphysical position. One common way of framing the doctrine is in terms of supervenience: whilst it is allowed that there are legitimate descriptions of things that do not talk of them in physical terms, it is claimed that any such truths about them supervene upon the basic physical facts. However, supervenience has its own problems.

Mind and reality both emerge as issues to be addressed within this new, agnostic framework. There is no question of attempting to relate them to some antecedent way things are, or to some as yet untold story of what it is to be a human being.

The most common modern manifestation of idealism is the view called linguistic idealism: that we create the world we inhabit by employing mind-dependent linguistic and social categories. The difficulty is to give this view a literal form that does not conflict with the obvious fact that we do not create worlds, but find ourselves in one.

Of the leading polarities about which much epistemology, and especially the theory of ethics, tends to revolve, one is that between the subjective and the objective. The view that some commitments are subjective goes back at least to the Sophists, and the way in which opinion varies with subjective constitution, situation, perspective, and so on is a constant theme in Greek scepticism. The tension lies between the subjective source of judgement in an area and its objective appearance: the way such judgements make apparently independent claims, capable of being apprehended correctly or incorrectly. This tension is the driving force behind error theories and eliminativism. Attempts to reconcile the two aspects include moderate anthropocentrism and certain kinds of projectivism.

The standard opposition is between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. A realist about a subject-matter 'S' may hold: (1) that the kinds of things described by 'S' exist; (2) that their existence is independent of us, and not an artefact of our minds, our language, or our conceptual scheme; (3) that the statements we make in 'S' are not reducible to statements about some different subject-matter; (4) that the statements we make in 'S' have truth conditions, being straightforward descriptions of aspects of the world, made true or false by facts in the world; and (5) that we are able to attain truths about 'S', and that it is appropriate fully to believe things we claim in 'S'. Different oppositions focus on one or another of these claims. Eliminativists think the 'S' discourse should be rejected. Sceptics either deny (1) or deny our right to affirm it. Idealists and conceptualists deny (2); reductionists deny (3); instrumentalists and projectivists deny (4); constructive empiricists deny (5). Other combinations are possible, and in many areas there is little consensus on the exact way a realism/anti-realism dispute should be constructed.
One reaction is that realism attempts to look over its own shoulder: that is, it holds that as well as making or refraining from making statements in 'S', we can fruitfully mount a philosophical gloss on what we are doing as we make such statements. Philosophers of a verificationist tendency have been suspicious of the possibility of this kind of metaphysical theorizing; if they are right, the debate vanishes, and that it does so is the claim of minimalism. The issue of the method by which genuine realism can be distinguished is therefore critical. On one view, our best current theory is simply to be taken literally: there is no relativity of truth from theory to theory, but we take the evolving doctrine about the world as literally true. Like any theory that people actually hold, it says what there is. That is a logical point: everyone is a realist about what their own theory posits, for saying what really exists is precisely the point of the theory.

There have been a great number of different sceptical positions in the history of philosophy. Some sceptics of antiquity viewed the suspension of judgement at the heart of scepticism as an ethical position, a reasonable way of regarding things: it led to a lack of dogmatism and dissolved the kinds of debate that led to religious, political, and social oppression. Other philosophers have invoked hypothetical sceptics in their work to explore the nature of knowledge, while still others have advanced genuinely sceptical positions. Some are global sceptics, who hold that we have no knowledge whatsoever. Others are doubtful about specific things: whether there is an external world, whether there are other minds, whether we can have any moral knowledge, whether knowledge based on pure reasoning is viable. In response to such scepticism, one can accept the challenge set by the sceptical hypothesis and seek to answer it on its own terms, or else reject the legitimacy of that challenge. Thus some philosophers looked for beliefs that were immune from doubt as the foundations of our knowledge of the external world, while others tried to show that the demands made by the sceptic are in some sense mistaken and need not be taken seriously.

The American philosopher C.I. Lewis (1883-1946) was influenced both by Kant's division of knowledge into that which is given and that which processes the given, and by pragmatism's emphasis on the relation of thought to action. Fusing both these sources into a distinctive position, Lewis rejected the sharp dichotomies of both theory-practice and fact-value. He conceived of philosophy as the investigation of the categories by which we think about reality. He denied that experience comes to us already conceptualized; the way we think about reality is socially and historically shaped. Concepts, whose meanings are shaped by human beings, are a product of human interaction with the world. Theory is infected by practice, and facts are shaped by values. Concepts structure our experience and reflect our interests, attitudes, and needs. The distinctive role of philosophy is to investigate the criteria of classification and principles of interpretation we use in our multifarious interactions with the world. Specific issues come up for the individual sciences, reflection on which will be the philosophy of that science, but there are also issues common to all sciences and to non-scientific activities, reflection on which is the specific task of philosophy.

The framework idea in Lewis is that of the system of categories by which we mediate reality to ourselves: 'The problem of metaphysics is the problem of the categories'; 'experience doesn't categorize itself'; 'the categories are ways of dealing with what is given to the mind.' Such a framework can change across societies and historical periods: 'our categories are almost as much a social product as is language, and in something like the same sense.' Lewis, however, did not specifically thematize the question whether there could be alternative sets of such categories, though he did acknowledge the possibility.

Sharing some common sources with Lewis, the German philosopher Rudolf Carnap (1891-1970) articulated a doctrine of linguistic frameworks that was radically relativistic in its implications. Carnap had a deflationist view of philosophy: he believed that philosophy had no role in telling us truths about reality, but rather played its part in clarifying meanings for scientists. Some philosophers believed that this clarification project would itself lead to further philosophical investigations and to special philosophical truths about meaning, truth, necessity, and so on; Carnap rejected this view. Carnap's actual position is less liberal than it at first appears, since he was concerned to allow different systems of logic that might have different properties useful to scientists working on diverse problems. He does not envisage any deductive constraints on the construction of logical systems, but he does envisage practical constraints: we need to build systems that people find useful, and one that allowed wholesale contradiction would be spectacularly useless. There are other, more technical problems with this conventionalism.

Rudolf Carnap (1891-1970) interpreted philosophy as logical analysis. He was primarily concerned with the analysis of the language of science, because he judged the empirical statements of science to be the only factually meaningful ones. His early efforts in The Logical Structure of the World (1928; translated 1967) aimed to reduce all knowledge claims to the language of sense data. He later developed a preference for a language describing behaviour (physicalistic language), as in his work on the syntax of scientific language in The Logical Syntax of Language (1934; translated 1937). His various treatments of the verifiability, testability, or confirmability of empirical statements are testimony to his belief that the problems of philosophy are reducible to the problems of language.

Carnap's principle of tolerance, or the conventionality of language forms, emphasized freedom and variety in language construction. He was particularly interested in the construction of formal, logical systems. He also did significant work in the area of probability, distinguishing between statistical and logical probability in his Logical Foundations of Probability.

All the same, much of traditional epistemology has been occupied with the first of these approaches. Various types of belief were proposed as candidates for sceptic-proof knowledge: for example, beliefs immediately derived from perception were proposed by many as immune to doubt. What such proposals had in common was the claim that empirical knowledge begins with the data of the senses, that these are safe from sceptical challenge, and that a further superstructure of knowledge is to be built on this firm basis. The reason sense-data were held immune from doubt was that they were so primitive: unstructured and below the level of conceptualization. Once given structure and conceptualized, they were no longer safe from sceptical challenge. A different approach lay in seeking properties internal to beliefs that guaranteed their truth; any belief possessing such properties could be seen to be immune to doubt. Yet, when pressed, the details did not prove compelling: how to explain clarity and distinctness themselves, how beliefs with such properties can be used to justify other beliefs lacking them, and why clarity and distinctness should be taken as marks of certainty at all. These empiricist and rationalist strategies are examples of an approach that failed to achieve its objective.

However, the Austrian philosopher Ludwig Wittgenstein (1889-1951) developed a later approach to philosophy that involved a careful examination of the way we actually use language, closely observing differences of context and meaning. In the later parts of the Philosophical Investigations (1953), he dealt at length with topics in the philosophy of psychology, showing how talk of beliefs, desires, mental states and so on operates in a way quite different from talk of physical objects. In so doing he strove to show that philosophical puzzles arose from treating as similar linguistic practices that were, in fact, quite different. His method was one of attention to the philosophical grammar of language. In On Certainty (1969) this method was applied to epistemological topics, specifically the problem of scepticism.

He deals with the British philosopher Moore, whose attempts to answer the Cartesian sceptic he examines, holding that both the sceptic and his philosophical opponent are mistaken in fundamental ways. The most fundamental point Wittgenstein makes against the sceptic is that doubt about absolutely everything is incoherent: even to articulate a sceptical challenge, one has to know the meaning of what is said. 'If you are not certain of any fact, you cannot be certain of the meaning of your words either.' Doubt only makes sense in the context of things already known; the kind of doubt where everything is challenged is spurious. However, Moore is incorrect in thinking that such common-sense certainties can be claimed as knowledge: one cannot reasonably doubt such a statement, but neither does it make sense to say it is known. The concepts 'doubt' and 'knowledge' are related to each other: where one is eradicated, it makes no sense to claim the other. Wittgenstein's point is that doubting requires a context of other things taken for granted. It makes sense to doubt only against a background of knowledge, and it does not make sense to doubt for no good reason: 'Doesn't one need grounds for doubt?'

We ordinarily take a proposition to be certain when we have no doubt about its truth. We may do this in error or unreasonably, but objectively a proposition is certain when such absence of doubt is justifiable. The sceptical tradition in philosophy denies that objective certainty is often, or ever, possible, either for any proposition at all or for any proposition from some suspect family: ethics, theory, memory, empirical judgement, etc. A major sceptical weapon is the possibility of upsetting events that cast doubt back onto what were hitherto taken to be certainties. Others include reminders of the divergence of human opinion, and of the fallible sources of our confidence. Foundationalist approaches to knowledge look for a basis of certainty upon which the structure of our systems of belief is built; others reject this in favour of coherence without foundations.

The view that animals accomplish even complex tasks not by reason was common to Aristotle and the Stoics, and the inflexibility of animal behaviour was used in defence of this position as early as Avicenna. Continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. In this sense it may be instinctive in human beings to be social; and given what we now know about the evolution of human language abilities, it seems clear that our reasoning self is not imprisoned in our minds.

While science offered accounts of the laws of nature and the constituents of matter, and revealed the hidden mechanisms behind appearances, a split appeared in the kind of knowledge available to enquirers. On the one hand, there were the objective, reliable, well-grounded results of empirical enquiry into nature; on the other, the subjective, variable, and controversial results of enquiries into morals, society, religion, and so on. There was the realm of the world, which existed massively independent of us, and the human realm, which was varied and dependent on us. The philosophical conception that developed from this picture was of a split between a view of reality independent of human beings and one dependent on them.

What is more, a different notion of objectivity required the idea of inter-subjectivity. The absolute conception of reality, briefly stated, leaves itself open to massive sceptical challenge: if a de-humanized picture of reality is the goal of enquiry, how could we ever reach it? Faced with the inescapability of human subjectivity, we seem driven to the melancholy conclusion that we will never really have knowledge of reality; if one wanted to reject such a sceptical conclusion, a rejection of the conception of objectivity underlying it would be required. Nonetheless, it was thought that philosophy could help the pursuit of the absolute conception of reality by supplying epistemological foundations for it. After many failed attempts at this, other philosophers adopted the more modest task of clarifying the meaning and methods of the primary investigators (the scientists). Philosophy could then come into its own in sorting out the more subjective aspects of the human realm: ethics, aesthetics, politics. What is distinctive of the investigation of the absolute conception is its disinterestedness, its cool objectivity, its demonstrable success in achieving results. It is pure theory: the acquisition of a true account of reality. While these results may be put to use in technology, the goal of enquiry is truth itself, with no utilitarian end in view. The human striving for knowledge gets its fullest realization in the scientific effort to flesh out this absolute conception of reality.

The pre-Kantian position, to mention it last, holds that there is still a point to doing ontology, and still an account to be given of the basic structures by which the world is revealed to us. Kant's anti-realism seems to derive from rejecting necessity in reality: the American philosopher Hilary Putnam (1926- ) endorses the view that necessity is relative to a description, so that there is necessity only relative to language, not to reality. But even if we accept this (and there are in fact good reasons not to), it still does not yield ontological relativism. It says only that the world is contingent, and nothing yet about the relative nature of that contingent world.

Idealism, broadly, is the position that reality is in some significant way mental or mind-dependent. Its varieties include subjective idealism, or what is better called immaterialism, associated with the Irish idealist George Berkeley, for whom to exist is to be perceived; transcendental idealism; and absolute idealism. Idealism is opposed to the naturalistic belief that mind is not set apart from the rest of the universe but is inseparable from it, to be understood, if at all, as a product of natural processes.

The pre-Kantian position - that the world had a definite, fixed, absolute nature that was not constituted by thought - has traditionally been called realism. When challenged by new anti-realist philosophies, it became an important issue to try to fix exactly what was meant by all these terms: realism, anti-realism, idealism, and so on. For the metaphysical realist there is a calibrated joint between words and objects in reality: the metaphysical realist has to show that there is a single relation, the correct one, between concepts and mind-independent objects in reality. The American philosopher Hilary Putnam (1926- ) holds that only a magical theory of reference, with perhaps noetic rays connecting concepts and objects, could yield the unique connexion required. Instead, reference makes sense in the context of our use of signs for certain purposes. Before Kant there had been philosophies we now call idealist - for example, various kinds of neo-Platonism, or Berkeley's philosophy. In these systems there is a denial of material reality in favour of mind; however, the kind of mind in question, usually the divine mind, guaranteed the absolute objectivity of reality. Kant's idealism differs from these earlier idealisms in blocking any such guarantee. The mind in question for Kant is the human mind, and reality as it is in itself is unthinkable by us, or by any rational being. So Kant's version of idealism results in a form of metaphysical agnosticism. Later philosophers who reject the Kantian view argue, rather, that Kant changed the dialogue about the relation of mind to reality by undermining the assumption that mind and reality are two separate entities requiring linkage.
The philosophy of mind seeks to answer such questions as: is mind distinct from matter? Can we define what it is to be conscious, and can we give principled reasons for deciding whether other creatures are conscious, or whether machines might be made so that they are conscious? What is thinking, feeling, experiencing, remembering? Is it useful to divide the functions of the mind up, separating memory from intelligence, or rationality from sentiment, or do mental functions form an integrated whole? The dominant philosophies of mind in the western tradition include varieties of physicalism and functionalism. In the philosophy of mind, functionalism is the modern successor to behaviourism. Its early advocates were the American philosophers Hilary Putnam and Sellars, whose guiding principle is that we can define mental states by a triplet of relations: what typically causes them, the effects they have on other mental states, and the effects they have on behaviour. Functionalism is often motivated by comparison with a computer, since according to it mental descriptions correspond to descriptions of a machine in terms of software, remaining silent about the underlying hardware or realization of the program the machine is running. The principal advantage of functionalism is its fit with the way we know of mental states, both in ourselves and in others: via their effects on behaviour and on other mental states. As with behaviourism, critics charge that structurally complicated items that do not bear mental states might nevertheless imitate the functions that are cited; according to this criticism, functionalism is too generous and would count too many things as having minds.
It is also queried whether functionalism ties mental similarity too closely to causal similarity, since our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be variably realized in causal architectures, just as much as they can be in different neurophysiological states.

Homuncular functionalism holds that an intelligent system, or mind, may fruitfully be thought of as the result of a number of sub-systems performing simpler tasks in coordination with each other. The sub-systems may be envisioned as homunculi, or small and relatively unintelligent agents. The archetype is a digital computer, where a battery of switches capable of only one response (on or off) can make up a machine that can play chess, write dictionaries, etc.
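The switch analogy can be made concrete. The following sketch (a toy illustration constructed for this passage, not drawn from the original text) builds everything from a single two-state primitive, a NAND gate, and coordinates many such "homunculi", each incapable of anything but one on/off response, into a system with the genuinely more complex capability of adding integers:

```python
# Each "homunculus" is a trivially simple agent: a NAND gate mapping
# two on/off inputs to one on/off output. Nothing else is assumed.

def nand(a: bool, b: bool) -> bool:
    """The only primitive: a single switch-like agent."""
    return not (a and b)

def xor(a: bool, b: bool) -> bool:
    # Four NAND homunculi coordinated to compute exclusive-or.
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

def and_(a: bool, b: bool) -> bool:
    n = nand(a, b)
    return nand(n, n)

def or_(a: bool, b: bool) -> bool:
    return nand(nand(a, a), nand(b, b))

def full_adder(a: bool, b: bool, carry_in: bool):
    """Adds three bits; returns (sum_bit, carry_out)."""
    s1 = xor(a, b)
    return xor(s1, carry_in), or_(and_(a, b), and_(s1, carry_in))

def add(x: int, y: int, width: int = 8) -> int:
    # Chain `width` full adders to add two integers bit by bit.
    carry, result = False, 0
    for i in range(width):
        bit, carry = full_adder(bool(x >> i & 1), bool(y >> i & 1), carry)
        result |= bit << i
    return result

print(add(19, 23))  # prints 42
```

No individual gate "knows" arithmetic; the capability belongs only to the coordinated whole, which is the point the homuncular picture presses about minds.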

Physicalism is the view that the real world is nothing more than the physical world. The doctrine may, but need not, include the view that everything that can truly be said can be said in the language of physics. Physicalism is opposed to ontologies including abstract objects, such as possibilities, universals, or numbers, and to mental events and states, insofar as any of these are thought of as independent of physical things, events, and states. While the doctrine is widely adopted, the precise way of formulating it is disputed. Nor is it entirely clear how capacious a physical ontology can allow itself to be: for while physics does not talk in terms of many everyday objects and events, such as chairs, tables, money, or colours, it ought to be consistent with a physicalist ideology to allow that such things exist.

Some philosophers believe that the vagueness of what counts as physical, and of what things belong in a physical ontology, makes the doctrine vacuous. Others believe that it forms a substantive metaphysical position. One common way of framing the doctrine is in terms of supervenience: whilst it is allowed that there are legitimate descriptions of things that do not talk of them in physical terms, it is claimed that any such truths about them supervene upon the basic physical facts. However, supervenience has its own problems.

Mind and reality both emerge as issues to be addressed within these new agnostic considerations. There is no question of attempting to relate them to some antecedent way things are, or to some yet-untold story about what it is to be a human being.

The most common modern manifestation of idealism is the view called linguistic idealism: that we create the world we inhabit by employing mind-dependent linguistic and social categories. The difficulty is to give a literal form to this view that does not conflict with the obvious fact that we do not create worlds, but find ourselves in one.

Of the leading polarities about which, much epistemology, and especially the theory of ethics, tends to revolve, the immediate view that some commitments are subjective and go back at least to the Sophists, and the way in which opinion varies with subjective constitution, the situation, perspective, etc., that is a constant theme in Greek scepticism, the individualist between the subjective source of judgment in an area, and their objective appearance. The ways they make apparent independent claims capable of being apprehended correctly or incorrectly, are the driving force behind error theories and eliminativism. Attempts to reconcile the two aspects include moderate anthropocentrism, and certain kinds of projectivism.

The standard opposition between those how affirmatively maintain of vindication and those manifestations for which something of a requisite submission and disavow the real existence of some kind of thing or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: The external world, the past and future, other minds, mathematical objects, possibilities, universals and moral or aesthetic properties are examples. A realist about a subject-matter 'S' may hold (1) overmuch in excess that the overflow of the kinds of things described by S exist: (2) that their existence is independent of us, or not an artifact of our minds, or our language or conceptual scheme, (3) that the statements we make in S are not reducible to about some different subject-matter, (4) that the statements we make in S have truth conditions, being straightforward description of aspects of the world and made true or false by facts in the world, (5) that we are able to attain truth about 'S', and that it is appropriate fully to believe things we claim in 'S'. Different oppositions focus on one or another of these claims. Eliminativists think the 'S'; Discourse should be rejected. Sceptics either deny that of (1) or deny our right to affirm it. Idealists and conceptualists disallow of (2) reductionists objects of all from which that has become of denial (3) while instrumentalists and projectivists deny (4), Constructive empiricalists deny (5) Other combinations are possible, and in many areas there are little consensuses on the exact way a reality/antireality dispute should be constructed. 
One reaction is that realism attempts to look over its own shoulder, i.e., that it believes that as well as making or refraining from making statements in 'S', we can fruitfully mount a philosophical gloss on what we are doing as we make such statements, and philosophers of a verificationist tendency have been suspicious of the possibility of this kind of metaphysical theorizing, if they are right, the debate vanishes, and that it does so is the claim of minimalism. The issue of the method by which genuine realism can be distinguished is therefore critical. Even our best theory at the moment is taken literally. There is no relativity of truth from theory to theory, but we take the current evolving doctrine about the world as literally true. After all, with respect of its theory-theory - like any theory that we actually hold - is a theory that after all, there is. That is a logical point, in that, everyone is a realist about what their own theory posited, precisely for what remains accountable, that is the point of the theory, to say what there is a continuing inspiration for back-to-nature movements, is for that what really exists.

There have been a great number of different sceptical positions in the history of philosophy. Some as persisting from the distant past of their sceptic viewed the suspension of judgment at the heart of scepticism as a description of an ethical position as held of view or way of regarding something reasonably sound. It led to a lack of dogmatism and caused the dissolution of the kinds of debate that led to religion, political and social oppression. Other philosophers have invoked hypothetical sceptics in their work to explore the nature of knowledge. Other philosophers advanced genuinely sceptical positions. Here are some global sceptics who hold we have no knowledge whatsoever. Others are doubtful about specific things: Whether there is an external world, whether there are other minds, whether we can have any moral knowledge, whether knowledge based on pure reasoning is viable. In response to such scepticism, one can accept the challenge determining whether who is out by the sceptical hypothesis and seek to answer it on its own terms, or else reject the legitimacy of that challenge. Therefore some philosophers looked for beliefs that were immune from doubt as the foundations of our knowledge of the external world, while others tried to explain that the demands made by the sceptic are in some sense mistaken and need not be taken seriously. Anyhow, all are given for what is common.

The American philosopher C.I. Lewis (1883-1946) was influenced by both Kants division of knowledge into that which is given and which processes the given, and pragmatism emphasis on the relation of thought to action. Fusing both these sources into a distinctive position, Lewis rejected the shape dichotomies of both theory-practice and fact-value. He conceived of philosophy as the investigation of the categories by which we think about reality. He denied that experience conceptualized by categorized realities. That way we think about reality is socially and historically shaped. Concepts have meanings that signify the relevance to something reassembling a forming configuration that appears orderly and of a proper conditional state, by which human beings are a product of human interactions with the world. Theory is infected by practice and facts are shaped by values. Concept structures our experience and reflects our interests, attitudes and needs. The distinctive role for philosophy is to investigate the criteria of classification and principles of interpretation we use in our multifarious interactions with the world. Specific issues come up for individual sciences, which will be the philosophy of that science, but there are also common issues for all sciences and non-scientific activities, reflection on which issues is the specific task of philosophy.

The framework idea in Lewis is that of the system of categories by which we mediate reality to ourselves: 'The problem of metaphysics is the problem of the categories' and 'experience doesn't categorize itself' and 'the categories are ways of dealing with what is given to the mind.' Such a framework can change across societies and historical periods: 'our categories are almost as much a social product as is language, and in something like the same sense.' Lewis, however, didn't specifically thioamides the question that there could be alterative sets of such categories, but he did acknowledge the possibility.

Sharing some common sources with Lewis, the German philosopher Rudolf Carnap (1891-1970) articulated a doctrine of linguistic frameworks that was radically relativistic its implications. Carnap had a deflationist view of philosophy, that is, he believed that philosophy had no role in telling us truth about reality, but rather played its part in clarifying meanings for scientists. Now some philosophers believed that this clarification project was itself to lead into further philosophical investigations and special philosophical truth about meaning, truth, necessity and so on, however Carnap rejected this view. Now Carnaps actual position is fewer libertarians than it actually appears, since he was concerned to allow different systems of logic that might have different properties useful to scientists working on diverse problems. However, he doesn't envisage any deductive constraints on the construction of logical systems, but he does envisage practical constraints. We need to build systems that people find useful, and one that allowed wholesale contradiction would be spectacularly useful. There are other more technical problems with this conventionalism.

Rudolf Carnap (1891-1970), interpreted philosophy as a logical analysis, for which he was primarily concerned with the analysis of the language of science, because he judged the empirical statements of science to be the only factually meaningful ones, as his early efforts in The Logical Structure of the World (1928: Translations, 1967) for which his intention was to have as a controlling desire something that transcends ones present capacity for acquiring to endeavour in view of a purposive point. At which time, to reduce all knowledge claims into the language of sense data, whereby his developing preference for language described behaviour (physicalistic language), and just as his work on the syntax of scientific language in The Logical Syntax of Language (1934, translated 1937). His various treatments of the verifiability, testability, or confirmability of empirical statements are testimonies to his belief that the problems of philosophy are reducible to the problems of language.

Carnap's principle of tolerance, or the conventionality of language forms, emphasized freedom and variety in language construction. He was particularly interested in the construction of formal, logical systems. He also did significant work on probability, distinguishing between statistical and logical probability in his Logical Foundations of Probability.

All the same, traditional epistemology has largely been occupied with the first of these approaches. Various types of belief were proposed as candidates for sceptic-proof knowledge: for example, beliefs immediately derived from perception were proposed by many as immune to doubt. What such proposals had in common was the claim that empirical knowledge begins with the data of the senses, that this basis is safe from sceptical challenge, and that a further superstructure of knowledge is to be built on this firm foundation. Sense-data were held to be immune from doubt because they are so primitive: unstructured and below the level of conceptualization. Once given structure and conceptualized, they are no longer safe from sceptical challenge. A differing approach lay in seeking properties internal to beliefs that guaranteed their truth; any belief possessing such properties could be seen to be immune to doubt. Yet, when pressed, the details of how to explain clarity and distinctness themselves, of how beliefs with such properties can be used to justify other beliefs lacking them, and of why clarity and distinctness should be taken as marks of certainty at all, did not prove compelling. These empiricist and rationalist strategies are both examples of an approach that failed to achieve its objective.

The Austrian philosopher Ludwig Wittgenstein (1889-1951) took a later approach to philosophy that involved a careful examination of the way we actually use language, closely observing differences of context and meaning. In the later parts of the Philosophical Investigations (1953), he dealt at length with topics in philosophical psychology, showing how talk of beliefs, desires, mental states and so on operates in a way quite different to talk of physical objects. In so doing he strove to show that philosophical puzzles arose from treating as similar linguistic practices that were, in fact, quite different. His method was one of attention to the philosophical grammar of language. In On Certainty (1969) this method was applied to epistemological topics, specifically the problem of scepticism.

He deals with the British philosopher Moore's attempts to answer the Cartesian sceptic, holding that both the sceptic and his philosophical opponent are mistaken in fundamental ways. The most fundamental point Wittgenstein makes against the sceptic is that doubt about absolutely everything is incoherent: to even articulate a sceptical challenge, one has to know the meaning of what is said. 'If you are not certain of any fact, you cannot be certain of the meaning of your words either.' Doubt only makes sense in the context of things already known; the kind of doubt in which everything is challenged is spurious. However, Moore is incorrect in thinking that a statement such as 'I know I have two hands' can serve as an answer to the sceptic: one cannot reasonably doubt such a statement, but it does not make sense to say it is known either. The concepts 'doubt' and 'knowledge' are related to each other; where one is eradicated, it makes no sense to claim the other. Wittgenstein's point is that doubt requires a context of other things taken for granted. It makes sense to doubt given a context of knowledge; it does not make sense to doubt for no good reason.

We ordinarily take a proposition to be certain when we have no doubt about its truth. We may do this in error or unreasonably, but objectively a proposition is certain when such absence of doubt is justifiable. The sceptical tradition in philosophy denies that objective certainty is often, or ever, possible. A major sceptical weapon is the possibility of upsetting events that cast doubt back onto what were hitherto taken to be certainties. Others include reminders of the divergence of human opinion, and of the fallible sources of our confidence. Foundationalist approaches to knowledge look for a basis of certainty upon which the structure of our systems of belief is built. Others argue for the coherence of our beliefs, without foundations.

Nevertheless, scepticism is the view that we lack knowledge. It can be 'local': for example, the view could be that we lack all knowledge of the future because we do not know that the future will resemble the past, or we could be sceptical about the existence of 'other minds'. But there is another view - the absolute global view that we do not have any knowledge whatsoever.

It is doubtful that any philosopher seriously entertained absolute global scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about assenting to 'the evident'. The non-evident is any belief that requires evidence in order to be epistemically acceptable, i.e., acceptable because it is warranted. Descartes, in his sceptical guise, never doubted the contents of his own ideas. The issue for him was whether they 'corresponded' to anything beyond ideas.

But Pyrrhonist and Cartesian forms of virtually global scepticism have been held and defended. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, as opposed to the truth or belief condition, that provides the grist for the sceptic's mill. The Pyrrhonist will suggest that no non-evident, empirical proposition is sufficiently warranted because its denial will be equally warranted. A Cartesian sceptic will argue that no empirical proposition about anything other than one's own mind and its contents is sufficiently warranted because there are always legitimate grounds for doubting it. Thus, an essential difference between the two views concerns the stringency of the requirements for a belief's being sufficiently warranted to count as knowledge.

The Cartesian sceptic, more impressed with Descartes' argument for scepticism than with his own reply, holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions because there is no way to justifiably deny that our senses are being stimulated by some cause (an evil spirit, for example) which is radically different from the objects which we normally think affect our senses. Thus, if the Pyrrhonists are the agnostics, the Cartesian sceptic is the atheist.

Because the Pyrrhonist requires much less of a belief in order for it to be certified as knowledge than does the Cartesian, the arguments for Pyrrhonism are much more difficult to construct. A Pyrrhonist must show that there is no better set of reasons for believing any proposition than for denying it. A Cartesian can grant that, on balance, a proposition is more warranted than its denial; the Cartesian needs only to show that there remains some legitimate doubt about the truth of the proposition.

Thus, in assessing scepticism, the issues to consider are these: Are there better reasons for believing a non-evident proposition than for believing its negation? Does knowledge, at least in some of its forms, require certainty? And, if so, is any non-evident proposition certain?

The most fundamental point Wittgenstein makes against the sceptic is that doubt about absolutely everything is incoherent. To even articulate a sceptical challenge, one has to know the meaning of what is said: if you are not certain of any fact, you cannot be certain of the meaning of your words either. Doubt only makes sense in the context of things already known. However, the British philosopher G. E. Moore (1873-1958) is incorrect in thinking that a statement such as 'I know I have two hands' can serve as an argument against the sceptic. The concepts 'doubt' and 'knowledge' are related to each other; where one is eradicated, it makes no sense to claim the other. But why couldn't we find some reasonable doubt about the existence of one's limbs? There are possible scenarios, such as cases of amputation and phantom limbs, where it makes sense to doubt. Wittgenstein's point, however, is that a context of other things taken for granted is required. It makes sense to doubt given the context of knowledge about amputation and phantom limbs; it doesn't make sense to doubt for no good reason: doesn't one need grounds for doubt?

For those who find value in Wittgenstein's thought but reject his quietism about philosophy, his rejection of philosophical scepticism is a useful prologue to more systematic work. Wittgenstein's approach in On Certainty speaks of standards of correctness varying from context to context. Just as Wittgenstein resisted the view that there is a single transcendental language game that governs all others, so some systematic philosophers after Wittgenstein have argued for a multiplicity of standards of correctness, and not a single overall dominant one.

Cartesianism is the name given to the philosophical movement inaugurated by René Descartes (after 'Cartesius', the Latin version of his name). The main features of Cartesianism are: (1) the use of methodical doubt as a tool for testing beliefs and reaching certainty; (2) a metaphysical system which starts from the subject's indubitable awareness of his own existence; (3) a theory of 'clear and distinct ideas' based on the innate concepts and propositions implanted in the soul by God (these include the ideas of mathematics, which Descartes takes to be the fundamental building blocks of science); (4) the theory now known as 'dualism' - that there are two fundamentally incompatible kinds of substance in the universe, mind (or thinking substance) and matter (or extended substance). A corollary of this last theory is that human beings are radically heterogeneous beings, composed of an unextended, immaterial consciousness united to a piece of purely physical machinery - the body.
Another key element in Cartesian dualism is the claim that the mind has perfect and transparent awareness of its own nature or essence.

As with all other mental states and events with content, it is important to distinguish between the properties which an experience represents and the properties which it possesses. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. Like every other experience, a visual experience of a pink square is a mental event, and it is therefore not itself pink or square, even though it represents those properties. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property which it possesses, and it may even do so in virtue of possessing that property, as when a rapidly changing (complex) experience represents something as changing rapidly; but this is the exception and not the rule.

Which properties can be (directly) represented in sense experience is a matter of dispute. Traditionalists include only properties whose presence could not be doubted by a subject having the appropriate experiences, e.g., colours and shapes in the case of visual experience, hardness, etc., in the case of tactile experience. This view is natural to anyone who has an egocentric, Cartesian perspective in epistemology, and who wishes the pure data of experience to serve as logically certain foundations for knowledge. On this view the immediate objects of perceptual awareness, such as colour patches and shapes, are usually supposed distinct from the surfaces of physical objects. Qualities of sense-data are supposed to be distinct from physical qualities because their perception is more relative to conditions, more certain, and more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual fields when conditions of perception change, while physical objects remain constant.

All the same, critics of the notion question whether, just because physical objects can appear other than they are, there must be private, mental objects that actually have all the qualities physical objects seem to have. There are also problems regarding the individuation and duration of sense-data and their relations to the physical surfaces of objects we perceive. Contemporary proponents counter that speaking only of how things appear cannot capture the full structure within perceptual experience that is captured by talk of apparent objects and their qualities.

These problems can be avoided by treating objects of experience as properties. This, however, fails to do justice to the appearances, for experience seems to present us not with bare properties, but with properties embodied in individuals. The view that treats objects of experience as Meinongian objects accommodates this point. It is also attractive insofar as (1) it allows experience to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience and objects of perception in the case of experiences which constitute perceptual representations.

According to the 'act-object' analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness. This is meant to apply not only to perceptions, which have material objects, but also to experiences which do not. Such experiences nonetheless appear to represent something, and their objects are supposed to be whatever it is that they represent. 'Act-object' theorists may differ on the nature of objects of experience, which have been treated as properties, Meinongian objects (which may not exist or have any form of being), and, more commonly, private mental entities with sensory qualities. (The term 'sense-data' is now usually applied to the latter, but has also been used as a general term for objects of sense experience, as in the work of G. E. Moore.) 'Act-object' theorists may also differ on the relationship between objects of experience and objects of perception. In terms of representative realism, objects of perception (of which we are 'indirectly aware') are always distinct from objects of experience (of which we are 'directly aware'). Meinongians, however, may simply treat objects of perception as existing objects of experience.

Nevertheless, in accord with the 'act-object' analysis of experience (a special case of the act/object analysis of consciousness), every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one phenomenological and the other semantic.

According to the phenomenological argument, even if nothing beyond the experience answers to it, we seem to be presented with something through the experience (which is itself diaphanous). The object of the experience is whatever is so presented to us - be it an individual thing, an event, or a state of affairs.

The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, in particular: (1) simple attributions of experience, e.g., 'Rod is experiencing a pink square', seem to be relational; (2) we appear to refer to objects of experience and to attribute properties to them, e.g., 'The after-image which John experienced was green'; (3) we appear to quantify over objects of experience, e.g., 'Macbeth saw something which his wife did not'.

The 'act-object' analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are sense-data - private mental entities which actually possess the traditional sensory qualities represented by the experiences of which they are the objects. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property, e.g., redness, without representing it as having any subordinate determinate property, e.g., any specific shade of red, a sense-datum may actually have a determinable property without having any determinate property subordinate to it. Even more disturbing, sense-data may have contradictory properties, since experiences can have contradictory contents. A case in point is the waterfall illusion: if you stare at a waterfall for a minute and then immediately fixate your vision on a nearby rock, you are likely to have an experience of the rock's moving upwards while it remains in exactly the same place. The sense-data theorist must either deny that there are such experiences or admit contradictory objects.

A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing, as opposed to having exactly similar experiences, appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But on the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-data theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)

In view of the foregoing, the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy to grant that any experience appears to present us with an object without accepting that it actually does. The semantic argument is more impressive, but it is nonetheless answerable. The seeming relational structure of attributions of experience is a challenge dealt with in connection with the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to experiences themselves and quantification over experiences tacitly typed according to content. (Thus, 'the after-image which John experienced was green' becomes 'John's after-image experience was an experience of green', and 'Macbeth saw something which his wife did not see' becomes 'Macbeth had a visual experience which his wife did not have'.)

One of the leading polarities about which much epistemology, and especially the theory of ethics, tends to revolve is that between 'objectivism' and 'subjectivism'.

Most Western philosophers have been content with a dualism between, on the one hand, the subject of experience, and on the other, the world and its objects. This dualism, however, contains a trap, since it can easily seem impossible to give any coherent account of the relation between the two. This has been a permanent motivation towards either idealism, which brings objects back into the mind of the subject, or some kind of materialism, which sees the subject as little more than one object among others. Other options include 'neutral monism'.

The view that some commitments are subjective goes back at least to the Stoics, and the way in which opinion varies with subjective constitution, situation, perspective, etc., is a constant theme in Greek scepticism. The misfit between the subjective sources of judgment in an area and their objective appearance - the way they make apparently independent claims capable of being apprehended correctly or incorrectly - is the driving force behind error theories and 'eliminativism'. Attempts to reconcile the two aspects include moderate 'anthropocentrism' and certain kinds of 'projectivism'.

The contrast between the subjective and the objective is made in both the epistemic and the ontological domains. In the former it is often identified with the distinction between the intrapersonal and the interpersonal, or with that between questions whose resolution depends on the psychology of the person in question and those not thus dependent, or, sometimes, with the distinction between the biased and the impartial. Thus, an objective question might be one answerable by a method usable by any competent investigator, while a subjective question would be answerable only from the questioner's point of view. In the ontological domain, the subjective-objective contrast is often between what is and what is not mind-dependent: secondary qualities, e.g., colours, have been thought subjective owing to their apparent variability with observation conditions. The truth of a proposition, for instance, apart from certain propositions about oneself, would be objective if it is independent of the perspective, especially the beliefs, of those judging it. Truth would be subjective if it lacks such independence, say because it is a construct from justified beliefs, e.g., those well confirmed by observation.

One notion of objectivity might be basic and the other derivative. If the epistemic notion is basic, then the criteria for objectivity in the ontological sense derive from considerations of justification: an objective question is one answerable by a procedure that yields (adequate) justification for one's answer, and mind-independence is a matter of amenability to such a method. If, on the other hand, the ontological notion is basic, the criteria for an interpersonal method and its objective use are a matter of its mind-independence and its tendency to lead to objective truth, say, its applying to external objects and yielding predictive success. Since the use of these criteria requires employing the methods which, on the epistemic conception, define objectivity - most notably scientific methods - while no such dependence obtains in the other direction, the epistemic notion is often taken as basic.

In epistemology, the subjective-objective contrast arises above all for the concept of justification and its relatives. Externalism, particularly reliabilism, construes justification objectivistically, since, for reliabilism, truth-conduciveness (non-subjectively conceived) is central to justified belief. Internalism may or may not construe justification subjectivistically, depending on whether the proposed epistemic standards are interpersonally grounded. There are also various kinds of subjectivity: justification may, e.g., be grounded in one's considered standards or simply in what one believes to be sound. On the former view, my justified beliefs accord with my considered standards whether or not I think them justified; on the latter, my thinking them justified makes it so.

The American philosopher Willard Van Orman Quine (1908-2000) differs from Wittgenstein in a number of ways. Traditional philosophy believed that it had a special task in providing foundations for other disciplines, specifically the natural sciences; Quine, by contrast, sees no sharp distinction between philosophical and scientific work, both belonging to a single web of theoretical beliefs. Some philosophers work at a more theoretical level, enquiring into language, knowledge and our general categories of reality; yet for Quine there are no special methods available to philosophy that are not available to scientists. He rejects introspective knowledge, and also conceptual analysis as the special preserve of philosophers: there are no special philosophical methods.

By citing scientific (psychological) evidence against the sceptic, Quine is engaging in a descriptive account of the acquisition of knowledge, but ignoring the normative question of whether such accounts are justified or truth-conducive. On this objection, he has changed the subject. Quineans reply by showing that normative issues can and do arise in this naturalized context: tracing the connections between observation sentences and theoretical sentences, showing how the former support the latter, is a way of answering the normative question.

Both Wittgenstein and Quine have shown ways of responding to scepticism that do not take the sceptic's challenge at face value. Wittgenstein undermines the possibility of universal doubt, showing that doubt presupposes some kind of belief, while Quine holds that the sceptic's use of scientific information to raise the sceptical challenge allows the use of scientific information in response. However, both approaches require significant changes in the practice of philosophy. Wittgenstein's approach has led to a conception of philosophy as therapy. Quine's conception holds that there is no genuine philosophy independent of scientific knowledge.

How this relates to scepticism is that scepticism is tackled using scientific means. Quine holds that this is not question-begging, because the sceptical challenge itself arises using scientific knowledge. For example, it is precisely because the sceptic has knowledge of visual distortion from optics that he can raise the problem of the possibility of deception. The sceptical question is not mistaken, according to Quine; it is rather that the sceptical rejection of knowledge is an overreaction.

So, then, both Wittgenstein and Quine have shown ways of responding to scepticism that do not take the sceptic's challenge at face value. Wittgenstein undermines the possibility of universal doubt, showing that doubt presupposes some kind of belief. Quine holds that the sceptic's use of scientific information to raise the sceptical challenge licenses the use of scientific information in response. However, both approaches require significant changes in the practice of philosophy. Wittgenstein's approach has led to a conception of philosophy as therapy. Beyond Wittgensteinian therapies, there are those who use Wittgenstein's insights as a means to further more systematic philosophical goals; likewise there are those who accept some of Quine's conclusions without wholeheartedly buying into his scientism. That they have shown different ways of resisting the sceptic's setting the agenda for epistemology has been significant for the practice of contemporary epistemology.

Post-positivistic philosophers who rejected traditional realist metaphysics needed to find some kind of argument, other than verificationism, to reject it. They found such arguments in the philosophy of language, particularly in accounts of reference. Rather than explaining how reality is structured independently of thought, the main idea is that the structures and identity conditions we attribute to reality derive from the language we use, and that such structures and identity conditions are not determined by reality itself but by decisions we make: they are revelatory of the world-as-related-to-by-us. The identity of the world is therefore relative, not absolute.

Common-sense realism holds that most of the entities we think exist in a common-sense fashion really do exist. Scientific realism holds that most of the entities postulated by science likewise exist, and that the existence in question is independent of any constitutive role we might have. The hypothesis of realism explains why our experience is the way it is: we experience the world thus-and-so because the world really is that way. It is the simplest and most efficient way of accounting for our experience of reality. Fundamentally, from an early age we come to believe that such objects as stones, trees, and cats exist. Further, we believe that these objects exist even when we do not perceive them, and that they do not depend for their existence on our opinions or on anything mental.

Our theories about the world are instruments we use for making predictions about observations. They provide a structure in which we interpret, understand, systematize and unify our relationship with the world, rooted in our observational linkage to it. How the world is understood emerges only in the context of these theories. Nonetheless, we treat such a theory as true: it is the best one we have. We have no external, superior vantage point outside theory from which to judge the situation. Unlike traditional metaphysics, which attempts to articulate the ultimate nature of reality independent of our theorizing, the American philosopher Willard Quine (1908-2000) takes on board the view that ontology is relative to theory, and specifically that reference is relative to the linguistic structures used to articulate it. The basic contention is that argument impinges on choice of theory: in bringing forward considerations about whether one way of construing reality is better than another, one is arguing about which theory one prefers.

In relation to the scientific, impersonal view of the world, the American philosopher Donald Davidson (1917-2003) describes himself as a realist. However, he differs from both the traditional scientific realist and from Quinean relativism in important ways. His acceptance of relativizing considerations moves him away from reductive scientific realism, but keeps him close to a sophisticated realism. His rejection of scientism distances him from Quine. While Quine can accept as possibilities various theoretically intricate ontologies, the English philosopher P. F. Strawson (1919-2006) wants to place shackles upon the range of possibilities available to us. The shackles come from the kind of beings we are, with the cognitive capacities we have; for Strawson the shackle is internal to reason. He is sufficiently Kantian to argue that the concepts we use and the connections between them are limited by the kinds of beings we are in relation to our environment. He is wary of affirming the role of the environment, understood as non-conceptualized, in fixing the application of our concepts, so he does not appeal to the world as readily as realists do; but neither does he accept the range of theoretical options for ontological relativism presented by Quine. There are constraints on our thought, but the constraints come from both mind and world. However, there is no easy, uncontested or non-theoretical account of what these constraints are and how they work.

Both Wittgenstein and Quine have shown ways of responding to scepticism that do not take the sceptic's challenge at face value. Wittgenstein undermines the possibility of universal doubt, showing that doubt presupposes some kind of belief, while Quine holds that the sceptic's use of scientific information to raise the sceptical challenge permits us the use of scientific information in response. Both approaches, however, require significant changes in the practice of philosophy. Quine's conception holds that there is no genuine philosophy independent of scientific knowledge, whereas Wittgenstein's approach has led to a conception of philosophy as a kind of therapy. Scepticism and relativism differ over whether alternative accounts of knowledge are legitimate: scepticism holds that the existence of alternatives blocks the possibility of knowledge, while relativism treats the alternatives as equally admissible. What kinds of alternatives are being presented, and how we are to answer such questions, are among the main issues of contemporary epistemology. The history of science indicates that the postulates of rationality, generalizability, and systematizability have been rather consistently vindicated. While we do not dismiss the prospect that theory and observation can be conditioned by extra-scientific cultural factors, this does not finally compromise the objectivity of scientific knowledge. Extra-scientific cultural influences are important aspects of the study of the history and evolution of scientific thought, but the progress of science is not, in this view, ultimately directed or governed by such considerations.

All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear fairly self-evident in logical and philosophical terms. Nor is it necessary to attribute any extra-scientific properties to the whole in order to understand and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. It is in this spirit that we distinguish between what can be proven in scientific terms and what can reasonably be inferred in philosophical terms based on the scientific evidence.

Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those who are immediately responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally have expertise on only one side of a two-culture divide. Perhaps more important, many of the potential threats to the human future - such as environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation - can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason: the implications of the amazing new fact of nature called non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent is not to suggest that what is most important about this background can be understood in its absence. Those who do not wish to struggle with the background material should feel free to pass over it, but the hope is that those who engage it will find a common ground for understanding, and that we will meet again on that common ground in an effort to close the circle and comprehend the universe in its unity.

The nature of moral motivation has been a major topic of philosophical inquiry, especially in Aristotle, and again since the 17th and 18th centuries, when the science of man began to probe into human motivation and emotion. For thinkers such as the French moralistes, or Hutcheson, Hume, Smith and Kant, a prime task was to delineate the variety of human reactions and motivations. Such an inquiry locates our varying propensities for moral thinking among other faculties, such as perception and reason, and among other tendencies, such as empathy, sympathy or self-interest. The task continues, especially in the light of a post-Darwinian understanding of ourselves.

Some moral systems, notably that of Immanuel Kant, hold that real moral worth comes only with acting rightly because it is right. If you do what is right, but from some other motive, such as fear or prudence, no moral merit accrues to you. Yet that in turn seems to discount other admirable motivations, such as acting from sheer benevolence or sympathy. The question is how to balance these opposing ideas, and how to understand acting from a sense of obligation without duty or rightness beginning to seem a kind of fetish. A contrasting view stands opposed to any ethics relying on highly general and abstract principles, particularly those associated with the Kantian categorical imperative. This view may go so far as to say that no consideration, taken on its own, counts for or against any particular way of life; moral reasoning can only proceed by identifying salient features of situations that weigh on one side or another.

Moral dilemmas are matters of intense philosophical concern. Situations in which each possible course of action breaches some otherwise binding moral principle are serious dilemmas, and make the stuff of many tragedies. The conflict can be described in different ways. One suggestion is that whichever action the subject undertakes, he or she does something wrong. Another is that this is not so, for the dilemma means that in the circumstances what he or she did was right, or as right as any alternative. It is important to the phenomenology of these cases that action leaves a residue of guilt and remorse, even though it can be shown that the dilemma was not the subject's fault; here the rationality of the emotions can be contested. Any morality with more than one fundamental principle seems capable of generating dilemmas; however, dilemmas exist, such as where a mother must decide which of two children to sacrifice, in which no principles are pitted against each other. If we accept that dilemmas are real and important, this fact can be used to argue against theories, such as utilitarianism, that recognize only one sovereign principle. Alternatively, regretting the existence of dilemmas and the unordered jumble of principles that creates them, a theorist may use their occurrence to argue for the desirability of locating and promoting a single sovereign principle.

Nevertheless, some theories of ethics see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason. Opposed to this, situational ethics and virtue ethics regard such laws as at best rules of thumb, which frequently disguise the great complexity of practical reasoning that Kantians place under the notion of the moral law.

The natural law view of the relationship between law and morality is especially associated with St. Thomas Aquinas (1225-74), whose synthesis of Aristotelian philosophy and Christian doctrine was eventually to provide the main philosophical underpinning of the Catholic Church. More broadly, any attempt to cement the moral and legal order together within the nature of the cosmos or the nature of human beings belongs to this tradition, in which sense it is found in some Protestant writings as well, and arguably derives from a Platonic view of ethics and the implicit advance of Stoicism. Natural law stands above and apart from the activities of human lawmakers. It constitutes an objective set of principles that can be seen to hold in and of themselves by natural light or by reason itself and that, in religious versions of the theory, express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements within the natural law tradition. Different views have been held about the relationship between the rule of law and God's will; Grotius, for instance, sided with the view that the content of natural law is independent of any will, including that of God.

Nonetheless, the subjectivity of our minds affects our perceptions of the world that natural science holds to be objective. One response is to regard both aspects, mind and matter, as individualized forms that belong to the same underlying reality.

Our everyday experience confirms the apparent fact that there is a dual-valued world of subjects and objects. We, as conscious, personal, experiencing beings, are the subjects, whereas everything for which we can come up with a name or designation seems to be an object, that which stands opposed to us as subjects. Physical objects are only part of the object-world; there are also mental objects, objects of our emotions, abstract objects, religious objects, and so on. Language objectifies our experience. Experience per se is purely sensational and makes no distinction between object and subject. Only verbalized thought reifies the sensations by conceptualizing them and pigeonholing them into the given entities of language.

Some thinkers maintain that subject and object are only different aspects of experience: I can experience myself as subject in the act of self-reflection. The fallacy of this argument is obvious: being a subject implies having an object. We cannot experience anything consciously without the mediation of understanding and mind; our experience is already conceptualized by the time it comes into our consciousness. This conceptualization is negative insofar as it destroys the original pure experience: in a dialectical process of synthesis, the original pure experience becomes an object for us. The common state of our mind is only capable of apperceiving objects; objects are reified negative experience. The same is true for the objective aspect of this theory: by objectifying myself I do not dispense with the subject, for the subject is causally and apodictically linked to the object. When I make an object of anything, I have to realize that it is the subject that objectifies it; only the subject can do that. Without the subject there are no objects, and without objects there is no subject. This interdependence, however, is not to be understood as a dualism in which object and subject are really independent substances. Since the object is only created by the activity of the subject, and the subject is not a physical entity but a mental one, we have to conclude that the subject-object dualism is purely mentalistic.

Linguistic philosophy is a 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and “Oxford philosophy.” The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language, or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.

A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.

By contrast, many philosophers associated with the movement have focussed on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.

Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato’s expression of ideas in the form of dialogues—the dialectical method, used most famously by his teacher Socrates—has led to difficulties in interpreting some of the finer points of his thoughts. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.

Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.

For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.

Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical view based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.

Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-Philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.

Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts—the propositions of science—are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.

Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism (see Positivism). Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivist, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).

German philosopher Rudolf Carnap attempted to introduce the methodology and precision of mathematics into the study of philosophy. This approach is now known as logical positivism or logical empiricism.

The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.

The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.

This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.

Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate “systematically misleading expressions” in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.

Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered. Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, is needed in addition to logic in analysing ordinary language.

Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.

The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.



EDMUND HUSSERL (1859-1938)





Edmund Husserl (1859-1938) was a German philosopher and the founder of phenomenology. This 20th-century philosophical movement is dedicated to the description of phenomena as they present themselves through perception to the conscious mind.

Husserl was born in Prossnitz, Moravia (now in the Czech Republic), on April 8, 1859. He studied science, philosophy, and mathematics at the universities of Leipzig, Berlin, and Vienna and wrote his doctoral thesis on the calculus of variations. He became interested in the psychological basis of mathematics and, shortly after becoming a lecturer in philosophy at the University of Halle, wrote his first book, Philosophie der Arithmetik (1891). At that time he maintained that the truths of mathematics have validity regardless of the way people come to discover and believe in them.

Husserl then argued against his early position, which he called psychologism, in Logical Investigations (1900-1901; trans. 1970). In this book, regarded as a radical departure in philosophy, he contended that the philosopher's task is to contemplate the essences of things, and that the essence of an object can be arrived at by systematically varying that object in the imagination. Husserl noted that consciousness is always directed toward something. He called this directedness intentionality and argued that consciousness contains ideal, unchanging structures called meanings, which determine what object the mind is directed toward at any given time.

During his tenure (1901-1916) at the University of Göttingen, Husserl attracted many students, who began to form a distinct phenomenological school, and he wrote his most influential work, Ideas: A General Introduction to Pure Phenomenology (1913; trans. 1931). In this book Husserl introduced the term phenomenological reduction for his method of reflection on the meanings the mind employs when it contemplates an object. Because this method concentrates on meanings that are in the mind, whether or not the object present to consciousness actually exists, Husserl said the method involved “bracketing existence,” that is, setting aside the question of the real existence of the contemplated object. He proceeded to give detailed analyses of the mental structures involved in perceiving particular types of objects, describing in detail, for instance, his perception of the apple tree in his garden. Thus, although phenomenology does not assume the existence of anything, it is nonetheless a descriptive discipline; according to Husserl, phenomenology is devoted, not to inventing theories, but rather to describing the “things themselves.”

After 1916 Husserl taught at the University of Freiburg. Phenomenology had been criticized as an essentially solipsistic method, confining the philosopher to the contemplation of private meanings, so in Cartesian Meditations (1931; trans. 1960), Husserl attempted to show how the individual consciousness can be directed toward other minds, society, and history. Husserl died in Freiburg on April 26, 1938.

Husserl's phenomenology had a great influence on a younger colleague at Freiburg, Martin Heidegger, who developed existential phenomenology, and on Jean-Paul Sartre and French existentialism (see Existentialism). Phenomenology remains one of the most vigorous tendencies in contemporary philosophy, and its impact has also been felt in theology, linguistics, psychology, and the social sciences.

The founder of phenomenology, German philosopher Edmund Husserl, introduced the term in his book Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie (1913; Ideas: A General Introduction to Pure Phenomenology, 1931). The early followers of Husserl, such as German philosopher Max Scheler, influenced by his previous book, Logische Untersuchungen (two volumes, 1900 and 1901; Logical Investigations, 1970), claimed that the task of phenomenology is to study essences, such as the essence of emotions. Although Husserl himself never gave up his early interest in essences, he later held that only the essences of certain special conscious structures are the proper object of phenomenology. As formulated by Husserl after 1910, phenomenology is the study of the structures of consciousness that enable consciousness to refer to objects outside itself. This study requires reflection on the content of the mind to the exclusion of everything else. Husserl called this type of reflection the phenomenological reduction. Because the mind can be directed toward nonexistent as well as real objects, Husserl noted that phenomenological reflection does not presuppose that anything exists, but rather amounts to a “bracketing of existence,” that is, setting aside the question of the real existence of the contemplated object.

The German philosopher Martin Heidegger greatly influenced the modern philosophy movements of phenomenology and existentialism. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of Being (Heidegger's term for that which underlies all existence).

Heidegger was born in Messkirch, Baden. He studied Roman Catholic theology and then philosophy at the University of Freiburg, where he was an assistant to Edmund Husserl, the founder of phenomenology. Heidegger began teaching at Freiburg in 1915. From 1923 to 1928 he taught at Marburg University. He then returned to Freiburg in 1928, inheriting Husserl's position as professor of philosophy. Because of his public support of Adolf Hitler and the Nazi Party in 1933 and 1934, Heidegger's professional activities were restricted in 1945, and controversy surrounded his university standing until his retirement in 1959.

German philosopher Martin Heidegger was instrumental in the development of the 20th-century philosophical school of existential phenomenology, which examines the relationship between phenomena and individual consciousness. His inquiries into the meaning of “authentic” or “inauthentic” existence greatly influenced a broad range of thinkers, including French existentialist Jean-Paul Sartre. Author Michael Inwood explores Heidegger’s key concept of Dasein, or “being,” which was first expounded in his major work Being and Time (1927).

Besides Husserl, Heidegger was especially influenced by the pre-Socratics (see Greek Philosophy; Philosophy), by Danish philosopher Søren Kierkegaard, and by German philosopher Friedrich Nietzsche. In developing his theories, Heidegger rejected traditional philosophic terminology in favour of an individual interpretation of the works of past thinkers. He applied original meanings and etymologies to individual words and expressions, and coined hundreds of new, complex words. In his most important and influential work, Sein und Zeit (Being and Time, 1927), Heidegger was concerned with what he considered the essential philosophical question: What is it, to be? This led to the question of what kind of “being” human beings have. They are, he said, thrown into a world that they have not made but that consists of potentially useful things, including cultural as well as natural objects. Because these objects come to humanity from the past and are used in the present for the sake of future goals, Heidegger posited a fundamental relation between the mode of being of objects, of humanity, and of the structure of time.

The individual is, however, always in danger of being submerged in the world of objects, everyday routine, and the conventional, shallow behaviour of the crowd. The feeling of dread (Angst) brings the individual to a confrontation with death and the ultimate meaninglessness of life, but only in this confrontation can an authentic sense of Being and of freedom be attained.

After 1930, Heidegger turned, in such works as Einführung in die Metaphysik (An Introduction to Metaphysics, 1953), to the interpretation of particular Western conceptions of being. He felt that, in contrast to the reverent ancient Greek conception of being, modern technological society has fostered a purely manipulative attitude that has deprived Being and human life of meaning—a condition he called nihilism. Humanity has forgotten its true vocation and must recover the deeper understanding of Being (achieved by the early Greeks and lost by subsequent philosophers) to be receptive to new understandings of Being.

Heidegger's original treatment of such themes as human finitude, death, nothingness, and authenticity led many observers to associate him with existentialism, and his work had a crucial influence on French existentialist Jean-Paul Sartre. Heidegger, however, eventually repudiated existentialist interpretations of his work. His thought directly influenced the work of French philosophers Michel Foucault and Jacques Derrida and of German sociologist Jürgen Habermas. Since the 1960s his influence has spread beyond continental Europe and has had an increasing impact on philosophy in English-speaking countries worldwide.



Existentialism was a philosophical movement or tendency, emphasizing individual existence, freedom, and choice, that influenced many diverse writers in the 19th and 20th centuries. Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.

Most philosophers since Plato have held that the highest ethical good is the same for everyone; insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existential, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, “I must find a truth that is true for me . . . the idea for which I can live or die.” Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.

All existentialists have followed Kierkegaard in stressing the importance of passionate individual action in deciding questions of both morality and truth. They have insisted, accordingly, that personal experience and acting on one's own convictions are essential in arriving at the truth. Thus, the understanding of a situation by someone involved in that situation is superior to that of a detached, objective observer. This emphasis on the perspective of the individual agent has also made existentialists suspicious of systematic reasoning. Kierkegaard, Nietzsche, and other existentialist writers have been deliberately unsystematic in the exposition of their philosophies, preferring to express themselves in aphorisms, dialogues, parables, and other literary forms. Despite their antirationalist position, however, most existentialists cannot be said to be irrationalists in the sense of denying all validity to rational thought. They have held that rational clarity is desirable wherever possible, but that the most important questions in life are not accessible to reason or science. Furthermore, they have argued that even science is not as rational as is commonly supposed. Nietzsche, for instance, asserted that the scientific assumption of an orderly universe is for the most part a useful fiction.

Perhaps the most prominent theme in existentialist writing is that of choice. Humanity's primary distinction, in the view of most existentialists, is the freedom to choose. Existentialists have held that human beings do not have a fixed nature, or essence, as other animals and plants do; each human being makes choices that create his or her own nature. In the formulation of the 20th-century French philosopher Jean-Paul Sartre, existence precedes essence. Choice is therefore central to human existence, and it is inescapable; even the refusal to choose is a choice. Freedom of choice entails commitment and responsibility. Because individuals are free to choose their own path, existentialists have argued, they must accept the risk and responsibility of following their commitment wherever it leads.

Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God's way of calling each individual to make a commitment to a personally valid way of life. The word anxiety (German Angst) has a similarly crucial role in the work of the 20th-century German philosopher Martin Heidegger; anxiety leads to the individual's confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual's recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.

Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries, but elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many premodern philosophers and writers.

The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: The human self, which combines mind and body, is itself a paradox and contradiction.

Nineteenth-century Danish philosopher Søren Kierkegaard played a major role in the development of existentialist thought. Kierkegaard criticized the popular systematic method of rational philosophy advocated by German philosopher Georg Wilhelm Friedrich Hegel. He emphasized the absurdity inherent in human life and questioned how any systematic philosophy could apply to the ambiguous human condition. In Kierkegaard’s deliberately unsystematic works, he explained that each individual should attempt an intense examination of his or her own existence.

Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual's response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual therefore must always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a “leap of faith” into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.

Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as German philosopher G. W. F. Hegel. Instead, Kierkegaard focussed on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; trans. 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham demonstrate his faith by sacrificing his son.

One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that “God is dead,” or that traditional morality was no longer relevant in people’s lives. In this passage, the sage Zarathustra came down from the mountain where he had spent the last ten years alone to preach to the people.

Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the “death of God” and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.

The modern philosophy movements of phenomenology and existentialism have been greatly influenced by the thought of German philosopher Martin Heidegger. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of Being (Heidegger's term for that which underlies all existence).

Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis—in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology (see Metaphysics) as well as on language.

Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre’s work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that “man is condemned to be free,” Sartre reminds us of the responsibility that accompanies human decisions.

Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a “futile passion.” Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.

Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially that a personal sense of authenticity and commitment is essential to religious faith.

Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels that probed the motivations and moral justifications for his characters’ actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky’s best work, interlaces religious exploration with the story of a family’s violent quarrels over a woman and a disputed inheritance.

Twentieth-century writer and philosopher Albert Camus examined what he considered the tragic inability of human beings to understand and transcend their intolerable conditions. In his work Camus presented an absurd and seemingly unreasonable world in which some people futilely struggle to find meaning and rationality while others simply refuse to care. For example, the main character of The Stranger (1942) kills a man on a beach for no reason and accepts his arrest and punishment with dispassion. In contrast, in The Plague (1947), Camus introduces characters who act with courage in the face of absurdity.

A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), “We must love life more than the meaning of it.”

The opening lines of Russian novelist Fyodor Dostoyevsky’s Notes from Underground (1864), “I am a sick man . . . I am a spiteful man,” are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground is a sign of Dostoyevsky’s rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader’s sense of morality as well as the foundations of rational thinking. In this excerpt from the beginning of the novel, the narrator describes himself, derisively referring to himself as an “overly conscious” intellectual.

In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; trans. 1937) and The Castle (1926; trans. 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theatre of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.

Jean-Paul Sartre (1905-1980) was a twentieth-century French intellectual who helped to develop existential philosophy through his writings, novels, and plays.

Sartre was born in Paris, June 21, 1905, and educated at the École Normale Supérieure in Paris, the University of Fribourg in Switzerland, and the French Institute in Berlin. He taught philosophy at various lycées from 1929 until the outbreak of World War II, when he was called into military service. In 1940-41 he was imprisoned by the Germans; after his release, he taught in Neuilly, France, and later in Paris, and was active in the French Resistance. The German authorities, unaware of his underground activities, permitted the production of his antiauthoritarian play The Flies (1943; trans. 1946) and the publication of his major philosophic work Being and Nothingness (1943; trans. 1953). Sartre gave up teaching in 1945 and founded the political and literary magazine Les Temps Modernes, of which he became editor in chief. Sartre was active after 1947 as an independent Socialist, critical of both the USSR and the United States in the so-called cold war years. Later, he supported Soviet positions but still frequently criticized Soviet policies. Most of his writing of the 1950s deals with literary and political problems. Sartre rejected the 1964 Nobel Prize in literature, explaining that to accept such an award would compromise his integrity as a writer.

Sartre's philosophic works combine the phenomenology of the German philosopher Edmund Husserl, the metaphysics of the German philosophers Georg Wilhelm Friedrich Hegel and Martin Heidegger, and the social theory of Karl Marx into a single view called existentialism. This view, which relates philosophical theory to life, literature, psychology, and political action, stimulated so much popular interest that existentialism became a worldwide movement.

In his early philosophic work, Being and Nothingness, Sartre conceived humans as beings who create their own world by rebelling against authority and by accepting personal responsibility for their actions, unaided by society, traditional morality, or religious faith. Distinguishing between human existence and the nonhuman world, he maintained that human existence is characterized by nothingness, that is, by the capacity to negate and rebel. His theory of existential psychoanalysis asserted the inescapable responsibility of all individuals for their own decisions and made the recognition of one's absolute freedom of choice the necessary condition for authentic human existence. His plays and novels express the belief that freedom and acceptance of personal responsibility are the main values in life and that individuals must rely on their creative powers rather than on social or religious authority.

In his later philosophic work Critique of Dialectical Reason (1960; trans. 1976), Sartre's emphasis shifted from existentialist freedom and subjectivity to Marxist social determinism. Sartre argued that the influence of modern society over the individual is so great as to produce serialization, by which he meant loss of self. Individual power and freedom can only be regained through group revolutionary action. Despite this exhortation to revolutionary political activity, Sartre himself did not join the Communist Party, thus retaining the freedom to criticize the Soviet invasions of Hungary in 1956 and Czechoslovakia in 1968. He died in Paris, April 15, 1980.

Søren Aabye Kierkegaard (1813-1855), Danish religious philosopher, whose concern with individual existence, choice, and commitment profoundly influenced modern theology and philosophy, especially existentialism.

Søren Kierkegaard wrote of the paradoxes of Christianity and the faith required to reconcile them. In his book Fear and Trembling, Kierkegaard discusses Genesis 22, in which God commands Abraham to kill his only son, Isaac. Although God made an unreasonable and immoral demand, Abraham obeyed without trying to understand or justify it. Kierkegaard regards this “leap of faith” as the essence of Christianity.

Kierkegaard was born in Copenhagen on May 15, 1813. His father was a wealthy merchant and strict Lutheran, whose gloomy, guilt-ridden piety and vivid imagination strongly influenced Kierkegaard. Kierkegaard studied theology and philosophy at the University of Copenhagen, where he encountered Hegelian philosophy (see below) and reacted strongly against it. While at the university, he ceased to practice Lutheranism and for a time led an extravagant social life, becoming a familiar figure in the theatrical and café society of Copenhagen. After his father's death in 1838, however, he decided to resume his theological studies. In 1840 he became engaged to the 17-year-old Regine Olsen, but almost immediately he began to suspect that marriage was incompatible with his own brooding, complicated nature and his growing sense of a philosophical vocation. He abruptly broke off the engagement in 1841, but the episode took on great significance for him, and he repeatedly alluded to it in his books. At the same time, he realized that he did not want to become a Lutheran pastor. An inheritance from his father allowed him to devote himself entirely to writing, and in the remaining 14 years of his life he produced more than 20 books.

Kierkegaard's work is deliberately unsystematic and consists of essays, aphorisms, parables, fictional letters and diaries, and other literary forms. Many of his works were originally published under pseudonyms. He applied the term existential to his philosophy because he regarded philosophy as the expression of an intensely examined individual life, not as the construction of a monolithic system in the manner of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, whose work he attacked in Concluding Unscientific Postscript (1846; trans. 1941). Hegel claimed to have achieved a complete rational understanding of human life and history; Kierkegaard, on the other hand, stressed the ambiguity and paradoxical nature of the human situation. The fundamental problems of life, he contended, defy rational, objective explanation; the highest truth is subjective.

Kierkegaard maintained that systematic philosophy not only imposed a false perspective on human existence but that it also, by explaining life in terms of logical necessity, becomes a means of avoiding choice and responsibility. Individuals, he believed, create their own natures through their choices, which must be made in the absence of universal, objective standards. The validity of a choice can only be determined subjectively.

In his first major work, Either/Or (1843; trans. 1944), Kierkegaard described two spheres, or stages of existence, that the individual may choose: the aesthetic and the ethical. The aesthetic way of life is a refined hedonism, consisting of a search for pleasure and a cultivation of mood. The aesthetic individual constantly seeks variety and novelty in an effort to stave off boredom but eventually must confront boredom and despair. The ethical way of life involves an intense, passionate commitment to duty, to unconditional social and religious obligations. In his later works, such as Stages on Life's Way (1845; trans. 1940), Kierkegaard discerned in this submission to duty a loss of individual responsibility, and he proposed a third stage, the religious, in which one submits to the will of God but in doing so finds authentic freedom. In Fear and Trembling (1843; trans. 1941) Kierkegaard focussed on God's command that Abraham sacrifice his son Isaac (Genesis 22:1-19), an act that violates Abraham's ethical convictions. Abraham proves his faith by resolutely setting out to obey God's command, even though he cannot understand it. This “suspension of the ethical,” as Kierkegaard called it, allows Abraham to achieve an authentic commitment to God. To avoid ultimate despair, the individual must make a similar “leap of faith” into a religious life, which is inherently paradoxical, mysterious, and full of risk. One is called to it by the feeling of dread (The Concept of Dread, 1844; trans. 1944), which is ultimately a fear of nothingness.

Toward the end of his life Kierkegaard was involved in bitter controversies, especially with the established Danish Lutheran church, which he regarded as worldly and corrupt. His later works, such as The Sickness Unto Death (1849; trans. 1941), reflect an increasingly sombre view of Christianity, emphasizing suffering as the essence of authentic faith. He also intensified his attack on modern European society, which he denounced in The Present Age (1846; trans. 1940) for its lack of passion and for its quantitative values. The stress of his prolific writing and of the controversies in which he engaged gradually undermined his health; in October 1855 he fainted in the street, and he died in Copenhagen on November 11, 1855.

Kierkegaard's influence was at first confined to Scandinavia and to German-speaking Europe, where his work had a strong impact on Protestant theology and on such writers as the 20th-century Austrian novelist Franz Kafka. As existentialism emerged as a general European movement after World War II, Kierkegaard's work was widely translated, and he was recognized as one of the seminal figures of modern culture.

Maurice Merleau-Ponty was an existentialist philosopher, whose phenomenological studies of the role of the body in perception and society opened a new field of philosophical investigation. He taught at the University of Lyon, at the Sorbonne, and, after 1952, at the Collège de France. His first important work was The Structure of Behavior (1942; trans. 1963), a critique of behaviourism. His major work, Phenomenology of Perception (1945; trans. 1962), is a detailed study of perception, influenced by the German philosopher Edmund Husserl's phenomenology and by Gestalt psychology. In it, he argues that science presupposes an original and unique perceptual relation to the world that cannot be explained or even described in scientific terms. This book can be viewed as a critique of cognitivism, the view that the working of the human mind can be understood in terms of rules or programs. It is also a telling critique of the existentialism of his contemporary, Jean-Paul Sartre, showing how human freedom is never total, as Sartre claimed, but is limited by our embodiment.

With Sartre and Simone de Beauvoir, Merleau-Ponty founded an influential postwar French journal, Les Temps Modernes. His brilliant and timely essays on art, film, politics, psychology, and religion, first published in this journal, were later collected in Sense and Nonsense (1948; trans. 1964). At the time of his death, he was working on a book, The Visible and the Invisible (1964; trans. 1968), arguing that the whole perceptual world has the sort of organic unity he had earlier attributed to the body and to works of art.

Analytic and Linguistic Philosophy is a 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and “Oxford philosophy.” The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language, or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.

A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.

By contrast, many philosophers associated with the movement have focussed on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.

Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato’s expression of ideas in the form of dialogues—the dialectical method, used most famously by his teacher Socrates—has led to difficulties in interpreting some of the finer points of his thoughts. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.

Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.

For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.

Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical views based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.

Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-Philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.

Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts—the propositions of science—are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.

Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism (see Positivism). Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle opened one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientist) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).

German philosopher Rudolf Carnap attempted to introduce the methodology and precision of mathematics into the study of philosophy. This approach is now known as logical positivism or logical empiricism.

The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depend altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.

The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.

This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.

Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate “systematically misleading expressions” in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.

Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.

Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, is needed in addition to logic in analysing ordinary language.

Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.

The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.

Born in Vienna on April 26, 1889, Wittgenstein was raised in a wealthy and cultured family. After attending schools in Linz and Berlin, he went to England to study engineering at the University of Manchester. His interest in pure mathematics led him to Trinity College, University of Cambridge, to study with Bertrand Russell. There he turned his attention to philosophy. By 1918 Wittgenstein had completed his Tractatus Logico-Philosophicus (1921; trans. 1922), a work he then believed provided the “final solution” to philosophical problems. Subsequently, he turned from philosophy and for several years taught elementary school in an Austrian village. In 1929 he returned to Cambridge to resume his work in philosophy and was appointed to the faculty of Trinity College. Soon he began to reject certain conclusions of the Tractatus and to develop the position reflected in his Philosophical Investigations (pub. posthumously 1953, trans. 1953). Wittgenstein retired in 1947; he died in Cambridge on April 29, 1951. A sensitive, intense man who often sought solitude and was frequently depressed, Wittgenstein abhorred pretense and was noted for his simple style of life and dress. The philosopher was forceful and confident in personality, however, and he exerted considerable influence on those with whom he came in contact.

Wittgenstein’s philosophical life may be divided into two distinct phases: an early period, represented by the Tractatus, and a later period, represented by the Philosophical Investigations. Throughout most of his life, however, Wittgenstein consistently viewed philosophy as linguistic or conceptual analysis. In the Tractatus he argued that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations, however, he maintained that “philosophy is a battle against the bewitchment of our intelligence by means of language.”

Language, Wittgenstein argued in the Tractatus, is composed of complex propositions that can be analysed into less complex propositions until one arrives at simple or elementary propositions. Correspondingly, the world is composed of complex facts that can be analysed into less complex facts until one arrives at simple, or atomic, facts. The world is the totality of these facts. According to Wittgenstein’s picture theory of meaning, it is the nature of elementary propositions logically to picture atomic facts, or “states of affairs.” He claimed that the nature of language required elementary propositions, and his theory of meaning required that there be atomic facts pictured by the elementary propositions. On this analysis, only propositions that picture facts (the propositions of science) are considered cognitively meaningful. Metaphysical and ethical statements are not meaningful assertions. The logical positivists associated with the Vienna Circle were greatly influenced by this conclusion.

Wittgenstein came to believe, however, that the narrow view of language reflected in the Tractatus was mistaken. In the Philosophical Investigations he argued that if one actually looks to see how language is used, the variety of linguistic usage becomes clear. Words are like tools, and just as tools serve different functions, so linguistic expressions serve many functions. Although some propositions are used to picture facts, others are used to command, question, pray, thank, curse, and so on. This recognition of linguistic flexibility and variety led to Wittgenstein’s concept of a language game and to the conclusion that people play different language games. The scientist, for example, is involved in a different language game than the theologian. Moreover, the meaning of a proposition must be understood in terms of its context, that is, in terms of the rules of the game of which that proposition is a part. The key to the resolution of philosophical puzzles is the therapeutic process of examining and describing language in use.

Semantics (Greek semantikos, “significant”) is the study of the meaning of linguistic signs, that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as “What is the meaning of (the word) X?” They do this by studying what signs are, as well as how signs possess significance, that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs—what they stand for—with the process of assigning those meanings.

Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, and an approach known as general semantics. Philosophers look at the behaviour that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.

These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviourists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as “the victor at Jena” and “the loser at Waterloo,” both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.

In the late 19th century Michel Jules Alfred Bréal, a French philologist, proposed a “science of significations” that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism (see Analytic and Linguistic Philosophy).

One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analysing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).

An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition—a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign “the moon is a sphere” is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing it refers to—the moon—is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
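The pairing of signs with truth conditions can be made concrete in a few lines of code. This is a minimal sketch only: the world model, the sign strings, and the `truth_value` helper are all invented for illustration and belong to no real semantic toolkit.

```python
# A tiny model of the world: facts we take as observed.
world = {"moon_shape": "sphere"}

# Rules of meaning: each interpreted sign maps to its truth condition,
# a predicate over the world model.
truth_conditions = {
    "the moon is a sphere": lambda w: w["moon_shape"] == "sphere",
    "the moon is a cube": lambda w: w["moon_shape"] == "cube",
}

def truth_value(sign, w):
    """Evaluate a sign's truth condition against the world model."""
    return truth_conditions[sign](w)

print(truth_value("the moon is a sphere", world))  # True
print(truth_value("the moon is a cube", world))    # False
```

The point the sketch makes is structural: the sign is understood (it has a truth condition) whether or not it is true; truth is settled only by checking the condition against the world.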

The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs—by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favour of his “ordinary language” philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.

From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in “the moon is a sphere”); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs, that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.

What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic: it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than to signs and their designations (the semantic aspect) or to formal relations among signs (the syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone—independent of a speaker and hearer.

Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analysing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs), often nominal arguments (noun phrases), or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression “Bill gives Mary the book,” “gives” is an operator that relates the arguments “Bill,” “Mary,” and “the book.”

Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, and making distinctions among entities, relations, or actions. For example, “kiss” belongs to an expression class with other items such as “hit” and “see,” as well as to the conventional part of speech “verb,” in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In “Mary kissed John,” the syntactic role of “kiss” is to relate two nominal arguments (“Mary” and “John”), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, “John” has the same semantic role, to identify a person, in the following two sentences: “John is easy to please” and “John is eager to please.” The syntactic role of “John” in the two sentences, however, is different: In the first, “John” is the receiver of an action; in the second, “John” is the actor.
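The operator-and-arguments view described above can be sketched as a small data structure. The `Proposition` class and its field names are hypothetical, chosen only to make the pattern visible.

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    """A sign modeled as an operator applied to nominal arguments."""
    operator: str       # e.g. "gives" -- the sign that relates the arguments
    arguments: tuple    # nominal arguments such as noun phrases

    def arity(self):
        """How many arguments the operator requires."""
        return len(self.arguments)

# "Bill gives Mary the book": "gives" relates three nominal arguments.
p = Proposition("gives", ("Bill", "Mary", "the book"))

# "Mary kissed John": "kiss" is a two-argument operator (actor, receiver).
q = Proposition("kiss", ("Mary", "John"))

print(p.arity())  # 3
print(q.arity())  # 2
```

The arity of the operator is exactly what the article means by a "subclass of operators requiring two arguments": "kiss" and "hit" share an arity of two, while "gives" requires three.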

Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs (usually single words as vocabulary items, called lexemes) in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain “seat” in English, the lexemes “chair,” “sofa,” “loveseat,” and “bench” can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning “something on which to sit.”
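Componential analysis of the “seat” domain can be illustrated with feature bundles. The feature names and values below are invented for the sketch; a real analysis would motivate its features empirically.

```python
# Each lexeme in the "seat" domain described as a bundle of semantic
# features. Feature names (seats, back_support, ...) are illustrative.
seat_domain = {
    "chair":    {"for_sitting": True, "seats": 1, "back_support": True},
    "sofa":     {"for_sitting": True, "seats": 3, "back_support": True},
    "loveseat": {"for_sitting": True, "seats": 2, "back_support": True},
    "bench":    {"for_sitting": True, "seats": 3, "back_support": False},
}

def shared_features(domain):
    """Features with the same value across every lexeme in the domain."""
    lexemes = list(domain.values())
    return {
        feat: val for feat, val in lexemes[0].items()
        if all(lex.get(feat) == val for lex in lexemes[1:])
    }

def distinguishing_features(a, b, domain):
    """Features on which two lexemes of the domain differ."""
    return {feat for feat in domain[a] if domain[a][feat] != domain[b][feat]}

print(shared_features(seat_domain))                          # {'for_sitting': True}
print(distinguishing_features("sofa", "bench", seat_domain)) # {'back_support'}
```

The shared feature plays the role of the common component “something on which to sit,” while the per-pair differences (capacity, back support) are the distinctive components that keep the lexemes apart.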

Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.

Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as “Colourless green ideas sleep furiously”), although grammatical expressions, are meaningless—semantically blocked. The rules must also account for how a sentence such as “They passed the port at midnight” can have at least two interpretations.

Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence “Colourless green ideas sleep furiously” has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as “They passed the port at midnight”), one decides which meaning applies.
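The idea of a semantic component that blocks grammatical but meaningless strings can be sketched as a toy selectional-restriction check. The lexicon and feature labels below are entirely invented; this illustrates the general mechanism, not any particular generative theory.

```python
# A toy lexicon: nouns carry semantic features; modifiers and verbs
# select the features their arguments must have. All labels invented.
LEXICON = {
    "ideas":  {"features": {"abstract"}},
    "people": {"features": {"concrete", "animate"}},
    "green":  {"selects": {"concrete"}},   # a colour needs a physical bearer
    "sleep":  {"selects": {"animate"}},    # sleeping needs an animate subject
}

def semantically_blocked(selector, noun):
    """True if the noun lacks a feature the selecting word requires."""
    required = LEXICON[selector]["selects"]
    present = LEXICON[noun]["features"]
    return not required <= present  # blocked unless all requirements are met

print(semantically_blocked("green", "ideas"))   # True: "green ideas" is blocked
print(semantically_blocked("sleep", "ideas"))   # True: ideas cannot sleep
print(semantically_blocked("sleep", "people"))  # False: fully interpretable
```

Because both "green ideas" and "ideas sleep" fail their checks, the sentence is flagged as meaningless even though its syntax is impeccable, which is just the behaviour the semantic component is supposed to capture.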

In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech, that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.

Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.

Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure); or whether it should be seen as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this field, it is possible, in a syntactically based theory, for surface structure and deep structure jointly to determine the semantic interpretation of an expression.

The focus of general semantics is how people evaluate words and how that evaluation influences their behaviour. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigour, and the approach has declined in popularity.

Quine (1908-2000) was an American philosopher known for his work in mathematical logic and his contributions to a pragmatic theory of knowledge. Born in Akron, Ohio, Quine was educated at Oberlin College and at Harvard University, where he became a member of the faculty in 1936.

Quine became noted for his claim that the way one uses language determines what kinds of things one is committed to saying exist. Moreover, the justification for speaking one way rather than another, just as the justification for adopting one conceptual system rather than another, was a thoroughly pragmatic one for Quine. He also became known for his criticism of the traditional distinction between synthetic statements (empirical, or factual, propositions) and analytic statements (necessarily true propositions). Quine made major contributions in set theory, a branch of mathematical logic concerned with the relationship between classes.

Pragmatism is a philosophical movement that has had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.

American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.

The Association for International Conciliation first published William James’s pacifist statement, “The Moral Equivalent of War,” in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism, a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution. Spelling and grammar represent the standards of the time.

Pragmatists regarded all theories and institutions as tentative hypotheses and solutions. For this reason they believed that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning, in particular, the meaning of concepts used in science. The meaning of the concept “brittle,” for example, is given by the observed consequences or properties that objects called “brittle” exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasized the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life, morality and religious belief, for example, are leaps of faith. As such, they depend upon what he called “the will to believe” and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist, someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian worldview in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has renewed in the classic pragmatists (Peirce, James, and Dewey) as an alternative to Rorty’s interpretation of the tradition.

In an ever-changing world, pragmatism has many benefits. It defends social experimentation as a means of improving society, accepts pluralism, and rejects dead dogmas. But a philosophy that offers no final answers or absolutes and that appears vague as a result of trying to harmonize opposites may also be unsatisfactory to some.

An axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: “No sentence can be true and false at the same time” (the principle of contradiction); “If equals are added to equals, the sums are equal”; “The whole is greater than any of its parts.” Logic and pure mathematics begin with such unproved assumptions from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regression in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.
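The principle of contradiction mentioned above can be checked mechanically: a propositional formula is a tautology if it comes out true under every assignment of truth values to its variables. The following short Python sketch (the function and variable names are illustrative, not standard terminology) enumerates all assignments and confirms that “P and not P” is never true.

```python
from itertools import product

def holds_in_all_assignments(formula, variables):
    """Return True if `formula` evaluates to True under every
    possible truth assignment to `variables` (i.e. it is a tautology)."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([True, False], repeat=len(variables))
    )

# "No sentence can be true and false at the same time":
non_contradiction = lambda env: not (env["P"] and not env["P"])

print(holds_in_all_assignments(non_contradiction, ["P"]))  # True
```

A plain assertion such as `lambda env: env["P"]` would fail the same check, since it is false when P is false; this is the sense in which the principle of contradiction, unlike an ordinary claim, needs no proof within the system.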

The terms axiom and postulate are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.

Semantics is the study of the meaning of linguistic signs, that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as “What is the meaning of (the word) X?” They do this by studying what signs are, as well as how signs possess significance, that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs (what they stand for) with the process of assigning those meanings.

Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, and an approach known as general semantics. Philosophers look at the behaviour that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.

These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviorists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as “the victor at Jena” and “the loser at Waterloo,” both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.

In the late 19th century Michel Jules Alfred Bréal, a French philologist, proposed a “science of significations” that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism (see Analytic and Linguistic Philosophy).

German philosopher Rudolf Carnap attempted to introduce the methodology and precision of mathematics into the study of philosophy. This approach is now known as logical positivism or logical empiricism.

One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analysing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).

An object language has a speaker (for example, a French woman) using expressions (such as la plume rouge) to designate a meaning (in this case, to indicate a definite pen, plume, of the colour red, rouge). The full description of an object language in symbols is called the semiotic of that language. A language's semiotic has the following aspects: (1) a semantic aspect, in which signs (words, expressions, sentences) are given specific designations; (2) a pragmatic aspect, in which the contextual relations between speakers and signs are indicated; and (3) a syntactic aspect, in which formal relations among the elements within signs (for example, among the sounds in a sentence) are indicated.

An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition, a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign “the moon is a sphere” is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing it extends to (the moon) is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
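The idea of an interpreted sign, an expression paired with a rule of meaning, can be made concrete in a short sketch. Everything below (the `Sign` class, the toy `world` dictionary, the feature names) is an illustrative assumption, not part of any standard formalism; the point is only that a sign carries a truth condition that is understood independently of whether it is actually satisfied.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sign:
    expression: str                           # e.g. "the moon is a sphere"
    designation: str                          # what the sign designates
    truth_condition: Callable[[dict], bool]   # must hold for the sign to be true

# A toy "world" against which truth conditions are checked.
world = {"moon": {"shape": "sphere"}}

moon_sign = Sign(
    expression="the moon is a sphere",
    designation="moon",
    truth_condition=lambda w: w["moon"]["shape"] == "sphere",
)

# The sign is understood (its truth condition is known) whether or not
# that condition is satisfied; satisfaction is settled by checking the world.
print(moon_sign.truth_condition(world))  # True in this toy world
```

Checking the same sign against a world in which the moon were a disc would yield False, which mirrors the point in the text: one must look at the moon, not at the sign, to determine its truth value.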

The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs, by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favour of his “ordinary language” philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.

From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in “the moon is a sphere”); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs, that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.

What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic: that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone, independent of a speaker and hearer.

Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analysing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs), often nominal arguments (noun phrases), or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression “Bill gives Mary the book,” “gives” is an operator that relates the arguments “Bill,” “Mary,” and “the book.”
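The operator-argument structure described above can be sketched directly. The representation below is an illustrative assumption (a minimal data structure, not an established linguistic formalism): the operator “gives” is stored together with the nominal arguments it relates.

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    operator: str     # the relating sign, e.g. "gives"
    arguments: tuple  # the nominal arguments it combines

def analyse(subject, verb, *objects):
    """Build an operator-argument structure for a simple clause."""
    return Proposition(operator=verb, arguments=(subject, *objects))

p = analyse("Bill", "gives", "Mary", "the book")
print(p.operator)   # gives
print(p.arguments)  # ('Bill', 'Mary', 'the book')
```

The same structure accommodates one-argument operators ("Mary sleeps") and two-argument operators ("Mary kissed John"), which is why propositional analysis is useful for languages where the noun/verb/preposition distinction is not clear-cut.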

Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, and making distinctions among entities, relations, or actions. For example, “kiss” belongs to an expression class with other items such as “hit” and “see,” as well as to the conventional part of speech “verb,” in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In “Mary kissed John,” the syntactic role of “kiss” is to relate two nominal arguments (“Mary” and “John”), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, “John” has the same semantic role, to identify a person, in the following two sentences: “John is easy to please” and “John is eager to please.” The syntactic role of “John” in the two sentences, however, is different: In the first, “John” is the receiver of an action; in the second, “John” is the actor.

Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs (usually single words, as vocabulary items called lexemes) in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain “seat” in English, the lexemes “chair,” “sofa,” “loveseat,” and “bench” can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning “something on which to sit.”
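The “seat” domain lends itself to a small worked sketch of componential analysis. The feature names below are illustrative assumptions chosen to match the text (number of people accommodated, presence of a back support); a real analysis would justify its feature inventory empirically.

```python
# Each lexeme in the semantic domain "seat" is a bundle of features.
seat_domain = {
    "chair":    {"for sitting", "seats one",  "has back"},
    "sofa":     {"for sitting", "seats many", "has back"},
    "loveseat": {"for sitting", "seats two",  "has back"},
    "bench":    {"for sitting", "seats many"},
}

# The component shared by every lexeme in the domain:
shared = set.intersection(*seat_domain.values())

def distinctive(lexeme):
    """Features of `lexeme` beyond the domain's shared core."""
    return seat_domain[lexeme] - shared

print(shared)                 # {'for sitting'}
print(distinctive("chair"))   # what sets "chair" apart in this domain
```

The intersection recovers the common component (“something on which to sit”), while the set differences recover exactly the distinctive features that separate one lexeme from another.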

Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.

Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as “Colourless green ideas sleep furiously”), although grammatical expressions, are meaningless, or semantically blocked. The rules must also account for how a sentence such as “They passed the port at midnight” can have at least two interpretations.

Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence “Colourless green ideas sleep furiously” has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as “They passed the port at midnight”), one decides which meaning applies.
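One way a semantic component could “block” a grammatical sentence is through selectional restrictions: each word carries semantic features, and an operator constrains the features of its arguments. The lexicon and rule below are a toy illustration under assumed features, not a model any particular grammarian proposed.

```python
# Toy lexicon: nouns carry feature sets; verbs state what they require
# of their subject. Feature choices here are illustrative assumptions.
LEXICON = {
    "ideas": {"abstract"},
    "dogs":  {"animate", "concrete"},
    "sleep": {"requires": {"animate"}},  # only animate things can sleep
}

def semantically_well_formed(subject, verb):
    """Check the verb's selectional restriction against its subject."""
    required = LEXICON[verb]["requires"]
    return required <= LEXICON[subject]   # subset test

print(semantically_well_formed("dogs", "sleep"))   # True
print(semantically_well_formed("ideas", "sleep"))  # False: blocked
```

“Ideas sleep” fails because “ideas” lacks the animate feature that “sleep” demands, even though the sentence is syntactically impeccable, which is the intuition behind rejecting “Colourless green ideas sleep furiously” at first hearing.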

In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech, that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.

Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.

Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure) or whether it should be seen as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this field, it is possible, in a syntactically based theory, for surface structure and deep structure jointly to determine the semantic interpretation of an expression.

The focus of general semantics is how people evaluate words and how that evaluation influences their behaviour. Begun by the Polish-American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigour, and the approach has declined in popularity.

Analytic and linguistic philosophy is a 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and “Oxford philosophy.” The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language, or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.

A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.

By contrast, many philosophers associated with the movement have focussed on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.

Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato’s expression of ideas in the form of dialogues—the dialectical method, used most famously by his teacher Socrates—has led to difficulties in interpreting some of the finer points of his thoughts. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.

Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.

For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.

Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. This logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.

Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-Philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.

Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts (the propositions of science) are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.

Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).

The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements whose truth or falsity depends altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.

The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.

This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.

Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate “systematically misleading expressions” in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.

Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered. Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, is needed in addition to logic in analysing ordinary language.

Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.

The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.

The German philosopher Martin Heidegger greatly influenced the modern philosophy movements of phenomenology and existentialism. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of Being (Heidegger's term for that which underlies all existence).

Heidegger was born in Messkirch, Baden. He studied Roman Catholic theology and then philosophy at the University of Freiburg, where he was an assistant to Edmund Husserl, the founder of phenomenology. Heidegger began teaching at Freiburg in 1915. From 1923 to 1928 he taught at Marburg University. He then returned to Freiburg in 1928, inheriting Husserl's position as professor of philosophy. Because of his public support of Adolf Hitler and the Nazi Party in 1933 and 1934, Heidegger's professional activities were restricted in 1945, and controversy surrounded his university standing until his retirement in 1959.

German philosopher Martin Heidegger was instrumental in the development of the 20th-century philosophical school of existential phenomenology, which examines the relationship between phenomena and individual consciousness. His inquiries into the meaning of “authentic” or “inauthentic” existence greatly influenced a broad range of thinkers, including French existentialist Jean-Paul Sartre. Author Michael Inwood explores Heidegger’s key concept of Dasein, or “being,” which was first expounded in his major work Being and Time (1927).

Besides Husserl, Heidegger was especially influenced by the pre-Socratics (see Greek Philosophy; Philosophy), by Danish philosopher Søren Kierkegaard, and by German philosopher Friedrich Nietzsche. In developing his theories, Heidegger rejected traditional philosophic terminology in favour of an individual interpretation of the works of past thinkers. He applied original meanings and etymologies to individual words and expressions, and coined hundreds of new, complex words. In his most important and influential work, Sein und Zeit (Being and Time, 1927), Heidegger was concerned with what he considered the essential philosophical question: What is it, to be? This led to the question of what kind of “being” human beings have. They are, he said, thrown into a world that they have not made but that consists of potentially useful things, including cultural as well as natural objects. Because these objects come to humanity from the past and are used in the present for the sake of future goals, Heidegger posited a fundamental relation between the mode of being of objects, of humanity, and of the structure of time.

The individual is, however, always in danger of being submerged in the world of objects, everyday routine, and the conventional, shallow behaviour of the crowd. The feeling of dread (Angst) brings the individual to a confrontation with death and the ultimate meaninglessness of life, but only in this confrontation can an authentic sense of Being and of freedom be attained.

After 1930, Heidegger turned, in such works as Einführung in die Metaphysik (An Introduction to Metaphysics, 1953), to the interpretation of particular Western conceptions of being. He felt that, in contrast to the reverent ancient Greek conception of being, modern technological society has fostered a purely manipulative attitude that has deprived Being and human life of meaning—a condition he called nihilism. Humanity has forgotten its true vocation and must recover the deeper understanding of Being (achieved by the early Greeks and lost by subsequent philosophers) to be receptive to new understandings of Being.

Heidegger's original treatment of such themes as human finitude, death, nothingness, and authenticity led many observers to associate him with existentialism, and his work had a crucial influence on French existentialist Jean-Paul Sartre. Heidegger, however, eventually repudiated existentialist interpretations of his work. His thought directly influenced the work of French philosophers Michel Foucault and Jacques Derrida and of German sociologist Jürgen Habermas. Since the 1960s his influence has spread beyond continental Europe and has had an increasing impact on philosophy in English-speaking countries worldwide.

Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as German philosopher G. W. F. Hegel. Instead, Kierkegaard focussed on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; trans. 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham demonstrate his faith by sacrificing his son.

One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that “God is dead,” or that traditional morality was no longer relevant in people’s lives. In this passage, the sage Zarathustra came down from the mountain where he had spent the last ten years alone to preach to the people.

Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the “death of God” and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.

Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis—in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology (see Metaphysics) as well as on language.

Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre’s work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that “man is condemned to be free,” Sartre reminds us of the responsibility that accompanies human decisions.

Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a “futile passion.” Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.

Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially that a personal sense of authenticity and commitment is essential to religious faith.

Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels that probed the motivations and moral justifications for his characters’ actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky’s best work, interlaces religious exploration with the story of a family’s violent quarrels over a woman and a disputed inheritance.

Twentieth-century writer and philosopher Albert Camus examined what he considered the tragic inability of human beings to understand and transcend their intolerable conditions. In his work Camus presented an absurd and seemingly unreasonable world in which some people futilely struggle to find meaning and rationality while others simply refuse to care. For example, the main character of The Stranger (1942) kills a man on a beach for no reason and accepts his arrest and punishment with dispassion. In contrast, in The Plague (1947), Camus introduces characters who act with courage in the face of absurdity.

A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), “We must love life more than the meaning of it.”

The opening lines of Russian novelist Fyodor Dostoyevsky’s Notes from Underground (1864)—“I am a sick man…. I am a spiteful man”—are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground is a sign of Dostoyevsky’s rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader’s sense of morality as well as the foundations of rational thinking. In this excerpt from the beginning of the novel, the narrator describes himself, derisively referring to himself as an “overly conscious” intellectual.

In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; trans. 1937) and The Castle (1926; trans. 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theatre of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.

Maurice Merleau-Ponty was an existentialist philosopher whose phenomenological studies of the role of the body in perception and society opened a new field of philosophical investigation. He taught at the University of Lyon, at the Sorbonne, and, after 1952, at the Collège de France. His first important work was The Structure of Behaviour (1942; trans. 1963), a critique of behaviourism. His major work, Phenomenology of Perception (1945; trans. 1962), is a detailed study of perception, influenced by the German philosopher Edmund Husserl's phenomenology and by Gestalt psychology. In it, he argues that science presupposes an original and unique perceptual relation to the world that cannot be explained or even described in scientific terms. This book can be viewed as a critique of cognitivism—the view that the working of the human mind can be understood in terms of rules or programs. It is also a telling critique of the existentialism of his contemporary, Jean-Paul Sartre, showing how human freedom is never total, as Sartre claimed, but is limited by our embodiment.

With Sartre and Simone de Beauvoir, Merleau-Ponty founded an influential postwar French journal, Les Temps Modernes. His brilliant and timely essays on art, film, politics, psychology, and religion, first published in this journal, were later collected in Sense and Nonsense (1948; trans. 1964). At the time of his death, he was working on a book, The Visible and the Invisible (1964; trans. 1968), arguing that the whole perceptual world has the sort of organic unity he had earlier attributed to the body and to works of art.

Semantics (Greek semantikos, “significant”), the study of the meaning of linguistic signs—that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as “What is the meaning of (the word) X?” They do this by studying what signs are, as well as how signs possess significance—that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs—what they stand for—with the process of assigning those meanings.

Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, and an approach known as general semantics. Philosophers look at the behaviour that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.

These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviourists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as “the victor at Jena” and “the loser at Waterloo,” both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.

In the late 19th century Michel Jules Alfred Bréal, a French philologist, proposed a “science of significations” that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism (see Analytic and Linguistic Philosophy).

The German philosopher Rudolf Carnap attempted to introduce the methodology and precision of mathematics into the study of philosophy. This approach is now known as logical positivism or logical empiricism.

One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analysing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).

An object language has a speaker (for example, a French woman) using expressions (such as la plume rouge) to designate a meaning (in this case, to specify a definite pen, plume, of the colour red, rouge). The full description of an object language in symbols is called the semiotic of that language. A language's semiotic has the following aspects: (1) a semantic aspect, in which signs (words, expressions, sentences) are given specific designations; (2) a pragmatic aspect, in which the contextual relations between speakers and signs are indicated; and (3) a syntactic aspect, in which formal relations among the elements within signs (for example, among the sounds in a sentence) are indicated.

An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition—a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign “the moon is a sphere” is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing it extends to—the moon—is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
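As a rough illustration, the idea of a truth condition can be sketched in code: a sign paired with a condition that is evaluated against a toy model of the world. All the names here (World, Sign, moon_shape) are invented for illustration and do not come from any semantic formalism.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class World:
    """A toy 'state of the world' against which signs are checked."""
    moon_shape: str

@dataclass
class Sign:
    expression: str
    truth_condition: Callable[[World], bool]  # met iff the sign is true

    def truth_value(self, world: World) -> bool:
        # "Looking at the moon for oneself": evaluate the condition
        # against the world model.
        return self.truth_condition(world)

sign = Sign("the moon is a sphere", lambda w: w.moon_shape == "sphere")
print(sign.truth_value(World(moon_shape="sphere")))  # True
print(sign.truth_value(World(moon_shape="cube")))    # False
```

The sign is understood (its truth condition is known) independently of whether it is true; truth is settled only by consulting the world model.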

The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs—by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favour of his “ordinary language” philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.

From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in “the moon is a sphere”); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs—that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.

What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic—that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone—independent of a speaker and hearer.

Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analysing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs)—often nominal arguments (noun phrases)—or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression “Bill gives Mary the book,” “gives” is an operator that relates the arguments “Bill,” “Mary,” and “the book.”
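The operator-argument analysis described above lends itself to a small data-structure sketch. The representation below is illustrative only, not a standard notation from descriptive semantics.

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    """A sign analysed as an operator relating nominal arguments."""
    operator: str           # e.g. the verb "gives"
    arguments: tuple        # the nominal arguments it combines with

# "Bill gives Mary the book": "gives" relates three arguments.
prop = Proposition(operator="gives",
                   arguments=("Bill", "Mary", "the book"))
print(prop.operator, "relates:", ", ".join(prop.arguments))
```

On this view the same structure can describe languages with or without a sharp noun/verb distinction, since only the operator-argument relation is recorded.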

Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, and making distinctions among entities, relations, or actions. For example, “kiss” belongs to an expression class with other items such as “hit” and “see,” as well as to the conventional part of speech “verb,” in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In “Mary kissed John,” the syntactic role of “kiss” is to relate two nominal arguments (“Mary” and “John”), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, “John” has the same semantic role—to identify a person—in the following two sentences: “John is easy to please” and “John is eager to please.” The syntactic role of “John” in the two sentences, however, is different: In the first, “John” is the receiver of an action; in the second, “John” is the actor.

Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs—usually single words as vocabulary items called lexemes—in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain “seat” in English, the lexemes “chair,” “sofa,” “loveseat,” and “bench” can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning “something on which to sit.”
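The componential analysis of the “seat” domain can be sketched as sets of semantic features; the feature names below are invented for illustration, and a real analysis would justify each feature empirically.

```python
# Lexemes in the domain "seat", each a bundle of semantic features.
seat_domain = {
    "chair":    {"for_sitting", "seats_one", "has_back"},
    "sofa":     {"for_sitting", "seats_several", "has_back"},
    "loveseat": {"for_sitting", "seats_two", "has_back"},
    "bench":    {"for_sitting", "seats_several"},  # no back support
}

# The component shared by every lexeme in the domain:
common = set.intersection(*seat_domain.values())
print(common)  # {'for_sitting'}

# The distinctive feature separating "sofa" from "bench":
print(seat_domain["sofa"] - seat_domain["bench"])  # {'has_back'}
```

Set intersection recovers the domain-wide component (“something on which to sit”), while set difference isolates the distinctive features that keep the lexemes apart.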

Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.

Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as “Colourless green ideas sleep furiously”), although grammatical expressions, are meaningless—semantically blocked. The rules must also account for how a sentence such as “They passed the port at midnight” can have at least two interpretations.

Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence “Colourless green ideas sleep furiously” has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as “They passed the port at midnight”), one decides which meaning applies.

In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech—that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.

Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.

Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure) or as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this debate, it is possible—in a syntactically based theory—for surface structure and deep structure jointly to determine the semantic interpretation of an expression.

The focus of general semantics is how people evaluate words and how that evaluation influences their behaviour. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigour, and the approach has declined in popularity.

Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.

For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.

Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical view based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.

Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), Wittgenstein first presented his theory of language, arguing that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts—the propositions of science—are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.

Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism (see Positivism). Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle began one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).

The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.

The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.

This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.

Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate “systematically misleading expressions” in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.

Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.

Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, is needed in addition to logic in analysing ordinary language.

Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.

The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.

Linguistics, the scientific study of language, encompasses the description of languages, the study of their origin, and the analysis of how children acquire language and how people learn languages other than their own. Linguistics is also concerned with relationships between languages and with the ways languages change over time. Linguists may study language as a thought process and seek a theory that accounts for the universal human capacity to produce and understand language. Some linguists examine language within a cultural context. By observing talk, they try to determine what a person needs to know in order to speak appropriately in different settings, such as the workplace, among friends, or among family. Other linguists focus on what happens when speakers from different language and cultural backgrounds interact. Linguists may also concentrate on how to help people learn another language, using what they know about the learner’s first language and about the language being acquired.

Although there are many ways of studying language, most approaches belong to one of the two main branches of linguistics: descriptive linguistics and comparative linguistics.

Descriptive linguistics is the study and analysis of spoken language. The techniques of descriptive linguistics were devised by German American anthropologist Franz Boas and American linguist and anthropologist Edward Sapir in the early 1900s to record and analyze Native American languages. Descriptive linguistics begins with what a linguist hears native speakers say. By listening to native speakers, the linguist gathers a body of data and analyzes it in order to identify distinctive sounds, called phonemes. Individual phonemes, such as /p/ and /b/, are established on the grounds that substitution of one for the other changes the meaning of a word. After identifying the entire inventory of sounds in a language, the linguist looks at how these sounds combine to create morphemes, or units of sound that carry meaning, such as the words push and bush. Morphemes may be individual words such as push; root words, such as berry in blueberry; or prefixes (pre- in preview) and suffixes (-ness in openness).
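The minimal-pair test just described can be sketched as a small procedure: two transcriptions of equal length that differ in exactly one segment, yet name different words, establish those segments as distinct phonemes. A minimal sketch in Python, with an invented toy lexicon (the transcriptions are illustrative, not IPA):

```python
# Toy minimal-pair test: transcriptions that differ in exactly one
# segment but name different words establish those segments as phonemes.
# The lexicon below is a small invented example.
lexicon = {
    ("p", "u", "sh"): "push",
    ("b", "u", "sh"): "bush",
    ("p", "i", "n"): "pin",
    ("b", "i", "n"): "bin",
}

def minimal_pairs(lexicon):
    pairs = []
    words = list(lexicon)
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            if len(w1) == len(w2):
                diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
                if len(diffs) == 1:          # differ in exactly one segment
                    pairs.append((lexicon[w1], lexicon[w2], diffs[0]))
    return pairs

for word1, word2, (s1, s2) in minimal_pairs(lexicon):
    print(f"{word1}/{word2}: /{s1}/ and /{s2}/ are distinct phonemes")
```

A working field linguist faces complications this sketch ignores, such as allophones whose distribution depends on context.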

The linguist’s next step is to see how morphemes combine into sentences, obeying both the dictionary meaning of the morpheme and the grammatical rules of the sentence. In the sentence “She pushed the bush,” the morpheme she, a pronoun, is the subject; push, a transitive verb, is the verb; the, a definite article, is the determiner; and bush, a noun, is the object. Knowing the function of the morphemes in the sentence enables the linguist to describe the grammar of the language. The scientific procedures of phonemics (finding phonemes), morphology (discovering morphemes), and syntax (describing the order of morphemes and their function) provide descriptive linguists with a way to write down grammars of languages never before written down or analysed. In this way they can begin to study and understand these languages.
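The breakdown of “She pushed the bush” into morphemes with dictionary meanings and grammatical functions can be sketched as a toy analyzer; the lexicon and the single suffix rule are invented for illustration:

```python
# Toy morphological/syntactic breakdown of "She pushed the bush".
# Lexicon entries and the suffix rule are illustrative only.
LEXICON = {
    "she": ("pronoun", "subject"),
    "push": ("transitive verb", "verb"),
    "the": ("definite article", "determiner"),
    "bush": ("noun", "object"),
}
SUFFIXES = {"ed": "past tense"}

def analyze(sentence):
    analysis = []
    for token in sentence.lower().rstrip(".").split():
        # strip a known suffix to recover the root morpheme
        root, suffix = token, None
        for s in SUFFIXES:
            if token.endswith(s) and token[:-len(s)] in LEXICON:
                root, suffix = token[:-len(s)], s
        pos, function = LEXICON[root]
        analysis.append((root, suffix, pos, function))
    return analysis

for root, suffix, pos, function in analyze("She pushed the bush."):
    marker = f" + -{suffix} ({SUFFIXES[suffix]})" if suffix else ""
    print(f"{root}{marker}: {pos}, {function}")
```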

Comparative linguistics is the study and analysis, by means of written records, of the origins and relatedness of different languages. In 1786 Sir William Jones, a British scholar, asserted that Sanskrit, Greek, and Latin were related to one another and had descended from a common source. He based this assertion on observations of similarities in sounds and meanings among the three languages. For example, the Sanskrit word bhratar for “brother” resembles the Latin word frater, the Greek word phrater (and the English word brother).
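The resemblance among bhratar, frater, phrater, and brother can be quantified crudely with an edit-distance measure. This is a modern computational convenience for illustration, not Jones's method, which rested on systematic sound correspondences:

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance:
    # minimum number of insertions, deletions, and substitutions.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

cognates = {"Sanskrit": "bhratar", "Latin": "frater", "Greek": "phrater"}
for lang, word in cognates.items():
    print(lang, word, edit_distance(word, "brother"))
```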

Other scholars went on to compare Icelandic with Scandinavian languages, and Germanic languages with Sanskrit, Greek, and Latin. The correspondences among languages, known as genetic relationships, came to be represented on what comparative linguists refer to as family trees. Family trees established by comparative linguists include the Indo-European, relating Sanskrit, Greek, Latin, German, English, and other Asian and European languages; the Algonquian, relating Fox, Cree, Menomini, Ojibwa, and other Native North American languages; and the Bantu, relating Swahili, Xhosa, Zulu, Kikuyu, and other African languages.

Comparative linguists also look for similarities in the way words are formed in different languages. Latin and English, for example, change the form of a word to express different meanings, as when the English verb go changes to went and gone to express past actions. Chinese, on the other hand, has no such inflected forms; the verb remains the same while other words indicate the time (as in “go store tomorrow”). In Swahili, prefixes, suffixes, and infixes (additions in the body of the word) combine with a root word to change its meaning. For example, a single word might express when something was done, by whom, to whom, and in what manner.

Some comparative linguists reconstruct hypothetical ancestral languages known as proto-languages, which they use to demonstrate relatedness among contemporary languages. A proto-language is not intended to depict a real language, however, and does not represent the speech of ancestors of people speaking modern languages. Unfortunately, some groups have mistakenly used such reconstructions in efforts to demonstrate the ancestral homeland of a people.

Comparative linguists have suggested that certain basic words in a language do not change over time, because people are reluctant to introduce new words for such constants as arm, eye, or mother. These words are termed culture-free. By comparing lists of culture-free words in languages within a family, linguists can derive the percentage of related words and use a formula to figure out when the languages separated from one another.
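The dating formula alluded to here is the one used in glottochronology (Swadesh's method): if c is the fraction of shared core vocabulary and r the assumed retention rate per millennium (conventionally about 0.805), the separation time in millennia is t = ln c / (2 ln r). A minimal sketch, with the input figure purely illustrative:

```python
import math

def divergence_time(shared_fraction, retention=0.805):
    """Glottochronology estimate: millennia since two related languages
    separated, given the fraction c of shared culture-free (core) words.
    t = ln(c) / (2 * ln(r)), with r the per-millennium retention rate."""
    return math.log(shared_fraction) / (2 * math.log(retention))

# e.g. two languages sharing 70% of their core vocabulary
print(round(divergence_time(0.70), 2), "millennia since separation")
```

The method is controversial precisely because, as the next paragraph notes, real retention rates are neither constant nor contact-free.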

By the 1960s comparativists were no longer satisfied with focussing on origins, migrations, and the family tree method. They challenged as unrealistic the notion that an earlier language could remain sufficiently isolated for other languages to be derived exclusively from it over a period of time. Today comparativists seek to understand the more complicated reality of language history, taking language contact into account. They are concerned with universal characteristics of language and with comparisons of grammars and structures.

The field of linguistics both borrows from and lends its own theories and methods to other disciplines. The many subfields of linguistics have expanded our understanding of languages. Linguistic theories and methods are also used in other fields of study. These overlapping interests have led to the creation of several cross-disciplinary fields.

Sociolinguistics is the study of patterns and variations in language within a society or community. It focuses on the way people use language to express social class, group status, gender, or ethnicity, and it looks at how they make choices about the form of language they use. It also examines the way people use language to negotiate their role in society and to achieve positions of power. For example, sociolinguistic studies have found that the way a New Yorker pronounces the phoneme /r/ in an expression such as “fourth floor” can indicate the person’s social class. According to one study, people aspiring to move from the lower middle class to the upper middle class attach prestige to pronouncing the /r/. Sometimes they even overcorrect their speech, pronouncing an /r/ where those whom they wish to copy may not.

Some sociolinguists believe that analysing such variables as the use of a particular phoneme can predict the direction of language change. Change, they say, moves toward the variable associated with power, prestige, or other quality having high social value. Other sociolinguists focus on what happens when speakers of different languages interact. This approach to language change emphasizes the way languages mix rather than the direction of change within a community. The goal of sociolinguistics is to understand communicative competence—what people need to know to use the appropriate language for a given social setting.

Psycholinguistics merges the fields of psychology and linguistics to study how people process language and how language use is related to underlying mental processes. Studies of children’s language acquisition and of second-language acquisition are psycholinguistic in nature. Psycholinguists work to develop models for how language is processed and understood, using evidence from studies of what happens when these processes go awry. They also study language disorders such as aphasia (impairment of the ability to use or comprehend words) and dyslexia (impairment of the ability to make out written language).

Computational linguistics involves the use of computers to compile linguistic data, analyze languages, translate from one language to another, and develop and test models of language processing. Linguists use computers and large samples of actual language to analyze the relatedness and the structure of languages and to look for patterns and similarities. Computers also aid in stylistic studies, information retrieval, various forms of textual analysis, and the construction of dictionaries and concordances. Applying computers to language studies has resulted in machine translation systems and machines that recognize and produce speech and text. Such machines facilitate communication with humans, including those who are perceptually or linguistically impaired.
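Of the applications listed, a concordance is the easiest to sketch: an index from each word to its occurrences in context (a keyword-in-context listing). A minimal version, with an invented sample sentence:

```python
from collections import defaultdict

def concordance(text, width=3):
    """Map each word to a list of its contexts (width words either side)."""
    words = text.lower().split()
    entries = defaultdict(list)
    for i, w in enumerate(words):
        left = " ".join(words[max(0, i - width):i])
        right = " ".join(words[i + 1:i + 1 + width])
        entries[w].append(f"{left} [{w}] {right}".strip())
    return entries

sample = "the linguist studies the structure of the language"
for context in concordance(sample)["the"]:
    print(context)
```

Real concordancing tools add tokenization, lemmatization, and frequency statistics on top of this basic index.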

Applied linguistics employs linguistic theory and methods in teaching and in research on learning a second language. Linguists look at the errors people make as they learn another language and at their strategies for communicating in the new language at different degrees of competence. In seeking to understand what happens in the mind of the learner, applied linguists recognize that motivation, attitude, learning style, and personality affect how well a person learns another language.

The massive extinction of human languages that has taken place worldwide over the last few centuries has had enormous consequences for the richness of the world's cultural heritage.

Anthropological linguistics, also known as linguistic anthropology, uses linguistic approaches to analyze culture. Anthropological linguists examine the relationship between a culture and its language, the way cultures and languages have changed over time, and how different cultures and languages are related to one another. For example, the present English use of family and given names arose in the late 13th and early 14th centuries when the laws concerning registration, tenure, and inheritance of property were changed.

Philosophical linguistics examines the philosophy of language. Philosophers of language search for the grammatical principles and tendencies that all human languages share. Among the concerns of linguistic philosophers is the range of possible word order combinations throughout the world. One finding is that 95 percent of the world’s languages use a subject-verb-object (SVO) order as English does (“She pushed the bush.”). Only 5 percent use a subject-object-verb (SOV) order or verb-subject-object (VSO) order.

Neurolinguistics is the study of how language is processed and represented in the brain. Neurolinguists seek to identify the parts of the brain involved with the production and understanding of language and to determine where the components of language (phonemes, morphemes, and structure or syntax) are stored. In doing so, they make use of techniques for analysing the structure of the brain and the effects of brain damage on language.

Speculation about language goes back thousands of years. Ancient Greek philosophers speculated on the origins of language and the relationship between objects and their names. They also discussed the rules that govern language, or grammar, and by the 3rd century BC they had begun grouping words into parts of speech and devising names for different forms of verbs and nouns.

In India religion provided the motivation for the study of language nearly 2500 years ago. Hindu priests noted that the language they spoke had changed since the compilation of their ancient sacred texts, the Vedas, starting about 1000 BC. They believed that for certain religious ceremonies based upon the Vedas to succeed, they needed to reproduce the language of the Vedas precisely. Panini, an Indian grammarian who lived about 400 BC, produced the earliest work describing the rules of Sanskrit, the ancient language of India.

The Romans used Greek grammars as models for their own, adding commentary on Latin style and usage. Statesman and orator Marcus Tullius Cicero wrote on rhetoric and style in the 1st century BC. Later grammarians Aelius Donatus (4th century AD) and Priscian (6th century AD) produced detailed Latin grammars. Roman works served as textbooks and standards for the study of language for more than 1000 years.

It was not until the end of the 18th century that language was researched and studied in a scientific way. During the 17th and 18th centuries, modern languages, such as French and English, replaced Latin as the means of universal communication in the West. This occurrence, along with developments in printing, meant that many more texts became available. At about this time, the study of phonetics, or the sounds of a language, began. Such investigations led to comparisons of sounds in different languages; in the late 18th century the observation of correspondences among Sanskrit, Latin, and Greek gave birth to the field of Indo-European linguistics.

During the 19th century, European linguists focussed on philology, or the historical analysis and comparison of languages. They studied written texts and looked for changes over time or for relationships between one language and another.

American linguist, writer, teacher, and political activist Noam Chomsky is considered the founder of transformational-generative linguistic analysis, which revolutionized the field of linguistics. This system of linguistics treats grammar as a theory of language—that is, Chomsky believes that in addition to the rules of grammar specific to individual languages, there are universal rules common to all languages that indicate that the ability to form and understand language is innate to all human beings. Chomsky also is well known for his political activism: he opposed United States involvement in Vietnam in the 1960s and 1970s and has written various books and articles and delivered many lectures in an attempt to educate and empower people on various political and social issues.

In the early 20th century, linguistics expanded to include the study of unwritten languages. In the United States linguists and anthropologists began to study the rapidly disappearing spoken languages of Native North Americans. Because many of these languages were unwritten, researchers could not use historical analysis in their studies. In their pioneering research on these languages, anthropologists Franz Boas and Edward Sapir developed the techniques of descriptive linguistics and theorized on the ways in which language shapes our perceptions of the world.

An important outgrowth of descriptive linguistics is a theory known as structuralism, which assumes that language is a system with a highly organized structure. Structuralism began with publication of the work of Swiss linguist Ferdinand de Saussure in Cours de linguistique générale (1916; Course in General Linguistics, 1959). This work, compiled by Saussure’s students after his death, is considered the foundation of the modern field of linguistics. Saussure made a distinction between actual speech, or spoken language, and the knowledge underlying speech that speakers share about what is grammatical. Speech, he said, represents instances of grammar, and the linguist’s task is to find the underlying rules of a particular language from examples found in speech. To the structuralist, grammar is a set of relationships that account for speech, rather than a set of instances of speech, as it is to the descriptivists.

Once linguists began to study language as a set of abstract rules that somehow account for speech, other scholars began to take an interest in the field. They drew analogies between language and other forms of human behaviour, based on the belief that a shared structure underlies many aspects of a culture. Anthropologists, for example, became interested in a structuralist approach to the interpretation of kinship systems and analysis of myth and religion. American linguist Leonard Bloomfield promoted structuralism in the United States.

Saussure’s ideas also influenced European linguistics, most notably in France and Czechoslovakia (now the Czech Republic). In 1926 Czech linguist Vilem Mathesius founded the Linguistic Circle of Prague, a group that expanded the focus of the field to include the context of language use. The Prague circle developed the field of phonology, or the study of sounds, and demonstrated that universal features of sounds in the languages of the world interrelate in a systematic way. Linguistic analysis, they said, should focus on the distinctiveness of sounds rather than on the ways they combine. Where descriptivists tried to locate and describe individual phonemes, such as /b/ and /p/, the Prague linguists stressed the features of these phonemes and their interrelationships in different languages. In English, for example, voicing distinguishes between the similar sounds of /b/ and /p/, but these are not distinct phonemes in a number of other languages. An Arabic speaker might pronounce the cities Pompeii and Bombay the same way.
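The Prague-school shift from individual phonemes to their features can be sketched by treating each phoneme as a bundle of feature values; /b/ and /p/ then differ in voicing alone. The feature inventory below is drastically simplified:

```python
# Simplified distinctive-feature bundles for two English phonemes.
FEATURES = {
    "b": {"place": "bilabial", "manner": "stop", "voiced": True},
    "p": {"place": "bilabial", "manner": "stop", "voiced": False},
}

def contrast(p1, p2):
    """Return the set of features on which two phonemes differ."""
    return {f for f in FEATURES[p1] if FEATURES[p1][f] != FEATURES[p2][f]}

print(contrast("b", "p"))
```

In a language without the voicing contrast, the two bundles collapse into one phoneme, which is why the difference can go unheard.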

As linguistics developed in the 20th century, the notion became prevalent that language is more than speech: specifically, that it is an abstract system of interrelationships shared by members of a speech community. Structural linguistics led linguists to look at the rules and the patterns of behaviour shared by such communities. Whereas structural linguists saw the basis of language in the social structure, other linguists looked at language as a mental process.

The 1957 publication of Syntactic Structures by American linguist Noam Chomsky initiated what many view as a scientific revolution in linguistics. Chomsky sought a theory that would account for both linguistic structure and the creativity of language—the fact that we can create entirely original sentences and understand sentences never before uttered. He proposed that all people have an innate ability to acquire language. The task of the linguist, he claimed, is to describe this universal human ability, known as language competence, with a grammar from which the grammars of all languages could be derived. The linguist would develop this grammar by looking at the rules children use in hearing and speaking their first language. He termed the resulting model, or grammar, a transformational-generative grammar, referring to the transformations (or rules) that generate (or account for) language. Certain rules, Chomsky asserted, are shared by all languages and form part of a universal grammar, while others are language specific and associated with particular speech communities. Since the 1960s much of the development in the field of linguistics has been a reaction to or against Chomsky’s theories.
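The sense in which rules "generate" language can be illustrated with a toy context-free grammar: enumerating its derivations yields every sentence of a tiny English fragment. The rules and lexicon are invented, and a real transformational-generative grammar is far richer (its language is infinite):

```python
import itertools

# A toy generative grammar: rewrite rules that derive every sentence
# of a small fragment of English. Rules and lexicon are illustrative only.
RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Pro"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "N":   [["bush"], ["linguist"]],
    "V":   [["pushed"]],
    "Pro": [["she"]],
}

def generate(symbol):
    """Yield every terminal word sequence the grammar derives from `symbol`."""
    if symbol not in RULES:            # terminal word
        yield [symbol]
        return
    for expansion in RULES[symbol]:
        # expand each child symbol, then combine the alternatives
        child_options = [list(generate(s)) for s in expansion]
        for combo in itertools.product(*child_options):
            yield [w for part in combo for w in part]

sentences = sorted(" ".join(s) for s in generate("S"))
for s in sentences:
    print(s)
```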

At the end of the 20th century, linguists used the term grammar primarily to refer to a subconscious linguistic system that enables people to produce and comprehend an unlimited number of utterances. Grammar thus accounts for our linguistic competence. Observations about the actual language we use, or language performance, are used to theorize about this invisible mechanism known as grammar.

The orientation toward the scientific study of language led by Chomsky has had an impact on nongenerative linguists as well. Comparative and historically oriented linguists are looking for the various ways linguistic universals show up in individual languages. Psycholinguists, interested in language acquisition, are investigating the notion that an ideal speaker-hearer is the origin of the acquisition process. Sociolinguists are examining the rules that underlie the choice of language variants, or codes, and allow for switching from one code to another. Some linguists are studying language performance—the way people use language—to see how it reveals a cognitive ability shared by all human beings. Others seek to understand animal communication within such a framework. What mental processes enable chimpanzees to make signs and communicate with one another and how do these processes differ from those of humans?

Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In this article from a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.

The overwhelming question in neurobiology today is the relation between the mind and the brain. Everyone agrees that what we know as mind is closely related to certain aspects of the behaviour of the brain, not to the heart, as Aristotle thought. Its most mysterious aspect is consciousness or awareness, which can take many forms, from the experience of pain to self-consciousness. In the past the mind (or soul) was often regarded, as it was by Descartes, as something immaterial, separate from the brain but interacting with it in some way. A few neuroscientists, such as Sir John Eccles, still assert that the soul is distinct from the body. But most neuroscientists now believe that all aspects of mind, including its most puzzling attribute, consciousness or awareness, are likely to be explainable in a more materialistic way as the behaviour of large sets of interacting neurons. As William James, the father of American psychology, said a century ago, consciousness is not a thing but a process.

Exactly what the process is, however, has yet to be discovered. For many years after James penned The Principles of Psychology, consciousness was a taboo concept in American psychology because of the dominance of the behaviorist movement. With the advent of cognitive science in the mid-1950s, it became possible once more for psychologists to consider mental processes as opposed to merely observing behaviour. In spite of these changes, until recently most cognitive scientists ignored consciousness, as did almost all neuroscientists. The problem was felt to be either purely "philosophical" or too elusive to study experimentally. It would not have been easy for a neuroscientist to get a grant just to study consciousness.

In our opinion, such timidity is ridiculous, so a few years ago we began to think about how best to attack the problem scientifically. How to explain mental events as caused by the firing of large sets of neurons? Although there are those who believe such an approach is hopeless, we feel it is not productive to worry too much over aspects of the problem that cannot be solved scientifically or, more precisely, cannot be solved solely by using existing scientific ideas. Radically new concepts may indeed be needed—recall the modifications of scientific thinking forced on us by quantum mechanics. The only sensible approach is to press the experimental attack until we are confronted with dilemmas that call for new ways of thinking.

There are many possible approaches to the problem of consciousness. Some psychologists feel that any satisfactory theory should try to explain as many aspects of consciousness as possible, including emotion, imagination, dreams, mystical experiences and so on. Although such an all-embracing theory will be necessary in the long run, we thought it wiser to begin with the particular aspect of consciousness that is likely to yield most easily. What this aspect may be is a matter of personal judgment. We selected the mammalian visual system because humans are very visual animals and because so much experimental and theoretical work has already been done on it.

It is not easy to grasp exactly what we need to explain, and it will take many careful experiments before visual consciousness can be described scientifically. We did not attempt to define consciousness itself because of the dangers of premature definition. (If this seems like a cop-out, try defining the word "gene"; you will not find it easy.) Yet the experimental evidence that already exists provides enough of a glimpse of the nature of visual consciousness to guide research. In this article, we will attempt to show how this evidence opens the way to attack this profound and intriguing problem.

Visual theorists agree that the problem of visual consciousness is ill posed. The mathematical term "ill posed" means that additional constraints are needed to solve the problem. Although the main function of the visual system is to perceive objects and events in the world around us, the information available to our eyes is not sufficient by itself to provide the brain with its unique interpretation of the visual world. The brain must use past experience (either its own or that of our distant ancestors, which is embedded in our genes) to help interpret the information coming into our eyes. An example would be the derivation of the three-dimensional representation of the world from the two-dimensional signals falling onto the retinas of our two eyes or even onto one of them.
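One concrete version of the 3-D-from-2-D derivation mentioned above is binocular stereo: for an idealized pair of pinhole eyes, the depth of a point follows from the disparity between its positions in the two retinal images. A minimal sketch of the standard triangulation formula, with the numbers purely invented:

```python
def depth_from_disparity(focal_length, baseline, disparity):
    """Idealized stereo triangulation: Z = f * B / d.
    focal_length and disparity in the same (pixel) units,
    baseline in metres; returns depth in metres."""
    return focal_length * baseline / disparity

# e.g. f = 800 px, eyes 6.5 cm apart, a feature displaced 20 px
print(depth_from_disparity(800, 0.065, 20), "m")
```

The hard part the brain solves, and this sketch assumes away, is the correspondence problem: deciding which feature in one image matches which in the other.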

Visual theorists also would agree that seeing is a constructive process, one in which the brain has to carry out complex activities (sometimes called computations) in order to decide which interpretation to adopt of the ambiguous visual input. "Computation" implies that the brain acts to form a symbolic representation of the visual world, with a mapping (in the mathematical sense) of certain aspects of that world onto elements in the brain.

Ray Jackendoff of Brandeis University postulates, as do most cognitive scientists, that the computations carried out by the brain are largely unconscious and that what we become aware of is the result of these computations. But while the customary view is that this awareness occurs at the highest levels of the computational system, Jackendoff has proposed an intermediate-level theory of consciousness.

What we see, Jackendoff suggests, relates to a representation of surfaces that are directly visible to us, together with their outline, orientation, colour, texture and movement. (This idea has similarities to what the late David C. Marr of the Massachusetts Institute of Technology called a "2½-dimensional sketch." It is more than a two-dimensional sketch because it conveys the orientation of the visible surfaces. It is less than three-dimensional because depth information is not explicitly represented.) In the next stage this sketch is processed by the brain to produce a three-dimensional representation. Jackendoff argues that we are not visually aware of this three-dimensional representation.
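The content of such an intermediate-level representation can be caricatured as a data structure (a hypothetical sketch; the field names are our invention, not Marr's or Jackendoff's): each visible surface patch carries outline, orientation, colour, texture and motion, but no explicit depth.

```python
# A hypothetical record for one visible surface patch in a "2.5-D sketch":
# the orientation of the surface is explicit, but absolute depth is not stored.
from dataclasses import dataclass

@dataclass
class SurfacePatch:
    outline: list          # 2-D image coordinates of the patch boundary
    orientation: tuple     # surface normal direction, e.g. (slant, tilt)
    colour: str
    texture: str
    motion: tuple          # image-plane velocity (dx, dy)
    # note: no 'depth' field; depth is only implicit at this level
```

A later processing stage would have to derive a full three-dimensional model from many such patches.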

An example may make this process clearer. If you look at a person whose back is turned to you, you can see the back of the head but not the face. Nevertheless, your brain infers that the person has a face. We can deduce as much because if that person turned around and had no face, you would be very surprised.

The viewer-centred representation that corresponds to the visible back of the head is what you are vividly aware of. What your brain infers about the front would come from some kind of three-dimensional representation. This does not mean that information flows only from the surface representation to the three-dimensional one; it almost certainly flows in both directions. When you imagine the front of the face, what you are aware of is a surface representation generated by information from the three-dimensional model.

It is important to distinguish between an explicit and an implicit representation. An explicit representation is something that is symbolized without further processing. An implicit representation contains the same information but requires further processing to make it explicit. The pattern of coloured dots on a television screen, for example, contains an implicit representation of objects (say, a person's face), but only the dots and their locations are explicit. When you see a face on the screen, there must be neurons in your brain whose firing, in some sense, symbolizes that face.
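The television-screen example can be made concrete with a toy program (an illustration of ours, not from the article): a pixel grid contains a vertical bar only implicitly, and a processing step is needed to make the bar explicit as a symbol.

```python
# Implicit vs. explicit representation, in miniature: the grid below contains
# a vertical bar, but only the dots and their locations are explicit. A
# processing step turns the implicit bar into an explicit symbol.

screen = [
    "..#..",
    "..#..",
    "..#..",
]

def make_explicit(grid):
    """Scan the grid and return explicit symbols for any full-height columns."""
    cols = range(len(grid[0]))
    return [("vertical_bar", c) for c in cols
            if all(row[c] == "#" for row in grid)]

print(make_explicit(screen))   # [('vertical_bar', 2)]
```

The raw dots and the symbol carry the same information; the difference is how much further processing is needed to use it.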

We call this pattern of firing neurons an active representation. A latent representation of a face must also be stored in the brain, probably as a special pattern of synaptic connections between neurons. For example, you probably have a representation of the Statue of Liberty in your brain, a representation that is usually inactive. If you do think about the Statue, the representation becomes active, with the relevant neurons firing away.

An object, incidentally, may be represented in more than one way—as a visual image, as a set of words and their related sounds, or even as a touch or a smell. These different representations are likely to interact with one another. The representation is likely to be distributed over many neurons, both locally and more globally. Such a representation may not be as simple and straightforward as uncritical introspection might indicate. There is suggestive evidence, partly from studying how neurons fire in various parts of a monkey's brain and partly from examining the effects of certain types of brain damage in humans, that different aspects of a face—and of the implications of a face—may be represented in different parts of the brain.

First, there is the representation of a face as a face: two eyes, a nose, a mouth and so on. The neurons involved are usually not too fussy about the exact size or position of this face in the visual field, nor are they very sensitive to small changes in its orientation. In monkeys, there are neurons that respond best when the face is turning in a particular direction, while others seem to be more concerned with the direction in which the eyes are gazing.

Then there are representations of the parts of a face, as separate from those for the face as a whole. Further, the implications of seeing a face, such as that person's sex, the facial expression, the familiarity or unfamiliarity of the face, and in particular whose face it is, may each be correlated with neurons firing in other places.

What we are aware of at any moment, in one sense or another, is not a simple matter. We have suggested that there may be a very transient form of fleeting awareness that represents only rather simple features and does not require an attentional mechanism. From this brief awareness the brain constructs a viewer-centred representation—what we see vividly and clearly—that does require attention. This in turn probably leads to three-dimensional object representations and thence to more cognitive ones.

Representations corresponding to vivid consciousness are likely to have special properties. William James thought that consciousness involved both attention and short-term memory. Most psychologists today would agree with this view. Jackendoff writes that consciousness is "enriched" by attention, implying that whereas attention may not be essential for certain limited types of consciousness, it is necessary for full consciousness. Yet it is not clear exactly which forms of memory are involved. Is long-term memory needed? Some forms of acquired knowledge are so embedded in the machinery of neural processing that they are almost certainly used in becoming aware of something. On the other hand, there is evidence from studies of brain-damaged patients that the ability to lay down new long-term episodic memories is not essential for consciousness to be experienced.

It is difficult to imagine that anyone could be conscious if he or she had no memory whatsoever of what had just happened, even an extremely short one. Visual psychologists talk of iconic memory, which lasts for a fraction of a second, and working memory (such as that used to remember a new telephone number) that lasts for only a few seconds unless it is rehearsed. It is not clear whether both of these are essential for consciousness. In any case, the division of short-term memory into these two categories may be too crude.

If these complex processes of visual awareness are localized in parts of the brain, which processes are likely to be where? Many regions of the brain may be involved, but it is almost certain that the cerebral neocortex plays a dominant role. Visual information from the retina reaches the neocortex mainly by way of a part of the thalamus (the lateral geniculate nucleus); another significant visual pathway from the retina is to the superior colliculus, at the top of the brain stem.

The cortex in humans consists of two intricately folded sheets of nerve tissue, one on each side of the head. These sheets are connected by a large tract of about half a billion axons called the corpus callosum. It is well known that if the corpus callosum is cut, as is done for certain cases of intractable epilepsy, one side of the brain is not aware of what the other side is seeing. In particular, the left side of the brain (in a right-handed person) appears not to be aware of visual information received exclusively by the right side. This shows that none of the information required for visual awareness can reach the other side of the brain by travelling down to the brain stem and, from there, back up. In a normal person, such information can get to the other side only by using the axons in the corpus callosum.

A different part of the brain—the hippocampal system—is involved in one-shot, or episodic, memories, which over weeks and months it passes on to the neocortex. This system is so placed that it receives inputs from, and projects to, many parts of the brain. Thus, one might suspect that the hippocampal system is the essential seat of consciousness. This is not the case: evidence from studies of patients with damaged brains shows that this system is not essential for visual awareness, although naturally a patient lacking one is severely handicapped in everyday life because he cannot remember anything that took place more than a minute or so in the past.

In broad terms, the neocortex of alert animals probably acts in two ways. By building on crude and somewhat redundant wiring, produced by our genes and by embryonic processes, the neocortex draws on visual and other experience to slowly "rewire" itself to create categories (or "features") it can respond to. A new category is not fully created in the neocortex after exposure to only one example of it, although some small modifications of the neural connections may be made.

The second function of the neocortex (at least of the visual part of it) is to respond extremely rapidly to incoming signals. To do so, it uses the categories it has learned and tries to find the combinations of active neurons that, on the basis of its past experience, are most likely to represent the relevant objects and events in the visual world at that moment. The formation of such coalitions of active neurons may also be influenced by biases coming from other parts of the brain: for example, signals telling it what best to attend to or high-level expectations about the nature of the stimulus.

Consciousness, as James noted, is always changing. These rapidly formed coalitions occur at different levels and interact to form even broader coalitions. They are transient, lasting usually for only a fraction of a second. Because coalitions in the visual system are the basis of what we see, evolution has seen to it that they form as fast as possible; otherwise, no animal could survive. The brain is handicapped in forming neuronal coalitions rapidly because, by computer standards, neurons act very slowly. The brain compensates for this relative slowness partly by using very many neurons, simultaneously and in parallel, and partly by arranging the system in a roughly hierarchical manner.

If visual awareness at any moment corresponds to sets of neurons firing, then the obvious question is: Where are these neurons located in the brain, and in what way are they firing? Visual awareness is highly unlikely to occupy all the neurons in the neocortex that are firing above their background rate at a particular moment. We would expect that, theoretically, at least some of these neurons would be involved in doing computations to arrive at the best coalitions, whereas others would express the results of these computations, in other words, what we see.

Fortunately, some experimental evidence can be found to back up this theoretical conclusion. A phenomenon called binocular rivalry may help identify the neurons whose firing symbolizes awareness. This phenomenon can be seen in dramatic form in an exhibit prepared by Sally Duensing and Bob Miller at the Exploratorium in San Francisco.

Binocular rivalry occurs when each eye has a different visual input relating to the same part of the visual field. The early visual system on the left side of the brain receives an input from both eyes but sees only the part of the visual field to the right of the fixation point. The converse is true for the right side. If these two conflicting inputs are rivalrous, one sees not the two inputs superimposed but first one input, then the other, and so on in alternation.

In this exhibit, called "The Cheshire Cat," viewers put their heads in a fixed place and are told to keep their gaze fixed. By means of a suitably placed mirror, one of the eyes can look at another person's face, directly in front, while the other eye sees a blank white screen to the side. If the viewer waves a hand in front of this plain screen at the same location in his or her visual field occupied by the face, the face is wiped out. The movement of the hand, being visually very salient, has captured the brain's attention. Without attention the face cannot be seen. If the viewer moves the eyes, the face reappears.

In some cases, only part of the face disappears. Sometimes, for example, one eye, or both eyes, will remain. If the viewer looks at the smile on the person's face, the face may disappear, leaving only the smile. For this reason, the effect has been called the Cheshire Cat effect, after the cat in Lewis Carroll's Alice's Adventures in Wonderland.

Although it is very difficult to record activity in individual neurons in a human brain, such studies can be done in monkeys. A simple example of binocular rivalry has been studied in a monkey by Nikos K. Logothetis and Jeffrey D. Schall, both then at M.I.T. They trained a macaque to keep its eyes still and to signal whether it is seeing upward or downward movement of a horizontal grating. To produce rivalry, upward movement is projected into one of the monkey's eyes and downward movement into the other, so that the two images overlap in the visual field. The monkey signals that it sees up and down movements alternately, just as humans would. Even though the motion stimulus coming into the monkey's eyes is always the same, the monkey's percept changes every second or so.

Cortical area MT (which some researchers prefer to label V5) is an area mainly concerned with movement. What do the neurons in MT do when the monkey's percept is sometimes up and sometimes down? (The researchers studied only the monkey's first response.) The simplified answer—the actual data are rather more messy—is that whereas the firing of some of the neurons correlates with the changes in the percept, for others the average firing rate is relatively unchanged and independent of which direction of movement the monkey is seeing at that moment. Thus, it is unlikely that the firing of all the neurons in the visual neocortex at one particular moment corresponds to the monkey's visual awareness. Exactly which neurons do correspond to awareness remains to be discovered.

We have postulated that when we clearly see something, there must be neurons actively firing that stand for what we see. This might be called the activity principle. Here, too, there is some experimental evidence. One example is the firing of neurons in a specific cortical visual area in response to illusory contours. Another and perhaps more striking case is the filling in of the blind spot. The blind spot in each eye is caused by the lack of photoreceptors in the area of the retina where the optic nerve leaves the retina and projects to the brain. Its location is about 15 degrees from the fovea (the visual centre of the eye). Yet if you close one eye, you do not see a hole in your visual field.

The philosopher Daniel C. Dennett of Tufts University is unusual among philosophers in that he is interested both in psychology and in the brain. This interest is much to be welcomed. In a recent book, Consciousness Explained, he has argued that it is wrong to talk about filling in. He concludes, correctly, that "an absence of information is not the same as information about an absence." From this general principle he argues that the brain does not fill in the blind spot but rather ignores it.

Dennett's argument by itself, however, does not establish that filling in does not occur; it only suggests that it might not. Dennett also states that "your brain has no machinery for [filling in] at this location." This statement is incorrect. The primary visual cortex lacks a direct input from one eye, but normal "machinery" is there to deal with the input from the other eye. Ricardo Gattass and his colleagues at the Federal University of Rio de Janeiro have shown that in the macaque some of the neurons in the blind-spot area of the primary visual cortex do respond to input from both eyes, probably assisted by inputs from other parts of the cortex. Moreover, in the case of simple filling in, some of the neurons in that region respond as if they were actively filling in.

Thus, Dennett's claim about blind spots is incorrect. In addition, psychological experiments by Vilayanur S. Ramachandran [see "Blind Spots," Scientific American, May 1992] have shown that what is filled in can be quite complex depending on the overall context of the visual scene. How, he argues, can your brain be ignoring something that is in fact commanding attention?

Filling in, therefore, is not to be dismissed as nonexistent or unusual. It probably represents a basic interpolation process that can occur at many levels in the neocortex. It is, incidentally, a good example of what is meant by a constructive process.

How can we discover the neurons whose firing symbolizes a particular percept? William T. Newsome and his colleagues at Stanford University have done a series of brilliant experiments on neurons in cortical area MT of the macaque's brain. By studying a neuron in area MT, we may discover that it responds best to very specific visual features having to do with motion. A neuron, for instance, might fire strongly in response to the movement of a bar in a particular place in the visual field, but only when the bar is oriented at a certain angle, moving in one of the two directions perpendicular to its length within a certain range of speed.
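The selectivity described here can be caricatured with a toy tuning-curve model (our illustrative sketch, with invented parameter values, not Newsome's actual data): the hypothetical neuron's firing rate peaks at a preferred direction of motion and falls off for other directions.

```python
# Toy tuning curve for a hypothetical MT-like neuron: the firing rate is
# highest for a preferred direction of motion and falls off away from it.
import math

def firing_rate(direction_deg, preferred_deg=90.0, peak=50.0, width=30.0):
    """Gaussian tuning over motion direction (rate in spikes per second)."""
    # angular difference wrapped into [-180, 180)
    d = (direction_deg - preferred_deg + 180.0) % 360.0 - 180.0
    return peak * math.exp(-0.5 * (d / width) ** 2)

print(round(firing_rate(90.0)))    # strongest at the preferred direction: 50
print(round(firing_rate(180.0)))   # much weaker 90 degrees away: 1
```

Real MT neurons are jointly tuned to position, orientation and speed as well, but the one-dimensional version captures the idea of a preferred stimulus.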

It is technically difficult to excite just a single neuron, but it is known that neurons that respond to roughly the same position, orientation and direction of movement of a bar tend to be located near one another in the cortical sheet. The experimenters taught the monkey a simple task in movement discrimination using a mixture of dots, some moving randomly, the rest all in one direction. They showed that electrical stimulation of a small region in the right place in cortical area MT would bias the monkey's motion discrimination, almost always in the expected direction.

Thus, the stimulation of these neurons can influence the monkey's behaviour and probably its visual percept. Such experiments do not, however, show decisively that the firing of such neurons is the exact neural correlate of the percept. The correlate could be only a subset of the neurons being activated. Or perhaps the real correlate is the firing of neurons in another part of the visual hierarchy that are strongly influenced by the neurons activated in area MT.

These same reservations apply also to cases of binocular rivalry. Clearly, the problem of finding the neurons whose firing symbolizes a particular percept is not going to be easy. It will take many careful experiments to track them down even for one kind of percept.

It seems obvious that the purpose of vivid visual awareness is to feed into the cortical areas concerned with the implications of what we see; from there the information shuttles on the one hand to the hippocampal system, to be encoded (temporarily) into long-term episodic memory, and on the other to the planning levels of the motor system. But is it possible to go from a visual input to a behavioural output without any relevant visual awareness?

That such a process can happen is demonstrated by the remarkable class of patients with "blindsight." These patients, all of whom have suffered damage to their visual cortex, can point with fair accuracy at visual targets or track them with their eyes while vigorously denying seeing anything. In fact, these patients are as surprised as their doctors by their abilities. The amount of information that "gets through," however, is limited: blindsight patients have some ability to respond to wavelength, orientation and motion, yet they cannot distinguish a triangle from a square.

It is naturally of great interest to know which neural pathways are being used in these patients. Investigators originally suspected that the pathway ran through the superior colliculus. Recent experiments suggest that a direct although weak connection may be involved between the lateral geniculate nucleus and other visual areas in the cortex. It is unclear whether an intact primary visual cortex region is essential for immediate visual awareness. Conceivably the visual signal in blindsight is so weak that the neural activity cannot produce awareness, although it remains strong enough to get through to the motor system.

Normal-seeing people regularly respond to visual signals without being fully aware of them. In automatic actions, such as swimming or driving a car, complex but stereotypical actions occur with little, if any, associated visual awareness. In other cases, the information conveyed is either very limited or very attenuated. Thus, while we can function without visual awareness, our behaviour without it is rather restricted.

Clearly, it takes a certain amount of time to experience a conscious percept. It is difficult to determine just how much time is needed for an episode of visual awareness, but one aspect of the problem that can be demonstrated experimentally is that signals received close together in time are treated by the brain as simultaneous.

A disk of red light is flashed for, say, 20 milliseconds, followed immediately by a 20-millisecond flash of green light in the same place. The subject reports that he did not see a red light followed by a green light. Instead he saw a yellow light, just as he would have if the red and the green light had been flashed simultaneously. Yet the subject could not have experienced yellow until after the information from the green flash had been processed and integrated with the preceding red one.
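A deliberately simplistic model of this fusion (our sketch, not the experimenters' analysis): any flashes whose onsets fall inside one processing window are merged into a single percept, so red followed quickly by green is reported as yellow.

```python
# Toy temporal integration: flashes falling inside one processing window
# (roughly 60 ms, in the spirit of Efron's estimate) fuse into one percept.

WINDOW_MS = 60

def perceive(flashes):
    """flashes: list of (onset_ms, colour). Colours whose onsets fall within
    one integration window are fused; red plus green is reported as yellow."""
    window = {colour for onset, colour in flashes if onset < WINDOW_MS}
    if window == {"red", "green"}:
        return "yellow"
    (colour,) = window          # toy model: assume one remaining colour
    return colour

# 20 ms of red followed immediately by 20 ms of green, both inside the window:
print(perceive([(0, "red"), (20, "green")]))   # 'yellow'
```

Real temporal integration is far more graded than a hard cutoff, but the model shows why the subject cannot report yellow until after the green flash has been processed.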

Experiments of this type led psychologist Robert Efron, now at the University of California at Davis, to conclude that the processing period for perception is about 60 to 70 milliseconds. Similar periods are found in experiments with tones in the auditory system. It is always possible, however, that the processing times may be different in higher parts of the visual hierarchy and in other parts of the brain. Processing is also more rapid in trained, compared with naive, observers.

Because it appears to be involved in some forms of visual awareness, it would help if we could discover the neural basis of attention. Eye movement is a form of attention, since the area of the visual field in which we see with high resolution is remarkably small, roughly the area of the thumbnail at arm's length. Thus, we move our eyes to gaze directly at an object in order to see it more clearly. Our eyes usually move three or four times a second. Psychologists have shown, however, that there appears to be a faster form of attention that moves around, in some sense, when our eyes are stationary.

The exact psychological nature of this faster attentional mechanism is at present controversial. Several neuroscientists, however, including Robert Desimone and his colleagues at the National Institute of Mental Health, have shown that the rate of firing of certain neurons in the macaque's visual system depends on what the monkey is attending to in the visual field. Thus, attention is not solely a psychological concept; it also has neural correlates that can be observed. A number of researchers have found that the pulvinar, a region of the thalamus, appears to be involved in visual attention. We would like to believe that the thalamus deserves to be called "the organ of attention," but this status has yet to be established.

The major problem is to find what activity in the brain corresponds directly to visual awareness. It has been speculated that each cortical area produces awareness of only those visual features that are "columnar," or arranged in the stack or column of neurons perpendicular to the cortical surface. Thus, the primary visual cortex could code for orientation and area MT for motion. So far experimentalists have not found one particular region in the brain where all the information needed for visual awareness appears to come together. Dennett has dubbed such a hypothetical place "The Cartesian Theatre." He argues on theoretical grounds that it does not exist.

Awareness seems to be distributed not just on a local scale, but more widely over the neocortex. Vivid visual awareness is unlikely to be distributed over every cortical area because some areas show no response to visual signals. Awareness might, for example, be associated with only those areas that connect back directly to the primary visual cortex or alternatively with those areas that project into one another's layer 4. (The latter areas are always at the same level in the visual hierarchy.)

The key issue, then, is how the brain forms its global representations from visual signals. If attention is indeed crucial for visual awareness, the brain could form representations by attending to just one object at a time, rapidly moving from one object to the next. For example, the neurons representing all the different aspects of the attended object could all fire together very rapidly for a short period, possibly in rapid bursts.

This fast, simultaneous firing might not only excite those neurons that symbolized the implications of that object but also temporarily strengthen the relevant synapses so that this particular pattern of firing could be quickly recalled, a form of short-term memory. If only one representation needs to be held in short-term memory, as in remembering a single task, the neurons involved may continue to fire for a period.

A problem arises if it is necessary to be aware of more than one object at exactly the same time. If all the attributes of two or more objects were represented by neurons firing rapidly, their attributes might be confused. The colour of one might become attached to the shape of another. This happens sometimes in very brief presentations.

Some time ago Christoph von der Malsburg, now at the Ruhr-Universität Bochum, suggested that this difficulty would be circumvented if the neurons associated with any one object all fired in synchrony (that is, if their times of firing were correlated) but out of synchrony with those representing other objects. Recently two groups in Germany reported that there does appear to be correlated firing between neurons in the visual cortex of the cat, often in a rhythmic manner, with a frequency in the 35- to 75-hertz range, sometimes called 40-hertz, or γ (gamma), oscillation.
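The binding-by-synchrony idea can be illustrated with a toy simulation (ours, not von der Malsburg's formulation): spike trains are 0/1 vectors over small time bins, neurons for the same object fire in the same bins, and a simple coincidence measure separates within-object from between-object pairs.

```python
# Toy binding by synchrony: neurons representing the same object fire in
# correlated time bins; neurons representing different objects do not.

def coincidence(a, b):
    """Fraction of time bins in which both spike trains fire together."""
    return sum(x & y for x, y in zip(a, b)) / len(a)

# Object 1's neurons fire in even bins, object 2's neurons in odd bins.
obj1_neuron1 = [1, 0, 1, 0, 1, 0, 1, 0]
obj1_neuron2 = [1, 0, 1, 0, 1, 0, 1, 0]
obj2_neuron1 = [0, 1, 0, 1, 0, 1, 0, 1]

print(coincidence(obj1_neuron1, obj1_neuron2))   # in synchrony: 0.5
print(coincidence(obj1_neuron1, obj2_neuron1))   # out of synchrony: 0.0
```

A downstream reader of these trains could group attributes by which neurons fire together, keeping the colour of one object from attaching to the shape of another.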

Von der Malsburg's proposal prompted us to suggest that this rhythmic and synchronized firing might be the neural correlate of awareness and that it might serve to bind together activity concerning the same object in different cortical areas. The matter is still undecided, but at present the fragmentary experimental evidence does rather little to support such an idea. Another possibility is that the 40-hertz oscillations may help distinguish figure from ground or assist the mechanism of attention.

Are there some particular types of neurons, distributed over the visual neocortex, whose firing directly symbolizes the content of visual awareness? One very simplistic hypothesis is that the activities in the upper layers of the cortex are largely unconscious ones, whereas the activities in the lower layers (layers 5 and 6) mostly correlate with consciousness. We have wondered whether the pyramidal neurons in layer 5 of the neocortex, especially the larger ones, might play this latter role.

These are the only cortical neurons that project right out of the cortical system (that is, not to the neocortex, the thalamus or the claustrum). If visual awareness represents the results of neural computations in the cortex, one might expect that what the cortex sends elsewhere would symbolize those results. Moreover, the neurons in layer 5 show a rather unusual propensity to fire in bursts. The idea that layer 5 neurons may directly symbolize visual awareness is attractive, but it still is too early to tell whether there is anything in it.

Visual awareness is clearly a difficult problem. More work is needed on the psychological and neural basis of both attention and very short-term memory. Studying the neurons when a percept changes, even though the visual input is constant, should be a powerful experimental paradigm. We need to construct neurobiological theories of visual awareness and test them using a combination of molecular, neurobiological and clinical imaging studies.

We believe that once we have mastered the secret of this simple form of awareness, we may be close to understanding a central mystery of human life: how the physical events occurring in our brains while we think and act in the world relate to our subjective sensations—that is, how the brain relates to the mind.

Postscript: There have been several relevant developments since this article was first published. It now seems likely that there are rapid "on-line" systems for stereotyped motor responses such as hand or eye movement. These systems are unconscious and lack memory. Conscious seeing, on the other hand, seems to be slower and more subject to visual illusions. The brain needs to form a conscious representation of the visual scene that it then can use for many different actions or thoughts. Exactly how all these pathways work and how they interact is far from clear.

There have been more experiments on the behaviour of neurons that respond to bistable visual percepts, such as binocular rivalry, but it is probably too early to draw firm conclusions from them about the exact neural correlates of visual consciousness. We have suggested on theoretical grounds based on the neuroanatomy of the macaque monkey that primates are not directly aware of what is happening in the primary visual cortex, even though most of the visual information flows through it. This hypothesis is supported by some experimental evidence, but it is still controversial.

No simple, agreed-upon definition of consciousness exists. Attempted definitions tend to be tautological (for example, consciousness defined as awareness) or merely descriptive (for example, consciousness described as sensations, thoughts, or feelings). Despite this problem of definition, the subject of consciousness has had a remarkable history. At one time the primary subject matter of psychology, consciousness as an area of study suffered an almost total demise, later reemerging to become a topic of current interest.

French thinker René Descartes applied rigorous scientific methods of deduction to his exploration of philosophical questions. Descartes is probably best known for his pioneering work in philosophical skepticism. Author Tom Sorell examines the concepts behind Descartes’s work Meditationes de Prima Philosophia (1641; Meditations on First Philosophy), focussing on its unconventional use of logic and the reactions it aroused.

Most of the philosophical discussions of consciousness arose from the mind ~ body issues posed by the French philosopher and mathematician René Descartes in the 17th century. Descartes asked: Is the mind, or consciousness, independent of matter? Is consciousness extended (physical) or unextended (nonphysical)? Is consciousness determinative, or is it determined? English philosophers such as John Locke equated consciousness with physical sensations and the information they provide, whereas European philosophers such as Gottfried Wilhelm Leibniz and Immanuel Kant gave a more central and active role to consciousness.

The philosopher who most directly influenced subsequent exploration of the subject of consciousness was the 19th-century German educator Johann Friedrich Herbart, who wrote that ideas had quality and intensity and that they may inhibit or facilitate one another. Thus, ideas may pass from “states of reality” (consciousness) to “states of tendency” (unconsciousness), with the dividing line between the two states being described as the threshold of consciousness. This formulation of Herbart clearly presages the development, by the German psychologist and physiologist Gustav Theodor Fechner, of the psychophysical measurement of sensation thresholds, and the later development by Sigmund Freud of the concept of the unconscious.

The experimental analysis of consciousness dates from 1879, when the German psychologist Wilhelm Max Wundt started his research laboratory. For Wundt, the task of psychology was the study of the structure of consciousness, which extended well beyond sensations and included feelings, images, memory, attention, duration, and movement. Because early interest focussed on the content and dynamics of consciousness, it is not surprising that the central methodology of such studies was introspection; that is, subjects reported on the mental contents of their own consciousness. This introspective approach was developed most fully by the American psychologist Edward Bradford Titchener at Cornell University. Setting his task as that of describing the structure of the mind, Titchener attempted to detail, from introspective self-reports, the dimensions of the elements of consciousness. For example, taste was “dimensionalized” into four basic categories: sweet, sour, salt, and bitter. This approach was known as structuralism.

By the 1920s, however, a remarkable revolution had occurred in psychology that was to essentially remove considerations of consciousness from psychological research for some 50 years: Behaviourism captured the field of psychology. The main initiator of this movement was the American psychologist John Broadus Watson. In a 1913 article, Watson stated, “I believe that we can write a psychology and never use the terms consciousness, mental states, mind . . . imagery and the like.” Psychologists then turned almost exclusively to behaviour, as described in terms of stimulus and response, and consciousness was totally bypassed as a subject. A survey of eight leading introductory psychology texts published between 1930 and the 1950s found no mention of the topic of consciousness in five texts, and in two it was treated as a historical curiosity.

Beginning in the late 1950s, however, interest in the subject of consciousness returned, specifically in those subjects and techniques relating to altered states of consciousness: sleep and dreams, meditation, biofeedback, hypnosis, and drug-induced states. Much of the surge in sleep and dream research was directly fuelled by a discovery relevant to the nature of consciousness. A physiological indicator of the dream state was found: At roughly 90-minute intervals, the eyes of sleepers were observed to move rapidly, and at the same time the sleepers' brain waves would show a pattern resembling the waking state. When people were awakened during these periods of rapid eye movement, they almost always reported dreams, whereas if awakened at other times they did not. This and other research clearly indicated that sleep, once considered a passive state, was instead an active state of consciousness (see Dreaming; Sleep).

During the 1960s, an increased search for “higher levels” of consciousness through meditation resulted in a growing interest in the practices of Zen Buddhism and Yoga from Eastern cultures. A full flowering of this movement in the United States was seen in the development of training programs, such as Transcendental Meditation, that were self-directed procedures of physical relaxation and focussed attention. Biofeedback techniques also were developed to bring body systems involving factors such as blood pressure or temperature under voluntary control by providing feedback from the body, so that subjects could learn to control their responses. For example, researchers found that persons could control their brain-wave patterns to some extent, particularly the so-called alpha rhythms generally associated with a relaxed, meditative state. This finding was especially relevant to those interested in consciousness and meditation, and a number of “alpha training” programs emerged.

Another subject that led to increased interest in altered states of consciousness was hypnosis, which involves a transfer of conscious control from the subject to another person. Hypnotism has had a long and intricate history in medicine and folklore and has been intensively studied by psychologists. Much has become known about the hypnotic state, relative to individual suggestibility and personality traits; the subject has now been largely demythologized, and the limitations of the hypnotic state are fairly well known. Despite the increasing use of hypnosis, however, much remains to be learned about this unusual state of focussed attention.

Finally, many people in the 1960s experimented with the psychoactive drugs known as hallucinogens, which produce disorders of consciousness. The most prominent of these drugs are lysergic acid diethylamide, or LSD; mescaline; and psilocybin; the latter two have long been associated with religious ceremonies in various cultures. LSD, because of its radical thought-modifying properties, was initially explored for its so-called mind-expanding potential and for its psychotomimetic effects (imitating psychoses). Little positive use, however, has been found for these drugs, and their use is highly restricted.

Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In this article from a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.

As the concept of a direct, simple linkage between environment and behaviour became unsatisfactory in recent decades, the interest in altered states of consciousness may be taken as a visible sign of renewed interest in the topic of consciousness. That persons are active and intervening participants in their behaviour has become increasingly clear. Environments, rewards, and punishments are not simply defined by their physical character. Memories are organized, not simply stored (see Memory). An entirely new area called cognitive psychology has emerged that centres on these concerns. In the study of children, increased attention is being paid to how they understand, or perceive, the world at different ages. In the field of animal behaviour, researchers increasingly emphasize the inherent characteristics resulting from the way a species has been shaped to respond adaptively to the environment. Humanistic psychologists, with a concern for self-actualization and growth, have emerged after a long period of silence. Throughout the development of clinical and industrial psychology, the conscious states of persons in terms of their current feelings and thoughts were of obvious importance. The role of consciousness, however, was often de-emphasised in favour of unconscious needs and motivations. Trends can be seen, however, toward a new emphasis on the nature of states of consciousness.

The first of Freud's innovations was his recognition of unconscious psychic processes that follow laws different from those that govern conscious experience. Under the influence of the unconscious, thoughts and feelings that belong together may be shifted or displaced out of context; two disparate ideas or images may be condensed into one; thoughts may be dramatized in the form of images rather than expressed as abstract concepts; and certain objects may be represented symbolically by images of other objects, although the resemblance between the symbol and the original object may be vague or farfetched. The laws of logic, indispensable for conscious thinking, do not apply to these unconscious mental productions.

Recognition of these modes of operation in unconscious mental processes made possible the understanding of such previously incomprehensible psychological phenomena as dreaming. Through analysis of unconscious processes, Freud saw dreams as serving to protect sleep against disturbing impulses arising from within and related to early life experiences. Thus, unacceptable impulses and thoughts, called the latent dream content, are transformed into a conscious, although no longer immediately comprehensible, experience called the manifest dream. Knowledge of these unconscious mechanisms permits the analyst to reverse the so-called dream work, that is, the process by which the latent dream is transformed into the manifest dream, and through dream interpretation, to recognize its underlying meaning.

A basic assumption of Freudian theory is that the unconscious conflicts involve instinctual impulses, or drives, that originate in childhood. As these unconscious conflicts are recognized by the patient through analysis, his or her adult mind can find solutions that were unattainable to the immature mind of the child. This depiction of the role of instinctual drives in human life is a unique feature of Freudian theory.

According to Freud's doctrine of infantile sexuality, adult sexuality is an end product of a complex process of development, beginning in childhood, involving a variety of body functions or areas (oral, anal, and genital zones), and corresponding to various stages in the relation of the child to adults, especially to parents. Of crucial importance is the so-called Oedipal period, occurring at about four to six years of age, because at this stage of development the child for the first time becomes capable of an emotional attachment to the parent of the opposite sex that is similar to the adult's relationship to a mate; the child simultaneously reacts as a rival to the parent of the same sex. Physical immaturity dooms the child's desires to frustration and his or her first step toward adulthood to failure. Intellectual immaturity further complicates the situation because it makes children afraid of their own fantasies. The extent to which the child overcomes these emotional upheavals and to which these attachments, fears, and fantasies continue to live on in the unconscious greatly influences later life, especially love relationships.

The conflicts occurring in the earlier developmental stages are no less significant as a formative influence, because these problems represent the earliest prototypes of such basic human situations as dependency on others and relationship to authority. Also basic in moulding the personality of the individual is the behaviour of the parents toward the child during these stages of development. The fact that the child reacts, not only to objective reality, but also to fantasy distortions of reality, however, greatly complicates even the best ~ intentioned educational efforts.

The effort to clarify the bewildering number of interrelated observations uncovered by psychoanalytic exploration led to the development of a model of the structure of the psychic system. Three functional systems are distinguished that are conveniently designated as the id, ego, and superego.

The first system refers to the sexual and aggressive tendencies that arise from the body, as distinguished from the mind. Freud called these tendencies Triebe, which literally means “drives,” but which is often inaccurately translated as “instincts” to indicate their innate character. These inherent drives claim immediate satisfaction, which is experienced as pleasurable; the id thus is dominated by the pleasure principle. In his later writings, Freud tended more toward psychological rather than biological conceptualization of the drives.

How the conditions for satisfaction are to be brought about is the task of the second system, the ego, which is the domain of such functions as perception, thinking, and motor control that can accurately assess environmental conditions. In order to fulfill its function of adaptation, or reality testing, the ego must be capable of enforcing the postponement of satisfaction of the instinctual impulses originating in the id. To defend itself against unacceptable impulses, the ego develops specific psychic means, known as defence mechanisms. These include repression, the exclusion of impulses from conscious awareness; projection, the process of ascribing to others one's own unacknowledged desires; and reaction formation, the establishment of a pattern of behaviour directly opposed to a strong unconscious need. Such defence mechanisms are put into operation whenever anxiety signals a danger that the original unacceptable impulses may reemerge.

An id impulse becomes unacceptable, not only as a result of a temporary need for postponing its satisfaction until suitable reality conditions can be found, but more often because of a prohibition imposed on the individual by others, originally the parents. The totality of these demands and prohibitions constitutes the major content of the third system, the superego, the function of which is to control the ego in accordance with the internalized standards of parental figures. If the demands of the superego are not fulfilled, the person may feel shame or guilt. Because the superego, in Freudian theory, originates in the struggle to overcome the Oedipal conflict, it has a power akin to an instinctual drive, is in part unconscious, and can give rise to feelings of guilt not justified by any conscious transgression. The ego, having to mediate among the demands of the id, the superego, and the outside world, may not be strong enough to reconcile these conflicting forces. The more the ego is impeded in its development because of being enmeshed in its earlier conflicts, called fixations or complexes, or the more it reverts to earlier satisfactions and archaic modes of functioning, known as regression, the greater is the likelihood of succumbing to these pressures. Unable to function normally, it can maintain its limited control and integrity only at the price of symptom formation, in which the tensions are expressed in neurotic symptoms.

A cornerstone of modern psychoanalytic theory and practice is the concept of anxiety, which institutes appropriate mechanisms of defence against certain danger situations. These danger situations, as described by Freud, are the fear of abandonment by or the loss of the loved one (the object), the risk of losing the object's love, the danger of retaliation and punishment, and, finally, the hazard of reproach by the superego. Thus, symptom formation, character and impulse disorders, and perversions, as well as sublimations, represent compromise formations—different forms of an adaptive integration that the ego tries to achieve through more or less successfully reconciling the different conflicting forces in the mind.

Various psychoanalytic schools have adopted other names for their doctrines to indicate deviations from Freudian theory.

The Swiss psychiatrist Carl Jung began his studies of human motivation in the early 1900s and created the school of psychoanalysis known as analytical psychology. A contemporary of Austrian psychoanalyst Sigmund Freud, Jung at first collaborated closely with Freud but eventually moved on to pursue his own theories, including the exploration of personality types. According to Jung, there are two basic personality types, extroverted and introverted, which alternate equally in the completely normal individual. Jung also believed that the unconscious mind is formed by the personal unconscious (the repressed feelings and thoughts developed during an individual’s life) and the collective unconscious (those feelings, thoughts, and memories shared by all humanity).

Carl Gustav Jung, one of the earliest pupils of Freud, eventually created a school that he preferred to call analytical psychology. Like Freud, Jung used the concept of the libido; however, to him it meant not only sexual drives, but a composite of all creative instincts and impulses and the entire motivating force of human conduct. According to his theories, the unconscious is composed of two parts; the personal unconscious, which contains the results of the individual's entire experience, and the collective unconscious, the reservoir of the experience of the human race. In the collective unconscious exist a number of primordial images, or archetypes, common to all individuals of a given country or historical era. Archetypes take the form of bits of intuitive knowledge or apprehension and normally exist only in the collective unconscious of the individual. When the conscious mind contains no images, however, as in sleep, or when the consciousness is caught off guard, the archetypes commence to function. Archetypes are primitive modes of thought and tend to personify natural processes in terms of such mythological concepts as good and evil spirits, fairies, and dragons. The mother and the father also serve as prominent archetypes.

An important concept in Jung's theory is the existence of two basically different types of personality, mental attitude, and function. When the libido and the individual's general interest are turned outward toward people and objects of the external world, he or she is said to be extroverted. When the reverse is true, and libido and interest are centred on the individual, he or she is said to be introverted. In a completely normal individual these two tendencies alternate, neither dominating, but usually the libido is directed mainly in one direction or the other; as a result, two personality types are recognizable.

Jung rejected Freud's distinction between the ego and superego and recognized a portion of the personality, somewhat similar to the superego, that he called the persona. The persona consists of what a person appears to be to others, in contrast to what he or she actually is. The persona is the role the individual chooses to play in life, the total impression he or she wishes to make on the outside world.

The Austrian psychologist and psychiatrist Alfred Adler studied under Sigmund Freud, the founder of psychoanalysis, before developing his own theories about human behaviour. Adler’s best-known theories stress that individuals are mainly motivated by feelings of inferiority, which he called an inferiority complex.

Alfred Adler, another of Freud's pupils, differed from both Freud and Jung in stressing that the motivating force in human life is the sense of inferiority, which begins as soon as an infant is able to comprehend the existence of other people who are better able to care for themselves and cope with their environment. From the moment the feeling of inferiority is established, the child strives to overcome it. Because inferiority is intolerable, the compensatory mechanisms set up by the mind may get out of hand, resulting in self-centred neurotic attitudes, overcompensations, and a retreat from the real world and its problems.

Adler laid particular stress on inferiority feelings arising from what he regarded as the three most important relationships: those between the individual and work, friends, and loved ones. The avoidance of inferiority feelings in these relationships leads the individual to adopt a life goal that is often not realistic and is frequently expressed as an unreasoning will to power and dominance, leading to every type of antisocial behaviour from bullying and boasting to political tyranny. Adler believed that analysis can foster a sane and rational “community feeling” that is constructive rather than destructive.

Also the Austrian psychologist and psychotherapist Otto Rank worked with Sigmund Freud, the founder of psychoanalysis, before developing his own theories about mental and emotional disorders. Rank believed that an individual’s neurotic tendencies could be linked to the traumatic experience of birth.

Another student of Freud, Otto Rank, introduced a new theory of neurosis, attributing all neurotic disturbances to the primary trauma of birth. In his later writings he described individual development as a progression from complete dependence on the mother and family, to a physical independence coupled with intellectual dependence on society, and finally to complete intellectual and psychological emancipation. Rank also laid great importance on the will, defined as “a positive guiding organization and integration of self, which utilizes creatively as well as inhibits and controls the instinctual drives.”

The American psychoanalyst and social philosopher Erich Fromm stressed the importance of social and economic factors on human behaviour. His focus was a departure from traditional psychoanalysis, which emphasized the role of the subconscious. In this 1969 essay for Collier’s Year Book, Fromm presents various explanations for human violence. He argues that violence cannot be controlled by imposing stronger legal penalties, but rather by creating a more just society in which people connect with each other as humans and are able to control their own lives.

Later noteworthy modifications of psychoanalytic theory include those of the American psychoanalysts Erich Fromm, Karen Horney, and Harry Stack Sullivan. The theories of Fromm lay particular emphasis on the concept that society and the individual are not separate and opposing forces, that the nature of society is determined by its historic background, and that the needs and desires of individuals are largely formed by their society. As a result, Fromm believed, the fundamental problem of psychoanalysis and psychology is not to resolve conflicts between fixed and unchanging instinctive drives in the individual and the fixed demands and laws of society, but to bring about harmony and an understanding of the relationship between the individual and society. Fromm also stressed the importance to the individual of developing the ability to fully use his or her mental, emotional, and sensory powers.

Horney worked primarily in the field of therapy and the nature of neuroses, which she defined as being of two types: situation neuroses and character neuroses. Situation neuroses arise from the anxiety attendant on a single conflict, such as being faced with a difficult decision. Although they may paralyse the individual temporarily, making it impossible to think or act efficiently, such neuroses are not deeply rooted. Character neuroses are characterized by a basic anxiety and a basic hostility resulting from a lack of love and affection in childhood.

Sullivan believed that all development can be described exclusively in terms of interpersonal relations. Character types as well as neurotic symptoms are explained as results of the struggle against anxiety arising from the individual's relations with others and are a security system, maintained for the purpose of allaying anxiety.

An important school of thought is based on the teachings of the British psychoanalyst Melanie Klein. Because most of Klein's followers worked with her in England, this has come to be known as the English school. Its influence, nevertheless, is very strong throughout the European continent and in South America. Its principal theories were derived from observations made in the psychoanalysis of children. Klein posited the existence of complex unconscious fantasies in children under the age of six months. The principal source of anxiety arises from the threat to existence posed by the death instinct. Depending on how concrete representations of the destructive forces are dealt with in the unconscious fantasy life of the child, two basic early mental attitudes result that Klein characterized as a “depressive position” and a “paranoid position.” In the paranoid position, the ego's defence consists of projecting the dangerous internal object onto some external representative, which is treated as a genuine threat emanating from the external world. In the depressive position, the threatening object is introjected and treated in fantasy as concretely retained within the person. Depressive and hypochondriacal symptoms result. Although considerable doubt exists that such complex unconscious fantasies operate in the minds of infants, these observations have been of the utmost importance to the psychology of unconscious fantasies, paranoid delusions, and theory concerning early object relations.

Psychotherapy is an important form of treatment for many kinds of psychological problems. Two of the most common problems for which people seek help from a therapist are depression and persistent anxiety. People with depression may have low self-esteem, a sense of hopelessness about the future, and a lack of interest in people and activities once found pleasurable. People with anxiety disorders may feel anxious all the time or suffer from phobias, a fear of specific objects or situations. Psychotherapy, by itself or in combination with drug treatment, can often help people overcome or manage these problems.

People experiencing an emotional crisis due to marital problems, family disputes, problems at work, loneliness, or troubled social relationships may benefit from psychotherapy. Other problems often treated with psychotherapy include obsessive-compulsive disorder, personality disorders, alcoholism and other forms of drug dependence, problems stemming from child abuse, and behavioural problems, such as eating disorders and juvenile delinquency.

Mental health professionals do not rely on psychotherapy to treat schizophrenia, a severe mental illness. Drugs are used to treat this disorder. However, some psychotherapeutic techniques may help people with schizophrenia learn appropriate social skills and skills for managing anxiety. Another severe mental illness, bipolar disorder (popularly called manic depression), is treated with drugs or a combination of drugs and psychotherapy.

Before 1950 psychoanalysis was virtually the only form of psychotherapy available. In traditional psychoanalysis, patients met with a therapist several times a week. Patients would lie on a couch and talk about their childhood, their dreams, or whatever came to mind. The psychoanalyst interpreted these thoughts and helped patients resolve unconscious conflicts. This type of therapy often took years and was very expensive.

Over the next several decades the field of psychotherapy and counselling expanded enormously, both in the number of approaches available and in the number of people choosing to enter the profession. Variants of psychoanalysis emerged that focussed more on the patient’s current level of functioning and required less time in therapy. In the 1950s and 1960s therapists began using behavioural and cognitive therapies that focussed less on the inner world of the client and more on the client’s problem behaviours or thoughts.

As the number of approaches to therapy grew throughout the 1960s and 1970s, the practice of psychotherapy and counselling spread from hospitals and private psychiatric offices to new settings—elementary schools, high schools, colleges, prisons, mental health clinics, military bases, businesses, and churches and synagogues. With more opportunities for individuals to receive help for their problems, and with more affordable treatments, psychotherapy has become increasingly popular. Although a reliable count of the number of people who receive psychotherapy is difficult to obtain, researchers estimate that 3.5 percent of women and 2.5 percent of men in the United States receive psychotherapy in any given year.

The increased availability and use of psychotherapy has led to more positive attitudes toward mental health care among the general public. Before the 1960s, people often viewed the need for psychotherapy as a sign of personal weakness or a sign that the person was abnormal. Those who received therapy seldom told others about their treatment. Since then the stigma attached to psychotherapy has decreased significantly. It is now common for people to consider seeing a therapist for an emotional problem, and recipients of therapy are more willing to disclose their therapy to friends. Today psychotherapy is a topic of immense public interest. In the scientific community and in the media, people assess methods of therapy and debate which approaches are best for particular problems and disorders.

One of the strongest trends in psychotherapy in recent years has been the shift toward short-term treatment, or brief therapy. Rather than spending years in therapy, clients receive treatment over the course of several weeks or months. Brief therapies usually focus on the client’s specific problems and may make use of techniques from a variety of theoretical orientations. Brief approaches to therapy evolved in part from consumer dissatisfaction with the length, scope, and cost of psychoanalysis and similar approaches. Extensive publicity about short-term therapies has led many consumers to expect faster treatment for mental health problems than in the past, which has further driven the movement toward shorter therapies. To provide mental health care at lower costs, managed-care firms, such as health maintenance organizations (HMOs), limit the number of therapy sessions that they will pay for during a year for each insured person. Typical managed-care firms allow up to 20 sessions per year, but some allow as few as 8 sessions per year. Case reviewers for the managed-care company decide how many sessions of therapy each person should receive. Usually a case reviewer will authorize only a small number of sessions at first. If the therapist and client wish to continue beyond this number, the therapist must get approval from the case reviewer for additional sessions. If the client wishes to continue after reaching the maximum, he or she must pay the full cost of therapy.
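The authorization rules just described amount to a simple decision procedure. The sketch below is purely illustrative: the 20-session cap is the typical figure mentioned above, but the initial authorization of 6 sessions and the function name are hypothetical, not any real plan's terms.

```python
# Illustrative sketch of the managed-care session rules described above.
# The annual cap of 20 matches the "typical" figure in the text; the
# initial authorization of 6 sessions is an assumed example value.

def payer_for_session(session_number: int, authorized: int, annual_cap: int = 20) -> str:
    """Return who is responsible for a given therapy session."""
    if session_number > annual_cap:
        return "client"  # past the yearly maximum: client pays the full cost
    if session_number > authorized:
        return "needs reviewer approval"  # therapist must request more sessions
    return "insurer"

# A case reviewer initially authorizes only a small block of sessions.
print(payer_for_session(3, authorized=6))    # insurer
print(payer_for_session(8, authorized=6))    # needs reviewer approval
print(payer_for_session(25, authorized=20))  # client
```

The point of the sketch is the two distinct gates: the reviewer's running authorization, which the therapist can ask to extend, and the hard annual cap, beyond which the cost shifts entirely to the client.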

Other managed-care companies pay therapists a set fee to meet with a client for up to a specified maximum number of sessions depending on the nature of the problem, free of interference from case reviewers. For example, a managed-care firm may pay a therapist $200 to hold up to eight sessions with a person. If the client uses all eight sessions, the therapist normally loses money. But if treatment stops after two or three sessions, the therapist makes a profit. This relatively new system is controversial because it creates a financial incentive for the therapist to shorten the length of treatment.
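The arithmetic behind that incentive can be made explicit. In this sketch the $200 flat fee and eight-session maximum come from the example above, while the $60 per-session cost to the therapist is an assumed figure for illustration only.

```python
# Flat-fee (capitation-style) arrangement from the example above.
# FLAT_FEE and MAX_SESSIONS are from the text; COST_PER_SESSION is an
# assumed per-session cost to the therapist, chosen only to illustrate.

FLAT_FEE = 200
MAX_SESSIONS = 8
COST_PER_SESSION = 60  # hypothetical time and overhead cost per session

def therapist_margin(sessions_used: int) -> int:
    """Therapist's profit (or loss, if negative) on one client."""
    sessions_used = min(sessions_used, MAX_SESSIONS)
    return FLAT_FEE - sessions_used * COST_PER_SESSION

print(therapist_margin(8))  # -280: all eight sessions used, a loss
print(therapist_margin(3))  # 20: treatment ends early, a small profit
```

Because the fee is fixed while costs scale with sessions delivered, every additional session reduces the margin, which is exactly the controversial incentive to shorten treatment that the text describes.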

Managed care has affected the practice of psychotherapy in other important ways. Rather than selecting a therapist based on personal referrals, people enrolled in managed-care plans must select from a list of therapists provided by their managed-care organization. Clients cannot be assured of complete confidentiality because therapists must provide case reviewers with treatment plans and details of progress. Increasingly, managed-care companies are reluctant to authorize more than several sessions of psychotherapy, favouring drug treatment instead.

Critics argue that managed-care companies have embraced a “quick fix” mentality that pushes short-term therapy even when long-term therapy may be more appropriate. Others note that managed care has brought greater accountability to the profession of psychotherapy, forcing therapists to justify the effectiveness of their treatment approach. In the late 1990s most Americans with health insurance were enrolled in plans with managed mental health care.

Psychotherapists and counsellors come principally from the fields of psychiatry, psychology, social work, and psychiatric nursing. Their training is quite different, although their actual therapeutic techniques may be quite similar.

Psychiatrists are physicians who specialize in the treatment of psychological disorders. They attend medical school for four years to earn an M.D. (doctor of medicine) degree. Then they receive training in psychiatry during a residency of three or four years. They differ from other therapists in that they can prescribe medications, such as antidepressants and antianxiety drugs.

Clinical psychologists and counselling psychologists have a PhD (doctor of philosophy) or Psy.D. (doctor of psychology) degree that requires four to six years of graduate study. They work in settings such as businesses, schools, mental health centres, and hospitals. Licensing requirements vary in the United States, but most states require psychologists to have postdoctoral training.

Psychiatric social workers have a master’s degree in social work (M.S.W.), usually requiring two years of graduate study. They may work in mental health agencies or medical settings practicing individual therapy or family and marital therapy. Psychiatric social workers make up the single largest group of mental health professionals. Licensing requirements vary in the United States.

Psychiatric nurses are registered nurses who usually have a master’s degree in psychiatric nursing. They often work in a hospital setting conducting individual or group therapy with patients under the supervision of a psychiatrist.

Psychoanalysts specialize in psychoanalysis. Although anyone may use the title of psychoanalyst, those accredited by the International Psychoanalytic Association are usually psychiatrists, psychologists, or social workers who have completed six to ten years of psychoanalytic training. They are also required to undergo a personal analysis themselves.

All but a few states license professional counsellors, usually under the title of licensed professional counsellor or licensed mental health counsellor. The National Board for Certified Counsellors offers certification for counsellors who have a minimum of a master’s degree and who meet the organization’s professional standards.

Members of the clergy—priests, ministers, and rabbis—usually take courses in counselling and psychology as part of their seminary training. Some ministers specialize in pastoral counselling, working with members of a congregation who are in distress.

Any person, even one with no training, can legally use the title of therapist, psychotherapist, or other titles not covered under licensing and certification laws. Therefore, clients should ask therapists who practice under such titles about their academic and professional training.

Psychotherapy encompasses a large number of treatment methods, each developed from different theories about the causes of psychological problems and mental illnesses. There are more than 250 kinds of psychotherapy, but only a fraction of these have found mainstream acceptance. Many kinds of psychotherapy are offshoots of well-known approaches or build upon the work of earlier theorists.

In individual therapy, a patient or client meets regularly with a therapist, typically over a period of weeks or months. The methods of therapists vary depending on their theory of personality, or way of understanding another individual. Most therapies can be classified as (1) psychodynamic, (2) humanistic, (3) behavioural, (4) cognitive, or (5) eclectic. In the United States, about 40 percent of therapists consider their approach eclectic, which means they combine techniques from a number of theoretical approaches and often tailor their treatment to the particular psychological problem of a client.

Forms of therapy that treat more than one person at a time include group therapy, family therapy, and couples therapy. These therapies may use techniques from any theoretical approach. Other forms of therapy specialize in treating children or adolescents with psychological problems.

People seeking help for their problems most often select individual therapy over group therapy and other forms of therapy. People may prefer individual therapy because it allows the therapist to focus exclusively on their problems, without distractions from others. Also, individuals may desire more privacy and confidentiality than is possible in a group setting. Sometimes people combine individual therapy and group therapy.

In the late 19th century Viennese neurologist Sigmund Freud developed a theory of personality and a system of psychotherapy known as psychoanalysis. According to this theory, people are strongly influenced by unconscious forces, including innate sexual and aggressive drives. In this 1938 British Broadcasting Corporation interview, Freud recounts the early resistance to his ideas and later acceptance of his work. Freud’s speech is slurred because he was suffering from cancer of the jaw. He died the following year.

Psychodynamic therapies are those therapies in some way derived from the work of Austrian physician Sigmund Freud, the founder of psychoanalysis. In general, psychodynamic therapists emphasize the importance of discovering and resolving internal, unconscious conflicts, often through an exploration of one’s childhood and past experiences. Although psychoanalysis is the best-known form of psychodynamic therapy, theorists have developed many other psychodynamic therapies, some very different from Freud’s original techniques.

Sigmund Freud, the founder of psychoanalysis, compared the human mind with an iceberg. The tip above the water represents consciousness, and the vast region below the surface symbolizes the unconscious mind. Of Freud’s three basic personality structures—id, ego, and superego—only the id is totally unconscious.

Freud developed the theory and techniques of psychoanalysis in the 1890s. He believed that much of an individual's personality develops before the age of six. He also proposed that children pass through a series of psychosexual stages, during which they express sexual energy in different ways. For example, during the phallic stage, from about age three to age five, children focus on feelings of pleasure in their genital organs. At this time, according to Freud, boys become sexually attracted to their mothers and feel hostility and jealousy toward their fathers. Similarly, girls develop sexual feelings toward their fathers and feel rage toward their mothers. In Freud’s view, such innate sexual and aggressive drives cause feelings and thoughts that the person regards as unacceptable. In response, the individual represses these feelings, driving them into the unconscious mind. In the process, three basic personality structures are formed: the id, the ego, and the superego. The id represents unchecked, instinctual drives; the superego is the voice of social conscience; and the ego is the rational thinking that mediates between the id and superego and deals with reality. These three systems function as a whole, not separately. Id forces are unconscious and often emerge without an individual’s awareness, causing fear, anxiety, depression, or other distressing symptoms. Freud used the term neurosis to refer to such symptoms.

In psychoanalysis, Freud sought to eliminate neurotic symptoms by bringing the individual’s repressed fantasies, memories, and emotions into consciousness. He placed particular emphasis on helping patients uncover memories about early childhood trauma and conflict, which he regarded as the source of emotional problems in adults. At first, he used hypnosis as a way to gain access to a person’s unconscious. Later he developed free association, a method in which patients say whatever thoughts come to their minds about dreams, fantasies, and memories. The analyst’s interpretations of this material, Freud believed, could provide patients with insight into their unconscious, insight that would help them become less anxious, less depressed, or better in other ways.

Freud also placed great value on what could be learned from transference, the patient’s emotional response to the therapist. Freud believed that during therapy, patients transfer repressed feelings toward their family members to their relationship with the therapist. Transference exposes these repressed feelings and allows the patient to work through them. Free association and transference are still central features of Freudian psychoanalysis.

In traditional or classical psychoanalysis, the patient lies on a couch and the therapist sits out of sight of the patient. This practice is intended to minimize the presence of the therapist and allow the patient to engage in free association more easily. Classical psychoanalysis requires three to four sessions of therapy each week for several years. At a rate of $100 or more per session, three sessions per week cost more than $15,000 per year. Classical psychoanalysis is not typically covered by insurance plans with managed mental health care. Therefore, relatively few individuals choose this intensive and long-term therapy.

In contemporary forms of psychoanalysis, the duration of therapy is often shorter, between one and four years, and meetings may take place one or two times a week. Other psychoanalytically oriented therapists work in a brief format of 30 sessions or less. The patient sits on a chair across from the therapist rather than lying on a couch. Modern psychoanalysts tend to focus more on current functioning and make less use of free association techniques.

American psychoanalyst and social philosopher Erich Fromm stressed the importance of social and economic factors on human behaviour. His focus was a departure from traditional psychoanalysis, which emphasized the role of the subconscious. In this 1969 essay for Collier’s Year Book, Fromm presents various explanations for human violence. He argues that violence cannot be controlled by imposing stronger legal penalties, but rather by creating a more just society in which people connect with each other as humans and are able to control their own lives.

Several of Freud's followers developed new theories about the causes of psychological disorders. Three important neo-Freudians were Erich Fromm, Karen Horney, and Erik Erikson, who emphasized the role of social and cultural influences in the formation of personality. All three emigrated from Germany to the United States in the 1930s. Their theories have influenced modern psychodynamic therapists.

Fromm believed that the fundamental problem people confront is a sense of isolation deriving from their own separateness. According to Fromm, the goal of therapy is to orient oneself, establish roots, and find security by uniting with other people while remaining a separate individual.

Horney departed from Freud in her belief in the importance of social forces in personality formation. She asserted that people develop anxiety and other psychological problems because of feelings of isolation during childhood and unmet needs for love and respect from their parents. The goal of therapy, in her view, is to help patients overcome anxiety-driven neurotic needs and move toward a more realistic image of themselves.

Erikson extended Freud's emphasis on childhood development to cover the entire lifespan. Referred to as an ego psychologist, he emphasized the importance of the ego in helping individuals develop healthy ways to deal with their environment. Often working with children, Erikson helped individuals develop the basic trust and confidence needed for the development of a healthy ego.

Other psychoanalytic therapists focussed on how relationships develop between the child and others, especially the mother. British pediatrician Donald Winnicott and Austrian-American pediatrician Margaret Mahler were known as object-relations analysts because of their emphasis on the child’s love object (such as the mother or father). They and other object-relations therapists, such as Austrian-born British psychoanalyst Melanie Klein, helped patients deal with problems that arose from being separated from their mothers inappropriately or at too early an age.

Swiss psychiatrist Carl Jung began his studies of human motivation in the early 1900s and created the school of psychoanalysis known as analytical psychology. A contemporary of Austrian psychoanalyst Sigmund Freud, Jung at first collaborated closely with Freud but eventually moved on to pursue his own theories, including the exploration of personality types. According to Jung, there are two basic personality types, extroverted and introverted, which alternate equally in the completely normal individual. Jung also believed that the unconscious mind is formed by the personal unconscious (the repressed feelings and thoughts developed during an individual’s life) and the collective unconscious (those inherited feelings, thoughts, and memories shared by all humanity).

Swiss psychiatrist Carl Jung developed a system of therapy very different from that of the psychoanalytic therapists. He had worked closely with Freud, but eventually broke away entirely to pursue his own work.

Jung created a school of psychology that he called analytical psychology. He felt that Freud focussed too much on sexual drives and not enough on all of the creative instincts and impulses that motivate individuals. Whereas Freud had described the personal unconscious, which reflected the sum of one person’s experience, Jung added the concept of the collective unconscious, which he defined as the reservoir of the experience of the entire human race. The collective unconscious contains images called archetypes that are common to all individuals. They are often expressed in mythological concepts such as good and evil spirits, fairies, dragons, and gods.

In general, Jungian therapists see psychological problems as arising from unconscious conflicts that create disturbances in psychic energy. They treat psychological problems by helping their patients bring material from their personal and collective unconscious into conscious awareness. The therapists do this through a knowledge of symbolism: not only symbols from mythology and folk culture, but also current cultural symbols. By interpreting dreams and other materials, Jungian therapists help their patients become more aware of unconscious processes and become stronger individuals.

Austrian psychologist and psychiatrist Alfred Adler studied under Sigmund Freud, the founder of psychoanalysis, before developing his own theories about human behaviour. Adler’s best-known theories stress that individuals are mainly motivated by feelings of inferiority, which he called an inferiority complex.

Like Jung, Austrian physician Alfred Adler believed that Freud overemphasized the importance of sexual and aggressive drives. Adler was particularly interested in sibling relationships, birth order, and relationships with parents. He would ask patients about their early memories and use this information to analyze their attitudes, beliefs, and behaviours. He helped his patients by encouraging them to meet important life goals: love, work, and friendship.

For Adler and modern therapists who draw from his work, interest in others and participation in society are important goals of therapy. Adlerian therapists see therapy in part as educational, and they use a number of innovative action techniques to help patients change mistaken beliefs and interact more fully with family members and others.

Humanistic therapies focus on the client's present rather than past experiences, and on conscious feelings rather than unconscious thoughts. Therapists try to create a caring, supportive atmosphere and to guide clients toward personal realizations and insights. Clients are encouraged to take responsibility for their lives, to accept themselves, and to recognize their own potential for growth and change.

The length of therapy depends on the severity of the problem and on a client's ability to change and try new behaviours. Because humanistic therapies emphasize the relationship between client and therapist and a gradual development of increased responsibility by the client, these therapies typically take a year or two of weekly sessions.

Three of the most influential forms of humanistic therapy are existential therapy, person-centred therapy, and Gestalt therapy. Existential therapy is based on a philosophical approach to people and their existence, and it deals with important life themes. These themes include living and dying, freedom, responsibility to self and others, finding meaning in life, and dealing with a sense of meaninglessness. More than other kinds of therapists, existential therapists examine individuals' awareness of themselves and their ability to look beyond their immediate problems and daily events to problems of human existence.

The first existential therapists were European psychiatrists trained in psychoanalysis who were dissatisfied with Freud's emphasis on biological drives and unconscious processes. Existential therapists help their clients confront and explore anxiety, loneliness, despair, fear of death, and the feeling that life is meaningless. There are few techniques specific to existential therapy. Therapists normally draw on techniques from a variety of therapies. One well ~ known existential therapy is logotherapy, developed by Austrian psychiatrist Viktor E. Frankl in the 1940s (logos is Greek for meaning).

In the 1940s and 1950s American psychologist Carl Rogers developed a form of psychotherapy known as person-centred therapy. This approach emphasizes that each person has the capacity for self-understanding and self-healing. The therapist tries to demonstrate empathy and true caring for clients, allowing them to reveal their true feelings without fear of being judged.

Person-centred therapy, originally called client-centred therapy, is perhaps the best-known form of humanistic therapy. American psychologist Carl Rogers developed this type of therapy in the 1940s and 1950s. Rogers believed that people, like other living organisms, are driven by an innate tendency to maintain and enhance themselves, which in turn moves them toward growth, maturity, and life enrichment. Within each person, Rogers believed, is the capacity for self-understanding and constructive change.

Person-centred therapy emphasizes understanding and caring rather than diagnosis, advice, and persuasion. Rogers strongly believed that the quality of the therapist-client relationship influences the success of therapy. He felt that effective therapists must be genuine, accepting, and empathic. A genuine therapist expresses true interest in the client and is open and honest. An accepting therapist cares for the client unconditionally, even if the therapist does not always agree with him or her. An empathic therapist demonstrates a deep understanding of the client's thoughts, ideas, experiences, and feelings and communicates this empathic understanding to the client. Rogers believed that when clients feel unconditional positive regard from a genuine therapist and feel empathically understood, they will be less anxious and more willing to reveal themselves and their weaknesses. By doing so, clients gain a better understanding of their own lives, move toward self-acceptance, and can make progress in resolving a wide variety of personal problems.

Person-centred therapists use an approach called active listening to demonstrate empathy: letting clients know that they are being fully listened to and understood. First, therapists must show through their body position and facial expression that they are paying attention—for example, by directly facing the client and making good eye contact. During the therapy session, the therapist tries to restate what the client has said and seeks clarification of the client’s feelings. The therapist may use such phrases as “What I hear you saying is . . . ” and “You’re feeling like . . . ” The therapist seeks mainly to reflect the client’s statements back to the client accurately, and does not try to analyze, judge, or lead the direction of discussion.

The general goal of Gestalt therapy is awareness of self, others, and the environment that brings about growth, wholeness, and integration of one’s thoughts, feelings, and actions. Gestalt therapists use a wide variety of techniques to make clients more aware of themselves, and they often invent or experiment with techniques that might help to accomplish this goal. One of the best-known Gestalt techniques is the empty-chair technique, in which an empty chair represents another person or another part of the client’s self. For example, if a client is angry with herself for not being kinder to her mother, the client may pretend her mother is sitting in an empty chair. The client may then express her feelings by speaking in the direction of the chair. Alternatively, the client might play the role of the understanding daughter while sitting in one chair and the angry daughter while sitting in another. As she talks to different parts of herself, differences may be resolved. The empty-chair technique reflects Gestalt therapy’s strong emphasis on dealing with problems in the present.

Behavioural therapies differ dramatically from psychodynamic and humanistic therapies. Behavioural therapists do not explore an individual’s thoughts, feelings, dreams, or past experiences. Rather, they focus on the behaviour that is causing distress for their clients. They believe that behaviour of all kinds, both normal and abnormal, is the product of learning. By applying the principles of learning, they help individuals replace distressing behaviours with more appropriate ones.

Typical problems treated with behavioural therapy include alcohol or drug addiction, phobias (such as a fear of heights), and anxiety. Modern behavioural therapists work with other problems, such as depression, by having clients develop specific behavioural goals—such as returning to work, talking with others, or cooking a meal. Because behavioural therapy can work through nonverbal means, it can also help people who would not respond to other forms of therapy. For example, behavioural therapists can teach social and self-care skills to children with severe learning disabilities and to individuals with schizophrenia who are out of touch with reality.

Behavioural therapists begin treatment by finding out as much as they can about the client's problem and the circumstances surrounding it. They do not infer causes or look for hidden meanings, but rather focus on observable and measurable behaviours. Therapists may use a number of specific techniques to alter behaviour. These techniques include relaxation training, systematic desensitization, exposure and response prevention, aversive conditioning, and social skills training.

Relaxation training is a method of helping people with high levels of anxiety and stress. It also serves as an important component of some other behavioural treatments.

In one type of relaxation exercise, people learn to tighten and then relax one muscle group at a time. This method, called progressive relaxation, was developed in the 1930s by American physiologist and psychologist Edmund Jacobson. At first, the therapist gives spoken instructions to the client. Later the client can practice relaxation exercises at home using a tape recording of the therapist’s voice. The following example, adapted from Jacobson’s work, illustrates a brief relaxation procedure:

Just settle back as comfortably as you can, close your eyes, and let yourself relax to the best of your ability. Now clench up both fists tighter and tighter and study the tension as you do so. Keep them clenched and feel the tension in your fists, hands, forearms . . . Now relax. Let the fingers of your hands become loose and observe the contrast in your feelings . . . Now let yourself go and try to become more relaxed all over. Take a deep breath. Just let your whole body become more and more relaxed.

Another relaxation technique is meditation. In meditation, people try to relax both the mind and the body. In many forms of meditation, people begin by sitting comfortably on a cushion or chair. Then they gradually relax their body, begin to breathe slowly, and concentrate on a sensation—such as the inhaling and exhaling of breath—or on an image or object. In Transcendental Meditation, a person does not try to concentrate on anything, but merely sits in a quiet atmosphere and repeats a mantra (a specially chosen word) to try to achieve a state of restful alertness.

Participants in a program to overcome a phobia (fear) of flying on aeroplanes get ready to “graduate” by taking a short flight. The program uses a type of behavioural therapy called systematic desensitization, which teaches people to relax in a situation that would normally produce anxiety.

Systematic desensitization, a procedure developed by South African psychiatrist Joseph Wolpe in the 1950s, gradually teaches people to be relaxed in a situation that would otherwise frighten them. It is often used to treat phobias and other anxiety disorders. The word desensitization refers to making people less sensitive to or frightened of certain situations.

In the first step of desensitization, the therapist and client establish an anxiety hierarchy, a list of fear-provoking situations arranged in order of how much fear they provoke in the client. For a man afraid of spiders, for example, holding a spider may rank at the top of his anxiety hierarchy, whereas seeing a small picture of a spider may rank at the bottom. In the second step, the therapist has the client relax using one of the relaxation techniques described above. Then the therapist asks the client to imagine each situation on the anxiety hierarchy, beginning with the least-feared situation and moving upward. For example, the man may first imagine seeing a picture of a spider, then imagine seeing a real spider from far away, then from a short distance, and so forth. If the client feels anxiety at any stage, he or she is instructed to stop thinking about the situation and to return to a state of deep relaxation. The relaxation and the imagined scene are paired until the client feels no further anxiety. Eventually the client can remain free of anxiety while imagining the most-feared situation.

Asking a client to encounter the feared situation is a technique called in vivo exposure. For the man who is afraid of spiders, a therapist might arrange to go to a park or zoo where visitors can touch large spiders. The therapist would model for the client how to approach a spider and how to handle it. The therapist may also encourage the man to walk gradually closer to the spider, reinforcing his progress with praise and reassurance as he does so. The goal for the therapist and patient would be for the man to pick up the spider.

Problems are rarely as clear and simple as fear of spiders. Therapists may spend considerable time deciding on appropriate goals, which ones to pursue first, and then reevaluating or changing goals as therapy progresses. Systematic desensitization typically takes from 10 to 30 sessions, depending on the severity of the problem. In vivo therapies are more direct and may take less time.

Exposure and response prevention is a behavioural technique often used to treat people with obsessive-compulsive disorder. In this technique, the therapist exposes the client to the situation that causes obsessive thoughts, but then prevents the client from acting on these thoughts. For example, to treat people who compulsively wash their hands because they fear contamination from germs, a therapist might have them handle something dirty and then prevent them from washing their hands. Therapists have also experimented with exposure and response prevention to treat people with bulimia nervosa, an eating disorder in which people engage in binge eating and afterward force themselves to vomit or, less often, take laxatives. The therapist feeds the bulimic patients small amounts of food but prevents them from binging, taking laxatives, or vomiting.

Behavioural therapists occasionally use a technique called aversive conditioning or aversion therapy. In this method, clients receive an unpleasant stimulus, such as an electric shock, whenever they perform an undesirable behaviour. For example, therapists treating patients with alcoholism may have them ingest the drug disulfiram (Antabuse). The drug makes the patients violently sick if they drink alcohol. Many therapists have found that aversive conditioning is not as effective as other behavioural techniques, and as a result, they use this technique very infrequently. For some problems, however, aversive conditioning can work when all other techniques have failed. For example, therapists have found that immediate application of an unpleasant stimulus can eliminate self-mutilation and other self-destructive behaviours in children with autism.

Social skills training is a method of helping people who have problems interacting with others. Clients learn basic social skills such as initiating conversations, making eye contact, standing at the appropriate distance, controlling voice volume and pitch, and responding to questions. The therapist first describes and models the behaviour. Then the patient or client practices the behaviour in skits or role-playing exercises. The therapist watches the exercises and provides constructive criticism and further modelling. Therapists often conduct this kind of training with groups of people with similar problems. Social skills training can often help people with schizophrenia function more easily in public situations and reduce their risk of relapse or re-hospitalization.

One popular form of social skills training is assertiveness training, another technique pioneered by Joseph Wolpe. This technique teaches people, often those who are shy, to make appropriate responses when someone does something to them that seems inappropriate or offensive or violates their rights. For example, if a woman has trouble saying no to a coworker who inappropriately asks her to handle some of his job responsibilities, she may benefit from learning how to become more assertive. In this example, the therapist would model assertive behaviour for the client, who would then role-play and rehearse appropriate responses to her coworker.

Cognitive therapies are similar to behavioural therapies in that they focus on specific problems. However, they emphasize changing beliefs and thoughts, rather than observable behaviours. Cognitive therapists believe that irrational beliefs or distorted thinking patterns can cause a variety of serious problems, including depression and chronic anxiety. They try to teach people to think in more rational, constructive ways.

In the mid-1950s American psychologist Albert Ellis developed one of the first cognitive approaches to therapy, rational-emotive therapy, now commonly called rational-emotive behaviour therapy. Trained in psychoanalysis in the 1940s, Ellis quickly became disillusioned with psychoanalytic methods, viewing them as slow and inefficient. Influenced by Alfred Adler’s work, Ellis came to regard irrational beliefs and illogical thinking as the major cause of most emotional disturbances. In his view, negative events such as losing a job or breaking up with a lover do not by themselves cause depression or anxiety. Rather, emotional disorders result when a person perceives the events in an irrational way, such as by thinking, “I’m a worthless human being.”

Although rational-emotive behaviour therapists use many techniques, the most common technique is that of disputing irrational thoughts. First the therapist identifies irrational beliefs by talking with the client about his or her problems. Examples of irrational beliefs, according to Ellis, include the idea that unhappiness is caused by external events, the idea that one must be accepted and loved by everyone, and the idea that one must always be competent and successful to be a worthwhile person.

To dispute the client’s irrational beliefs and longstanding assumptions, rational-emotive behaviour therapists often use confrontational techniques. For example, if a student tells the therapist, “I must get an A on this test or I will be a failure in life,” the therapist might say, “Why must you? Do you think your entire career as a student will be through if you get a B?” The therapist helps the client replace irrational thoughts with more reasonable ones, such as “I would like to get an A on the test, but if I don't, I have strategies I can use to do better next time.”

Like Ellis before him, American psychiatrist Aaron T. Beck became disenchanted with psychoanalysis, finding that it often did not help relieve depression for his patients. In the 1960s Beck developed his own form of cognitive therapy for treating depression, and later applied it to other disorders. In Beck’s view, depressed people tend to have negative views of themselves, interpret their experiences negatively, and feel hopeless about their future. He sees these tendencies as a problem of faulty thinking. Like rational-emotive behaviour therapists, practitioners of Beck’s technique challenge the client’s absolute, extreme statements. They try to help the client identify distorted thinking, such as thinking about negative events in catastrophic terms, and then suggest ways to change this thinking.

Helping individuals change problematic behaviours, thoughts, or feelings is not an easy task. Therapists have tried many creative approaches to help patients, some of which do not fall neatly into the major categories of psychodynamic, humanistic, behavioural, or cognitive. Two such therapies still in use today are transactional analysis and reality therapy.

In the 1950s and 1960s Canadian-American psychiatrist Eric Berne developed a form of therapy he called transactional analysis. Although trained in psychoanalysis, Berne felt that the complexity of psychoanalytic terminology excluded patients from full participation in their own treatment. He developed a theory of personality based on the view that when people interact with each other, they function as either a parent, adult, or child. For example, he would characterize social interactions between two people as parent-adult, parent-child, adult-child, adult-adult, and so forth depending on the situation. He referred to social interactions as transactions and to analysis of these interactions as transactional analysis.

In therapy, which is often conducted in groups, patients learn to recognize when they are assuming one of these roles and to understand when being an authoritarian parent or an impulsive child is appropriate or inappropriate. In addition to identifying these roles, clients learn how to change roles in order to behave in more desirable ways.

American psychiatrist William Glasser developed reality therapy in the 1960s, after working with teenage girls in a correctional institution and observing work with severely disturbed schizophrenic patients in a mental hospital. He observed that psychoanalysis did not help many of his patients change their behaviour, even when they understood the sources of it. Glasser felt it was important to help individuals take responsibility for their own lives and to blame others less. Largely because of this emphasis on personal responsibility, his approach has found widespread acceptance among drug- and alcohol-abuse counsellors, corrections workers, school counsellors, and those working with clients who may be disruptive to others.

Reality therapy is based on the premise that all human behaviour is motivated by fundamental needs and specific wants. The reality therapist first seeks to establish a friendly, trusting relationship with clients in which they can express their needs and wants. Then the therapist helps clients explore the behaviours that created problems for them. Clients are encouraged to examine the consequences of their behaviour and to evaluate how well their behaviour helped them fulfill their wants. The therapist does not accept excuses from clients. Finally, the therapist helps the client formulate a concrete plan of action to change certain behaviours, based on the client’s own goals and ability to make choices.

Currently, many therapists describe their approach as eclectic or integrative, meaning that they use ideas and techniques from a variety of therapies. Many therapists like the opportunity to draw from many theories and not limit themselves to one or two. Most therapists who adopt an eclectic approach have a rationale for which techniques they use with specific clients, rather than just choosing an approach randomly or because it suits them at the time.

One of the most influential eclectic approaches is cognitive-behavioural therapy. Other eclectic approaches use other combinations of therapies.

There are almost no pure cognitive or behavioural therapists. Usually therapists combine cognitive and behavioural techniques in an approach known as cognitive-behavioural therapy. For example, to treat a woman with depression, a therapist may help her identify irrational thinking patterns that cause the distressing feelings and to replace these irrational thoughts with new ways of thinking. The therapist may also train her in relaxation techniques and have her try new behaviours that help her become more active and less depressed. The client then reports the results back to the therapist.

Cognitive-behavioural therapy has rapidly become one of the most popular and influential forms of psychotherapy, in part because it takes a relatively short period of time compared with humanistic and psychoanalytic therapies, and also because of its ability to treat a wide range of problems. Sometimes cognitive-behavioural therapy takes only a few sessions, but more often it extends for 20 or 30 sessions over four to six months. The length of therapy usually depends on the severity and number of the client’s problems.

Some therapists have one particular way of understanding clients—that is, they adhere to one theory of personality, but use many techniques from a variety of theories. Other therapists may understand clients using two or three theories of personality and only use techniques to bring about change that are consistent with those theories. Some therapists have combined psychodynamic and behavioural therapies in ways to help their clients deal with fears and anxieties but also understand their causes.

Therapists may use different approaches to treat different problems. For example, a therapist might find that clients who are grieving over the loss of a spouse may respond best to a humanistic approach, in which they can share their grieving and their hurts with the therapist. However, the same therapist may use a cognitive-behavioural approach with a person who reports being anxious most of the time.

Teenage girls talk with a therapist, top right, during a group therapy session. Group therapy allows people to see how others deal with problems and to receive support and encouragement from group members.

All of the individual therapies can also be used with groups. People may choose group therapy for several reasons. First, group therapy is usually less expensive than individual therapy, because group members share the cost. Group therapy also allows a therapist to provide treatment to more people than would be possible otherwise. Aside from cost and efficiency advantages, group therapy allows people to hear and see how others deal with their problems. In addition, group members receive vital support and encouragement from others in the group. They can try out new ways of behaving in a safe, supportive environment and learn how others perceive them.

Groups also have disadvantages. Individuals spend less time talking about their own problems than they would in one-on-one therapy. Also, certain group members may interact with other group members in hurtful ways, such as by yelling at them or criticizing them harshly. Generally, therapists try to intercede when group members act in destructive ways. Another disadvantage of group therapy involves confidentiality. Although group members usually promise to treat all therapy discussions as confidential, some group members may worry that other members will share their secrets outside of the group. Group members who believe this may be less willing to disclose all of their problems, lessening the effectiveness of therapy for them.

Groups vary widely in how they work. The typical group size is from six to ten people with one or two therapists. Often two therapists prefer to work together in a group so that they can respond not only to one person’s issues, but also to discussions between group members that may be occurring quickly. Some groups are open, or drop-in, groups: new clients may join at any time and members may attend or skip whatever sessions they desire. Other groups are closed and admit new members only when all members agree. Regular attendance is usually required in these groups. In closed groups, both the therapist and group members will ask a member to provide an explanation for missing a meeting.

When forming a group, therapists try to make clear to potential participants the goals of the group and for whom it is appropriate. Therapists will often screen potential participants to learn about their problems and decide whether the group is right for them. Sometimes therapists prefer diversity among group members in terms of age, gender, and problem. In other cases, therapists may limit membership in a group to individuals with similar problems and backgrounds. For example, some groups may form specifically for individuals who are grieving the loss of a loved one, individuals who abuse drugs or alcohol, people with eating disorders, people suffering from depression, or troubled elderly individuals.

The techniques used in group therapy depend largely on the theoretical orientation of the therapist. Humanistic therapists tend to respond to the feelings and experiences of other members. They may also interpret or comment on social interactions between group members. In cognitive-behavioural groups, group members try to change their own thoughts and behaviours and support and encourage other members to do the same. Psychoanalytic groups focus on childhood experiences and their impact on participants’ current behaviours, thoughts, and feelings.

Psychodrama, the first form of group therapy, was developed in the 1920s by Jacob L. Moreno, an Austrian psychiatrist. Moreno brought his method to the United States in 1925, and its use spread to other parts of the world. Participants in psychodrama act out their problems—often on a real stage and with props—as a means of heightening their awareness of them. The therapist serves as the director, suggesting how participants might act out problems and assigning roles to other group members. For example, a woman might reenact a scene from her childhood with other group members playing her father, mother, brother, or sister. Groups who use psychodrama may do so weekly or simply as a one-time demonstration.

A self-help group or support group involves people with a common problem who meet regularly to share their experiences, support each other emotionally, and encourage change or recovery. They are usually free of charge to interested participants. Self-help groups are not strictly considered psychotherapy because they are not led by a licensed mental health professional. However, they can serve as an important source of help for people in emotional distress.

There are thousands of self-help and support groups in the United States and Canada. The oldest and best known is Alcoholics Anonymous, which uses a 12-step program to treat alcoholism. Other groups have formed for cancer patients, parents whose children have been murdered, compulsive gamblers, battered women, obese people, and many other types of people.

Family therapy involves the participation of one or more members of the same family who seek help for troubled family relationships or the problems of individual family members. Typical problems that bring families into family therapy are delinquent behaviour by a child or adolescent, a child’s poor performance in school, hostilities between a parent and child or between siblings, and severe psychological disturbance or mental illness in a parent or child.

One of the most influential forms of family therapy, family systems therapy, views the family as a single, complex system or unit. Individual members are interdependent parts of the system. Rather than treating one person’s symptoms in isolation, therapists try to understand the symptoms in the larger context of the family. For example, a boy who begins picking fights with classmates might do so to get more attention from his busy parents. Therapists work from the rationale that current family relationships profoundly affect, and are affected by, an individual family member’s psychological problems. For this reason, most family therapists prefer to work with the entire family during a session, rather than meeting with family members individually.

In most family therapy sessions, the therapist encourages family members to air their feelings, frustrations, and hostilities. By observing how they interact, the therapist can help them recognize their roles and relationships with each other. The therapist tries to avoid assigning blame to any particular family member. Instead, the therapist makes suggestions about how family members might adjust their roles and prevent future conflict.

Couples therapy, also called marital therapy or marriage counselling, is designed to help intimate partners improve their relationship. Therapists treat married couples as well as unmarried couples of the opposite or same sex. Therapists normally hold sessions with both partners present. At certain times during therapy, however, the therapist may choose to see the partners individually.

Couples may seek therapy for a variety of problems, many of which concern a breakdown of communication or trust between the partners. For example, an extramarital affair by one partner may cause the other partner to feel emotional pain, anger, and distrust. Some partners may feel distant from one another or experience sexual problems. In other cases, one or both partners may have psychological problems or alcohol or drug problems that negatively affect their relationship.

The techniques used in therapy vary depending on the theoretical orientation of the therapist and the nature of the couple’s problem. Most often, therapists focus on improving communication between partners and on helping them learn to manage conflict. By observing the partners as they talk to each other, the therapist can learn about their communication patterns and the roles they assume in their relationship. The therapist may then teach the partners new ways of expressing their feelings verbally, how to listen to each other, and how to work together to solve problems. The therapist may also suggest that they try out new roles. For example, if one partner makes all of the decisions in the relationship, the therapist may encourage the couple to try sharing decision-making power.

Because most couples therapists also have training in family therapy, they often examine the influence of the couple’s relationships with parents, children, and siblings. Psychoanalytically oriented therapists may focus on how the partners’ childhood experiences affect their current relationship with each other. For couples who cannot work through their differences or reestablish trust and intimacy, separation or divorce may be the best choice. Therapists can help such partners separate in constructive ways.

Some psychotherapists specialize in working with children. Therapists deal with children who are anxious, depressed, or have difficulty getting along with others at home or school. Some children have psychological problems resulting from family issues such as divorce, new stepparents, single-parent homes, death of a parent or sibling, being homeless, or being raised in an alcoholic family. Other children have emotional problems related to physical disabilities, learning disabilities, or attention-deficit hyperactivity disorder.

Play therapy is a special technique that therapists often use with children aged 2 to 12. For children, play is a natural way of learning and relating to others. Play therapy can help therapists both to understand children's problems and to help children deal with their feelings, behaviours, and thoughts. Therapists may use playhouses, puppets, a toy telephone, dolls, sandboxes, food, finger paints, and other toys or objects to help children express their thoughts and feelings. In addition to projecting a caring and gentle manner, therapists who work with children are trained to understand and interpret children’s nonverbal and verbal expressions.

For most people, psychotherapy involves a common sequence of events: finding a therapist, assessing the problem, exploring the problem, resolving the problem, and terminating therapy. Sometimes therapy will end prematurely, before the problem is resolved. For example, the therapist or client may move to a new city.

When someone has a personal problem and seeks help from a therapist, the individual may turn to a variety of people to get a referral—a friend, a pastor or rabbi, or a family physician. Phone books list associations of psychologists, psychiatrists, and social workers that can also provide referrals to therapists. As noted earlier, however, some health insurance plans may restrict a person’s choice of therapist.

When prospective clients call a therapist for an appointment, they may discuss several aspects of therapy. One concern is availability—is the therapist taking on new patients? Are there hours when both patient and therapist can meet? Another issue is fees. Both therapists in private practice and those in community mental health agencies have to negotiate fees depending in part on the client’s health insurance plan. Some agencies do not require health insurance and have very low fees or a sliding scale that sets fees depending on the ability of the client to pay.

During the first meeting, clients try to explain their problems to the therapist. The therapist usually asks about the nature of the problems, what may make the problems better or worse, and how long the problems have existed. For many therapists, hearing details, even small ones, helps them to assess the problems and to decide the best form of treatment. Some therapists collaborate with clients in deciding the goals of therapy and what treatment methods will be used. Assessment does not stop with the first session, but continues through therapy. Occasionally, goals of therapy change upon assessment of new issues or problems.

During therapy, the client sits across from the therapist—except in classical psychoanalysis, in which the client lies on a couch. The specific nature of the discussions between therapist and client differs greatly depending on the therapist’s theoretical orientation. Some therapists are interested in unconscious forces and the early childhood years of the client (psychodynamic therapy), others in actions of the client (behavioural therapy), others in the client’s thinking patterns (cognitive therapy), and yet others in all or some of these aspects. Therapists often take notes during a session or make notes after the session has ended. Sessions typically last from 45 to 50 minutes, although therapists may hold longer sessions during the initial stages of treatment. Clients typically meet weekly with the therapist, although some may meet twice a week or more.

When does therapy end? Clients and therapists discuss this issue together and determine when it is best to stop. Ideally their decision depends on their judgments about the client’s degree of progress and improvement. Some clients may find that therapy does not seem to be making progress, and may decide to change therapists. However, the cost of therapy may also factor in the decision to end therapy. Managed-care companies generally limit the number of sessions they will subsidize to between 15 and 20. Some therapists, especially those in private practice, may arrange to go beyond these limits by negotiating a fee that the client will pay for services. In other cases, the therapist may refer the client to other mental health agencies that have lower fees and do not require insurance. At the end of therapy, the therapist may schedule a follow-up session several months later to check the client’s progress. Also, the therapist and client agree on what to do if the client’s problems recur.

Almost since the inception of psychotherapy, therapists and their clients have asked, “Does it work? Does psychotherapy help people resolve their problems, feel better, and change the way they deal with other people?” Therapists and clients are not the only ones asking these questions. In recent years, the agencies that fund mental health services—health insurance companies, health maintenance organizations, and government organizations—have increased their scrutiny of the effectiveness of various psychotherapies in an effort to contain costs.

Measuring the effectiveness of psychotherapy is an extremely complex task. Asking psychotherapists or their clients, “How helpful has therapy been?” is only a start. The answer does provide some information about how therapists and their clients perceive therapy. However, it does not answer the question of whether psychotherapy is effective, because both therapists and clients have vested interests in believing that therapy succeeded. Therapists want to uphold their professional reputation and sense of competence, and clients want to feel that their investment of time and money has been worthwhile. Because of these biases, most studies of effectiveness rely on other evaluations of a client’s improvement: psychological tests given before and after treatment, reports from the client’s friends and family, and reports from impartial interviewers who do not know the client or whether the client received any therapy.

In 1952 British psychologist Hans Eysenck reviewed the results of 24 studies of psychotherapy and came to a controversial conclusion: Although two-thirds of patients who received psychotherapy showed improvement, a roughly equal proportion of patients who had been on a waiting list for therapy improved with no treatment. According to Eysenck, the patients on the waiting list showed spontaneous remission—recovery without treatment. Although researchers soon exposed flaws in his analysis and problems with the original studies, Eysenck’s findings touched off hundreds of new studies on the effectiveness of psychotherapy.

In 1980 American researchers statistically combined the results of 475 studies on psychotherapy outcomes using a technique known as meta-analysis. Their study found that the average psychotherapy recipient showed more improvement than 80 percent of untreated individuals. Later studies have confirmed that overall, psychotherapy is better than no therapy at all. Furthermore, it appears at least as effective as drug treatment for most psychological problems. However, psychotherapy is not effective for everyone. About 10 percent of people who receive psychotherapy show no improvement or actually get worse.

Researchers have also studied how quickly people improve with psychotherapy. One analysis, which reviewed data from more than 2,400 psychotherapy patients, found that 50 percent of people receiving once-a-week psychotherapy showed significant improvement after eight sessions, or two months. After six months, or 26 sessions, about 75 percent of people showed improvement. However, most people required about a year of psychotherapy for relief from severe symptoms, such as feelings of worthlessness.

Are some types of psychotherapy more effective than others? This question has been hotly debated for decades, and research on this issue presents many difficulties. In conducting studies that compare different therapies, researchers seek to make sure that each treatment group is as similar as possible. For example, researchers may limit the groups to people with the same severity of depression. In addition, within each treatment group, researchers try to make sure that therapists are using the same techniques and are trained similarly. However, patients do not come to therapy with simple problems that fit easily into studies. Furthermore, therapists of the same theoretical orientation may vary in their techniques and in the skillfulness with which they apply them.

Because of these problems, there is no conclusive answer about which type of therapy is best. Most studies have failed to demonstrate that any one approach is superior to another. The meta-analysis of 475 studies mentioned earlier, for example, found that psychodynamic, humanistic, behavioural, and cognitive approaches were all about equally effective. In the 1990s a major study by the National Institute of Mental Health compared the effectiveness of cognitive-behavioural therapy, interpersonal psychotherapy (a form of short-term psychodynamic therapy that focuses on social relations), and drug therapy for people with depression. The study found that all three types of treatment helped individuals become less depressed. Furthermore, no one method was significantly more effective than the others.

Some researchers suggest that all therapies share certain qualities, and that these qualities account for the similar effectiveness of therapies despite quite different techniques. For instance, all therapies offer people hope for recovery. People who begin therapy often expect that therapy will help them, and this expectation alone may lead to some improvement (a phenomenon known as the placebo effect). Also, people in psychotherapy may find that simply being able to talk freely and openly about their problems helps them to feel better. Finally, the support, encouragement, and warmth that clients feel from their therapist lets them know they are cared about and respected, which may positively affect their mental health.

Although different therapeutic approaches may be equally effective on average, mental health researchers agree that some types of therapy are best for particular problems. For panic disorder and phobias, behavioural and cognitive-behavioural therapies seem most effective. Behavioural techniques, often in combination with medication, are also an effective treatment for obsessive-compulsive disorder, post-traumatic stress disorder, generalized anxiety disorder, and sexual dysfunction. Cognitive-behavioural, psychodynamic, and humanistic approaches all provide moderate relief from depression.

Mental health professionals agree that the effectiveness of therapy depends to a large extent on the quality of the relationship between the client and therapist. In general, the better the rapport is between therapist and client, the better the outcome of therapy. If a person does not trust a therapist enough to describe deeply personal problems, the therapist will have trouble helping the person change and improve. For clients, trusting that the therapist can provide help for their problems is essential for making progress.

The founder of person-centred therapy, Carl Rogers, believed that the most important qualities in a therapist are being genuine, accepting, and empathic. Almost all therapists today would agree that these qualities are important. Being genuine means that therapists care for the client and behave toward the client as they really feel. Being accepting means that therapists should appreciate clients for who they are, despite the things that they may have done. Therapists do not have to agree with clients, but they must accept them. Being empathic means that therapists understand the client’s feelings and experiences and convey this understanding back to the client.

In helping their clients, all therapists follow a code of ethics. First, all therapy is confidential. Therapists notify others of a client’s disclosures only in exceptional cases, such as when children disclose abuse by parents, parents disclose abuse of children, or clients disclose an intention to harm themselves or others. Also, therapists avoid dual relationships with clients - that is, being friends outside of therapy or maintaining a business relationship. Such relationships may reduce the therapist’s objectivity and ability to work with the client. Ethical therapists also do not engage in sexual relationships with clients, and do not accept as clients people with whom they have been sexually intimate.

As more immigrants to the United States and Canada have entered therapy, psychotherapists and counsellors have learned the importance of taking a client’s cultural background into account when assessing the problem and determining treatment. Scholars recognize that most psychotherapies are based on Western systems of psychology, which stress the desirability of individualism and independence. However, cultures of Asia and other regions commonly emphasize different values, such as conformity, dependency on others, and obeying one’s parents. Thus, techniques that might be effective for someone from North America, Europe, or Australia might be inappropriate for a recent immigrant from Vietnam, Japan, or India. In order to provide effective treatment, therapists must be aware of their own cultural biases and become familiar with their client’s ethnic and cultural background.

Viktor Frankl (1905-1997) was an Austrian psychiatrist who developed a form of existential psychotherapy known as logotherapy. Logotherapy is based on Frankl’s theory that the underlying need of human existence is to find meaning in life (logos is a Greek word for “meaning”).

Born in Vienna, Austria, Frankl was educated at the University of Vienna, where he earned a medical degree in 1930. In 1942 Frankl and his family, who were Jewish, were arrested by the Nazis and imprisoned in concentration camps. Frankl’s mother, father, brother, and pregnant wife were all killed in the camps. Frankl spent the next three years at Auschwitz, Dachau, and other concentration camps. During his imprisonment, Frankl helped despairing prisoners maintain their psychological health. He also recorded, on stolen bits of paper, his theories and experiences, which he later made use of in his books. After his release, Frankl returned to Vienna and became professor of neurology and psychiatry at the University of Vienna Medical School, a position he retained for the rest of his career.

In his best-known book, Man's Search for Meaning: An Introduction to Logotherapy (1962; translated into English, 1970), Frankl described how he and other prisoners in the concentration camps found meaning in their lives and summoned the will to survive. The remainder of the book outlines the theory and practice of logotherapy. In addition to its influence on the field of psychotherapy, Man’s Search for Meaning found an enormous readership among the general public. By the time of Frankl’s death, it had sold more than 10 million copies in 24 languages. Frankl published 31 other books on his psychological theories.

Bipolar disorder is a category of mental illness in which a person’s mood alternates between extreme mania and depression. Bipolar disorder is also called manic-depressive illness. When manic, people with bipolar disorder feel intensely elated, self-important, energetic, and irritable. When depressed, they experience painful sadness, negative thinking, and indifference to things that used to bring them happiness.

American psychiatrist Kay Redfield Jamison is regarded as one of the world’s leading authorities on bipolar disorder, also known as manic-depressive illness. In her book An Unquiet Mind: A Memoir of Moods and Madness (1995), Jamison reveals her own struggle against the illness, which caused her to experience violent mood swings. In this excerpt, she describes her initial resistance to taking medication that, while necessary to prevent debilitating depression, extinguished the exhilarating highs of mania.

These positron emission tomography scans of the brain of a person with bipolar disorder show the individual shifting from depression, top row, to mania, middle row, and back to depression, bottom row, over the course of 10 days. Blue and green indicate low levels of brain activity, while red, orange, and yellow indicate high levels of brain activity.

American author Ernest Hemingway suffered from bipolar disorder (manic-depressive illness) and committed suicide at the age of 61, during a period of depression. The author’s father, brother, and a sister all committed suicide, and in 1996 Hemingway’s granddaughter, American actor and model Margaux Hemingway, also committed suicide. Scientific research on suicide suggests that genetic and biological factors play a role in suicidal behaviour.

English novelist Virginia Woolf, who suffered from bipolar disorder, recognized that her extremes of mood contributed to her creativity. She wrote of her disorder: “As an experience, madness is terrific . . . and in its lava I still find most of the things I write about.” In 1941, feeling that she could no longer fight the disease, Woolf drowned herself in the Ouse River.

Bipolar disorder is much less common than depression. In North America and Europe, about 1 percent of people experience bipolar disorder during their lives. Rates of bipolar disorder are similar throughout the world. In comparison, at least 8 percent of people experience serious depression during their lives. Bipolar disorder affects men and women about equally and is somewhat more common in higher socioeconomic classes. At least 15 percent of people with bipolar disorder commit suicide. This rate roughly equals the rate for people with major depression, the most severe form of depression.

Bipolar disorder is a mental illness that causes mood swings. In the manic phase, a person might feel ecstatic, self-important, and energetic. But when the person becomes depressed, the mood shifts to extreme sadness, negative thinking, and apathy. Some studies indicate that the disease occurs at unusually high rates in creative people, such as artists, writers, and musicians. But some researchers contend that the methodology of these studies was flawed and their results were misleading. In this October 1996 Discover magazine article, anthropologist Jo Ann C. Gutin presents the results of several studies that explore the link between creativity and mental illness.

Some research suggests that highly creative people—such as artists, composers, writers, and poets—show unusually high rates of bipolar disorder, and that periods of mania fuel their creativity. Famous artists and writers who probably suffered from bipolar disorder include poets Lord Byron and Anne Sexton, novelists Virginia Woolf and Ernest Hemingway, composers Peter Ilyich Tchaikovsky and Sergey Rachmaninoff, and painters Amedeo Modigliani and Jackson Pollock. Critics of this research note that many creative people do not suffer from bipolar disorder, and that most people with bipolar disorder are not especially creative.

Bipolar disorder usually begins in a person’s late teens or 20s. Men usually experience mania as the first mood episode, whereas women typically experience depression first. Episodes of mania and depression usually last from several weeks to several months. On average, people with untreated bipolar disorder experience four episodes of mania or depression over any ten-year period. Many people with bipolar disorder function normally between episodes. In “rapid-cycling” bipolar disorder, however, which represents 5 to 15 percent of all cases, a person experiences four or more mood episodes within a year and may have little or no normal functioning in between episodes. In rare cases, swings between mania and depression occur over a period of days.

In another type of bipolar disorder, a person experiences major depression and hypomanic episodes, or episodes of milder mania. In a related disorder called cyclothymic disorder, a person’s mood alternates between mild depression and mild mania. Some people with cyclothymic disorder later develop full-blown bipolar disorder. Bipolar disorder may also follow a seasonal pattern, with a person typically experiencing depression in the fall and winter and mania in the spring or summer. People in the depressive phase of bipolar disorder feel intensely sad or profoundly indifferent to work, activities, and people that once brought them pleasure. They think slowly, concentrate poorly, feel tired, and experience changes, usually an increase, in their appetite and sleep. They often feel a sense of worthlessness or helplessness. In addition, they may feel pessimistic or hopeless about the future and may think about or attempt suicide. In some cases of severe depression, people may experience psychotic symptoms, such as delusions (false beliefs) or hallucinations (false sensory perceptions).

In the manic phase of bipolar disorder, people feel intensely and inappropriately happy, self-important, and irritable. In this highly energized state they sleep less, have racing thoughts, and talk in rapid-fire speech that goes off in many directions. They have inflated self-esteem and confidence and may even have delusions of grandeur. Mania may make people impatient and abrasive, and when frustrated, physically abusive. They often behave in socially inappropriate ways, think irrationally, and show impaired judgment. For example, they may take aeroplane trips all over the country, make indecent sexual advances, and formulate grandiose plans involving indiscriminate investments of money. The self-destructive behaviour of mania includes excessive gambling, buying outrageously expensive gifts, abusing alcohol or other drugs, and provoking confrontations with obnoxious or combative behaviour.

Clinical depression is one of the most common forms of mental illness. Although depression can be treated with psychotherapy, many scientists believe there are biological causes for the disease. In this June 1998 Scientific American article, neurobiologist Charles B. Nemeroff discusses the connection between biochemical changes in the brain and depression.

The genes that a person inherits seem to have a strong influence on whether the person will develop bipolar disorder. Studies of twins provide evidence for this genetic influence. Among genetically identical twins where one twin has bipolar disorder, the other twin has the disorder in more than 70 percent of cases. But among pairs of fraternal twins, who have about half their genes in common, both twins have bipolar disorder in less than 15 percent of cases in which one twin has the disorder. The degree of genetic similarity seems to account for the difference between identical and fraternal twins. Further evidence for a genetic influence comes from studies of adopted children with bipolar disorder. These studies show that biological relatives of the children have a higher incidence of bipolar disorder than do people in the general population. Thus, bipolar disorder seems to run in families for genetic reasons.

Personal or work-related stress can trigger a manic episode, but this usually occurs in people with a genetic vulnerability. Other factors, such as prenatal development, childhood experiences, and social conditions, seem to have relatively little influence in causing bipolar disorder. One study examined the children of identical twins in which only one member of each pair of twins had bipolar disorder. The study found that regardless of whether the parent had bipolar disorder or not, all of the children had the same high 10-percent rate of bipolar disorder. This observation clearly suggests that risk for bipolar illness comes from genetic influence, not from exposure to a parent’s bipolar illness or from family problems caused by that illness.

Different therapies may shorten, delay, or even prevent the extreme moods caused by bipolar disorder. Lithium carbonate, a natural mineral salt, can help control both mania and depression in bipolar disorder. The drug generally takes two to three weeks to become effective. People with bipolar disorder may take lithium during periods of relatively normal mood to delay or prevent subsequent episodes of mania or depression. Common side effects of lithium include nausea, increased thirst and urination, vertigo, loss of appetite, and muscle weakness. In addition, long-term use can impair functioning of the kidneys. For this reason, doctors do not prescribe lithium to bipolar patients with kidney disease. Many people find the side effects so unpleasant that they stop taking the medication, which often results in relapse.

From 20 to 40 percent of people do not respond to lithium therapy. For these people, two anticonvulsant drugs may help dampen severe manic episodes: carbamazepine (Tegretol) and valproate (Depakene). The use of traditional antidepressants to treat bipolar disorder carries risks of triggering a manic episode or a rapid-cycling pattern.

Psychiatry is the branch of medicine specializing in mental illnesses. Psychiatrists not only diagnose and treat these disorders but also conduct research directed at understanding and preventing them.

A psychiatrist is a doctor of medicine who has had four years of postgraduate training in psychiatry. Many psychiatrists take further training in psychoanalysis, child psychiatry, or other subspecialties. Psychiatrists treat patients in private practice, in general hospitals, or in specialized facilities for the mentally ill (psychiatric hospitals, outpatient clinics, or community mental health centres). Some spend part or all of their time doing research or administering mental health programs. By contrast, psychologists, who often work closely with psychiatrists and treat many of the same kinds of patients, are not trained in medicine; consequently, they neither diagnose physical illness nor administer drugs.

The province of psychiatry is unusually broad for a medical specialty. Mental disorders may affect most aspects of a patient's life, including physical functioning, behaviour, emotions, thought, perception, interpersonal relationships, sexuality, work, and play. These disorders are caused by a poorly understood combination of biological, psychological, and social determinants. Psychiatry's task is to account for the diverse sources and manifestations of mental illness.

Physicians in the Western world began specializing in the treatment of the mentally ill in the 19th century. Known as alienists, psychiatrists of that era worked in large asylums, practicing what was then called moral treatment, a humane approach aimed at quieting mental turmoil and restoring reason. During the second half of the century, psychiatrists abandoned this mode of treatment and, with it, the tacit recognition that mental illness is caused by both psychological and social influences. For a while, their attention focussed almost exclusively on biological factors. Drugs and other forms of somatic (physical) treatment were common. The German psychiatrist Emil Kraepelin identified and classified mental disorders into a system that is the foundation for modern diagnostic practices. Another important figure was the Swiss psychiatrist Eugen Bleuler, who coined the word schizophrenia and described its characteristics.

The discovery of unconscious sources of behaviour—an insight developed in the psychoanalytic writings of Sigmund Freud in the early 20th century—enriched psychiatric thought and changed the direction of its practice. Attention shifted to processes within the individual psyche, and psychoanalysis came to be regarded as the preferred mode of treatment for most mental disorders. In the 1940s and 1950s emphasis shifted again: this time to the social and physical environment. Many psychiatrists had all but ignored biological influences, but others were studying the biological factors involved in mental illness and were using somatic forms of treatment such as electroconvulsive therapy (electric shock) and psychosurgery.

Dramatic changes in the treatment of the mentally ill in the United States began in the mid-1950s with the introduction of the first effective drugs for treating psychotic symptoms. Along with drug treatment, new, more liberal and humane policies and treatment strategies were introduced into mental hospitals. More and more patients were treated in community settings in the 1960s and 1970s. Support for mental health research led to significant new discoveries, especially in the understanding of genetic and biochemical determinants in mental illness and the functioning of the brain. Thus, by the 1980s, psychiatry had once again shifted in emphasis to the biological, to the relative neglect of psychosocial influences in mental health and illness.

Psychiatrists use a variety of methods to detect specific disorders in their patients. The most fundamental is the psychiatric interview, during which the patient's psychiatric history is taken and mental status is evaluated. The psychiatric history is a picture of the patient's personality characteristics, relationships with others, and past and present experience with psychiatric problems—all told in the patient's words (sometimes supplemented by comments from other family members). Psychiatrists use mental-status examinations much as internists use physical examinations. They elicit and classify aspects of the patient's mental functioning.

Some diagnostic methods rely on testing by other specialists. Psychologists administer intelligence and personality tests, as well as tests designed to detect damage to the brain or other parts of the central nervous system. Neurologists also test psychiatric patients for evidence of impairment of the nervous system. Other physicians sometimes examine patients who complain of physical symptoms. Psychiatric social workers explore family and community problems. The psychiatrist integrates all this information in making a diagnosis according to criteria established by the psychiatric profession.

Psychiatric treatments fall into two classes: organic and nonorganic forms. Organic treatments, such as drugs, are those that affect the body directly. Nonorganic types of treatment improve the patient's functioning by psychological means, such as psychotherapy, or by altering the social environment.

Psychotropic drugs are by far the most commonly used organic treatment. The first to be discovered were the antipsychotics, used primarily to treat schizophrenia. The phenothiazines are the most frequently prescribed class of antipsychotic drugs. Others are the thioxanthenes, butyrophenones, and indoles. All antipsychotic drugs diminish such symptoms as delusions, hallucinations, and thought disorder. Because they can reduce agitation, they are sometimes used to control manic excitement in manic-depressive patients and to calm geriatric patients. Some childhood behaviour disorders respond to these drugs.

Despite their value, the antipsychotic drugs have drawbacks. The most serious is the neurological condition tardive dyskinesia, which occurs in patients who have taken the drugs over extended periods. The condition is characterized by abnormal movements of the tongue, mouth, and body. It is especially serious because its symptoms do not always disappear when the drug is stopped, and no known treatment for it has been developed.

Most psychotropic drugs are chemically synthesized. Lithium carbonate, however, is a naturally occurring mineral salt used to prevent, or at least reduce, the severity of shifts of mood in manic-depression. It is especially effective in controlling mania. Psychiatrists must monitor lithium dosages carefully, because only a small margin exists between an effective dose and a toxic one.

Three major classes of antidepressant drugs are used. The tricyclic and tetracyclic antidepressants, the most frequently prescribed, are used for the most common form of serious depression. Monoamine oxidase (MAO) inhibitors are used for so-called atypical depressions. Serotonin-selective reuptake inhibitors (SSRIs) are effective against both typical and atypical depressions. Although all three classes are quite effective in relieving depression in correctly matched patients, they also have disadvantages. The tricyclics and tetracyclics can take two to five weeks to become effective and can cause such side effects as oversedation and cardiac problems. MAO inhibitors can cause severe hypertension in patients who ingest certain types of food (such as cheese, beer, and wine) or drugs (such as cold medicines). SSRI drugs, such as fluoxetine (Prozac), take 2 to 12 weeks to become effective and can cause headaches, nausea, insomnia, and nervousness.

Anxiety, tension, and insomnia are often treated with drugs that are commonly called minor tranquillizers. Barbiturates have been used for the longest time, but they produce more severe side effects and are more often abused than the newer classes of antianxiety drugs. Of the new drugs, the benzodiazepines are the most frequently prescribed, very often in nonpsychiatric settings.

The stimulant drugs, such as amphetamine—a drug that is often abused—have legitimate uses in psychiatry. They help to control overactivity and lack of concentration in hyperactive children (see Hyperactivity) and to stimulate the victims of narcolepsy, a disorder characterized by sudden, uncontrollable episodes of sleep.

Another organic treatment is electroconvulsive therapy, or ECT, in which seizures similar to those of epilepsy are produced by a current of electricity passed through the forehead. ECT is most commonly used to treat severe depressions that have not responded to drug treatment. It is also sometimes used to treat schizophrenia. Other forms of organic treatment are much less frequently used than drugs and ECT. They include the controversial technique of psychosurgery, in which fibres in the brain are severed; this technique is now used very rarely.

A psychologist listens to her client during a psychotherapy session. Psychotherapy can be an effective treatment for many mental disorders. Some forms of psychotherapy try to help people resolve their internal, unconscious conflicts, and other forms teach people skills to correct their abnormal behaviour.

The most common nonorganic treatment is psychotherapy. Most psychotherapies conducted by psychiatrists are psychodynamic in orientation—that is, they focus on internal psychic conflict and its resolution as a means of restoring mental health. The prototypical psychodynamic therapy is psychoanalysis, which is aimed at untangling the sources of unconscious conflict in the past and restructuring the patient's personality. Psychoanalysis is the treatment in which the patient lies on a couch, with the psychoanalyst out of sight, and says whatever comes to mind. The patient relates dreams, fantasies, and memories, along with thoughts and feelings associated with them. The analyst helps the patient interpret these associations and the meaning of the patient's relationship to the analyst. Because it is lengthy and expensive, often several years in duration, classical psychoanalysis is now infrequently used.

More common are shorter forms of psychotherapy that supplement psychoanalytic principles with other theoretical ideas and scientifically derived information. In these types of therapy, psychiatrists are more likely to give the patient advice and try to influence behaviour. Some use techniques derived from behaviour therapy, which is based on learning theory (although these methods are more commonly used by psychologists).

Besides psychotherapy, the other major form of nonorganic treatment used in psychiatry is milieu therapy. Usually carried out in psychiatric wards, milieu therapy directs social relations among patients and staff toward therapeutic ends. Ward activities, too, are planned to serve specific therapeutic goals.

In general, psychotherapy is relied on more heavily for the treatment of neuroses and other nonpsychotic conditions than it is for psychoses. In psychotic patients, who usually receive psychoactive drugs, psychotherapy is used to improve social and vocational functioning. Milieu therapy is limited to hospitalized patients. Increasingly, psychiatrists use a combination of organic and nonorganic techniques for all patients, depending on their diagnosis and response to treatment.

Depression is a mental illness in which a person experiences deep, unshakable sadness and diminished interest in nearly all activities. People also use the term depression to describe the temporary sadness, loneliness, or blues that everyone feels from time to time. In contrast to normal sadness, severe depression, also called major depression, can dramatically impair a person’s ability to function in social situations and at work. People with major depression often have feelings of despair, hopelessness, and worthlessness, as well as thoughts of committing suicide.

Depression can take several other forms. In bipolar disorder, sometimes called manic-depressive illness, a person’s mood swings back and forth between depression and mania. People with seasonal affective disorder typically suffer from depression only during autumn and winter, when there are fewer hours of daylight. In dysthymia (pronounced dis-THI-mee-uh), people feel depressed, have low self-esteem, and concentrate poorly most of the time—often for a period of years—but their symptoms are milder than in major depression. Some people with dysthymia experience occasional episodes of major depression. Mental health professionals use the term clinical depression to refer to any of the above forms of depression.

Surveys indicate that people commonly view depression as a sign of personal weakness, but psychiatrists and psychologists view it as a real illness. In the United States, the National Institute of Mental Health has estimated that depression costs society many billions of dollars each year, mostly in lost work time.

Depression is one of the most common mental illnesses. At least 8 percent of adults in the United States experience serious depression at some point during their lives, and estimates range as high as 17 percent. The illness affects all people, regardless of sex, race, ethnicity, or socioeconomic standing. However, women are two to three times more likely than men to suffer from depression. Experts disagree on the reason for this difference. Some cite differences in hormones, and others point to the stress caused by society’s expectations of women.

Studies indicate that depression is more prevalent among women than it is among men. Genetics and environment seem to be the keys to unlocking this gender-gap mystery, although the complexity of the puzzle makes progress slow. In this article for Scientific American Presents, physician Ellen Leibenluft explores the physiology of depression and explains how scientific research may make it possible to develop better treatments for both sexes.

Depression occurs in all parts of the world, although the pattern of symptoms can vary. The prevalence of depression in other countries varies widely, from 1.5 percent of people in Taiwan to 19 percent of people in Lebanon. Some researchers believe methods of gathering data on depression account for different rates.

A number of large-scale studies indicate that depression rates have increased worldwide over the past several decades. Furthermore, younger generations are experiencing depression at an earlier age than did previous generations. Social scientists have proposed many explanations, including changes in family structure, urbanization, and reduced cultural and religious influences.

Although it may appear anytime from childhood to old age, depression usually begins during a person’s 20s or 30s. The illness may come on slowly, then deepen gradually over months or years. On the other hand, it may erupt suddenly in a few weeks or days. A person who develops severe depression may appear so confused, frightened, and unbalanced that observers speak of a “nervous breakdown.” However it begins, depression causes serious changes in a person’s feelings and outlook. A person with major depression feels sad nearly every day and may cry often. People, work, and activities that used to bring them pleasure no longer do.

Symptoms of depression can vary by age. In younger children, depression may include physical complaints, such as stomachaches and headaches, as well as irritability, “moping around,” social withdrawal, and changes in eating habits. They may feel unenthusiastic about school and other activities. In adolescents, common symptoms include sad mood, sleep disturbances, and lack of energy. Elderly people with depression usually complain of physical rather than emotional problems, which sometimes leads doctors to misdiagnose the illness.

Symptoms of depression can also vary by culture. In some cultures, depressed people may not experience sadness or guilt but may complain of physical problems. In Mediterranean cultures, for example, depressed people may complain of headaches or nerves. In Asian cultures they may complain of weakness, fatigue, or imbalance.

If left untreated, an episode of major depression typically lasts eight or nine months. About 85 percent of people who experience one bout of depression will experience future episodes.

Depression usually alters a person’s appetite, sometimes increasing it, but usually reducing it. Sleep habits often change as well. People with depression may oversleep or, more commonly, sleep for fewer hours. A depressed person might go to sleep at midnight, sleep restlessly, then wake up at 5 am feeling tired and blue. For many depressed people, early morning is the saddest time of the day.

Depression also changes one’s energy level. Some depressed people may be restless and agitated, engaging in fidgety movements and pacing. Others may feel sluggish and inactive, experiencing great fatigue, lack of energy, and a feeling of being worn out or carrying a heavy burden. Depressed people may also have difficulty thinking, poor concentration, and problems with memory.

People with depression often experience feelings of worthlessness, helplessness, guilt, and self-blame. They may interpret a minor failing on their part as a sign of incompetence or interpret minor criticism as condemnation. Some depressed people complain of being spiritually or morally dead. The mirror seems to reflect someone ugly and repulsive. Even a competent and decent person may feel deficient, cruel, stupid, phony, or guilty of having deceived others. People with major depression may experience such extreme emotional pain that they consider or attempt suicide. At least 15 percent of seriously depressed people commit suicide, and many more attempt it.

In some cases, people with depression may experience psychotic symptoms, such as delusions (false beliefs) and hallucinations (false sensory perceptions). Psychotic symptoms indicate an especially severe illness. Compared with other depressed people, those with psychotic symptoms have longer hospital stays, and after leaving, they are more likely to be moody and unhappy. They are also more likely to commit suicide. Some depressions seem to come out of the blue, even when things are going well. Others seem to have an obvious cause: a marital conflict, financial difficulty, or some personal failure. Yet many people with these problems do not become deeply depressed. Most psychologists believe depression results from an interaction between stressful life events and a person’s biological and psychological vulnerabilities.

Depression runs in families. By studying twins, researchers have found evidence of a strong genetic influence in depression. Genetically identical twins raised in the same environment are three times more likely to have depression in common than fraternal twins, who have only about half of their genes in common. In addition, identical twins are five times more likely to have bipolar disorder in common. These findings suggest that vulnerability to depression and bipolar disorder can be inherited. Adoption studies have provided more evidence of a genetic role in depression. These studies show that children of depressed people are vulnerable to depression even when raised by adoptive parents.

Genes may influence depression by causing abnormal activity in the brain. Studies have shown that certain brain chemicals called neurotransmitters play an important role in regulating moods and emotions. Neurotransmitters involved in depression include norepinephrine, dopamine, and serotonin. Research in the 1960s suggested that depression results from lower than normal levels of these neurotransmitters in parts of the brain. Support for this theory came from the effects of antidepressant drugs, which work by increasing the levels of neurotransmitters involved in depression. However, later studies have discredited this simple explanation and have suggested a more complex relationship between neurotransmitter levels and depression.

An imbalance of hormones may also play a role in depression. Many depressed people have higher than normal levels of hydrocortisone (cortisol), a hormone secreted by the adrenal gland in response to stress. In addition, an underactive or overactive thyroid gland can lead to depression.

A variety of medical conditions can cause depression. These include dietary deficiencies in vitamin B6, vitamin B12, and folic acid; degenerative neurological disorders, such as Alzheimer’s disease and Huntington’s disease; strokes in the frontal part of the brain; and certain viral infections, such as hepatitis and mononucleosis. Certain medications, such as steroids, may also cause depression.

Psychological theories of depression focus on the way people think and behave. In a 1917 essay, Austrian psychoanalyst Sigmund Freud explained melancholia, or major depression, as a response to loss—either real loss, such as the death of a spouse, or symbolic loss, such as the failure to achieve an important goal. Freud believed that a person’s unconscious anger over loss weakens the ego, resulting in self-hate and self-destructive behaviour.

Cognitive theories of depression emphasize the role of irrational thought processes. American psychiatrist Aaron Beck proposed that depressed people tend to view themselves, their environment, and the future in a negative light because of errors in thinking. These errors include focussing on the negative aspects of any situation, misinterpreting facts in negative ways, and blaming themselves for any misfortune. In Beck’s view, people learn these self-defeating ways of looking at the world during early childhood. This negative thinking makes situations seem much worse than they really are and increases the risk of depression, especially in stressful situations.

In support of this cognitive view, people with “depressive” personality traits appear to be more vulnerable than others to actual depression. Examples of depressive personality traits include gloominess, pessimism, introversion, self-criticism, excessive skepticism and criticism of others, deep feelings of inadequacy, and excessive brooding and worrying. In addition, people who regularly behave in dependent, hostile, and impulsive ways appear at greater risk for depression.

American psychologist Martin Seligman proposed that depression stems from “learned helplessness,” an acquired belief that one cannot control the outcome of events. In this view, prolonged exposure to uncontrollable and inescapable events leads to apathy, pessimism, and loss of motivation. An adaptation of this theory by American psychologist Lyn Abramson and her colleagues argues that depression results not only from helplessness, but also from hopelessness. The hopelessness theory attributes depression to a pattern of negative thinking in which people blame themselves for negative life events, view the causes of those events as permanent, and overgeneralize specific weaknesses as applying to many areas of their life.

Psychologists agree that stressful experiences can trigger depression in people who are predisposed to the illness. For example, the death of a loved one may trigger depression. Psychologists usually distinguish true depression from grief, a normal process of mourning a loved one who has died. Other stressful experiences may include divorce, pregnancy, the loss of a job, and even childbirth. About 20 percent of women experience an episode of depression, known as postpartum depression, after having a baby. In addition, people with serious physical illnesses or disabilities often develop depression.

People who experience child abuse appear more vulnerable to depression than others. So, too, do people living under chronically stressful conditions, such as single mothers with many children and little or no support from friends or relatives.

Depression typically cannot be shaken or willed away. An episode must therefore run its course until it weakens either on its own or with treatment. Depression can be treated effectively with antidepressant drugs, psychotherapy, or a combination of both.

Despite the availability of effective treatment, most depressive disorders go untreated and undiagnosed. Studies indicate that general physicians fail to recognize depression in their patients at least half of the time. In addition, many doctors and patients view depression in elderly people as a normal part of aging, even though treatment for depression in older people is usually very effective.

Up to 70 percent of people with depression respond to antidepressant drugs. These medications appear to work by altering the levels of serotonin, norepinephrine, and other neurotransmitters in the brain. They generally take at least two to three weeks to become effective. Doctors cannot predict which type of antidepressant drug will work best for any particular person, so depressed people may need to try several types. Antidepressant drugs are not addictive, but they may produce unwanted side effects. To avoid relapse, people must usually continue taking the medication for several months after their symptoms improve.

Commonly used antidepressant drugs fall into three major classes: tricyclics, monoamine oxidase inhibitors (MAO inhibitors), and selective serotonin reuptake inhibitors (SSRIs). Tricyclics, named for their three-ring chemical structure, include amitriptyline (Elavil), imipramine (Tofranil), desipramine (Norpramin), doxepin (Sinequan), and nortriptyline (Pamelor). Side effects of tricyclics may include drowsiness, dizziness upon standing, blurred vision, nausea, insomnia, constipation, and dry mouth.

MAO inhibitors include isocarboxazid (Marplan), phenelzine (Nardil), and tranylcypromine (Parnate). People who take MAO inhibitors must follow a diet that excludes tyramine (a substance found in wine, beer, some cheeses, and many fermented foods) to avoid a dangerous rise in blood pressure. In addition, MAO inhibitors have many of the same side effects as tricyclics.

Selective serotonin reuptake inhibitors include fluoxetine (Prozac), sertraline (Zoloft), and paroxetine (Paxil). These drugs generally produce fewer and milder side effects than do other types of antidepressants, although SSRIs may cause anxiety, insomnia, drowsiness, headaches, and sexual dysfunction. Some patients have alleged that Prozac causes violent or suicidal behaviour in a small number of cases, but the US Food and Drug Administration has failed to substantiate this claim.

Prozac became the most widely used antidepressant in the world soon after its introduction in the late 1980s by drug manufacturer Eli Lilly and Company. Many people find Prozac extremely effective in lifting depression. In addition, some people have reported that Prozac actually transforms their personality by increasing their self-confidence, optimism, and energy level. However, mental health professionals have expressed serious ethical concerns over Prozac’s use as a “personality enhancer,” especially among people without clinical depression.

Doctors often prescribe lithium carbonate, a natural mineral salt, to treat people with bipolar disorder (see Lithium). People often take lithium during periods of relatively normal mood to delay or even prevent subsequent mood swings. Side effects of lithium include nausea, stomach upset, vertigo, and frequent urination.

Studies have shown that short-term psychotherapy can relieve mild to moderate depression as effectively as antidepressant drugs. Unlike medication, psychotherapy produces no physiological side effects. In addition, depressed people treated with psychotherapy appear less likely to experience a relapse than those treated only with antidepressant medication. However, psychotherapy usually takes longer to produce benefits.

There are many kinds of psychotherapy. Cognitive-behavioural therapy assumes that depression stems from negative, often irrational thinking about oneself and one’s future. In this type of therapy, a person learns to understand and eventually eliminate those habits of negative thinking. In interpersonal therapy, the therapist helps a person resolve problems in relationships with others that may have caused the depression. The subsequent improvement in social relationships and support helps alleviate the depression. Psychodynamic therapy views depression as the result of internal, unconscious conflicts. Psychodynamic therapists focus on a person’s past experiences and the resolution of childhood conflicts. Psychoanalysis is an example of this type of therapy. Critics of long-term psychodynamic therapy argue that its effectiveness is scientifically unproven.

A woman sits in front of a high-intensity light box as part of treatment for seasonal affective disorder. People with this disorder experience episodes of depression that usually begin during the winter months. Daily exposure to bright light helps prevent or lift depression for many people with the disorder.

Electroconvulsive therapy (ECT) can often relieve severe depression in people who fail to respond to antidepressant medication and psychotherapy. In this type of therapy, a low-voltage electric current is passed through the brain for one to two seconds to produce a controlled seizure. Patients usually receive six to ten ECT treatments over several weeks. ECT remains controversial because it can cause disorientation and memory loss. Nevertheless, research has found it highly effective in alleviating severe depression.

For milder cases of depression, regular aerobic exercise may improve mood as effectively as psychotherapy or medication. In addition, some research indicates that dietary modifications can influence one’s mood by changing the level of serotonin in the brain.

Antidepressant, medication used to treat depression, a mood disorder characterized by such symptoms as sadness, decreased appetite, difficulty sleeping, fatigue, and a lack of enjoyment of activities previously found pleasurable. While everyone experiences episodes of sadness at some point in their lives, depression is distinguished from this sadness when symptoms are present most days for a period of at least two weeks. Antidepressants are often the first choice of treatment for depression.

Although the cause of depression is unknown, researchers have found that some depressed people have altered levels of chemicals called neurotransmitters, chemicals made and released by nerve cells, or neurons. One neuron, referred to as the presynaptic neuron, releases a neurotransmitter into the synapse, or space, between the neuron and a neighbouring cell. The neurotransmitter then attaches, or binds, to a neighbouring cell (the postsynaptic cell) to trigger a specific activity. Antidepressants work by interacting with neurotransmitters at three different points: they can change the rate at which the neurotransmitters are either created or broken down by the body; they can block the process in which a spent neurotransmitter is recycled by a presynaptic neuron and used again, called reuptake; or they can interfere with the binding of a neurotransmitter to neighbouring cells.

The first antidepressants, developed in the 1950s, are the tricyclic antidepressants (TCA) and the monoamine oxidase (MAO) inhibitors. TCAs block the reuptake of neurotransmitters into the presynaptic neurons, keeping the neurotransmitter in the synapse longer, and making more of the neurotransmitter available to the postsynaptic cell. TCAs include amitriptyline, doxepin, imipramine, nortriptyline, and desipramine.

MAO inhibitors decrease the rate at which neurotransmitters are broken down by the body so they are more available to interact with neurons. MAO inhibitors currently available in the United States include phenelzine and tranylcypromine.

Another group of antidepressants, known as selective serotonin reuptake inhibitors (SSRI), became available in 1987. SSRIs block the reuptake of the neurotransmitter serotonin into presynaptic neurons, thereby prolonging its activity. There are currently four SSRIs available for use in the United States: fluoxetine, sertraline, paroxetine, and fluvoxamine. Of this group, the best known is fluoxetine, commonly known by its brand name, Prozac.

Another antidepressant is venlafaxine, which works like the TCAs but does not share their chemical structure and causes different side effects. The antidepressant nefazodone prevents serotonin from binding to neighbouring neurons at one specific binding site (serotonin can bind to neurons at many sites). It also weakly blocks the reuptake of serotonin.

All antidepressants decrease symptoms of depression in about 70 percent of depressed people who take them. Most antidepressants take about two to three weeks of treatment before beneficial effects occur. Because no antidepressant is more effective than the others, doctors determine which antidepressant to prescribe according to the type of side effects an individual can tolerate. For instance, a person who takes TCAs and MAO inhibitors may notice dizziness and fainting when standing up, mouth dryness, difficulty urinating, constipation, and drowsiness. If people who take MAO inhibitors eat certain foods, such as aged cheese or some aged meats, they can experience severe headaches and raised blood pressure. SSRIs can cause side effects such as restlessness, difficulty sleeping, and interference with sexual function.

Schizophrenia is a severe mental illness characterized by a variety of symptoms, including loss of contact with reality, bizarre behaviour, disorganized thinking and speech, decreased emotional expressiveness, and social withdrawal. Usually only some of these symptoms occur in any one person. The term schizophrenia comes from Greek words meaning “split mind.” However, contrary to common belief, schizophrenia does not refer to a person with a split personality or multiple personality. To observers, schizophrenia may seem like madness or insanity.

People with schizophrenia have disturbing, frightening thoughts and may have trouble telling the difference between real and unreal experiences. A 25-year-old woman with schizophrenia drew this picture, which was displayed as part of a 1968 exhibition of works by psychiatric patients.

Perhaps more than any other mental illness, schizophrenia has a debilitating effect on the lives of the people who suffer from it. A person with schizophrenia may have difficulty telling the difference between real and unreal experiences, logical and illogical thoughts, or appropriate and inappropriate behaviour. Schizophrenia seriously impairs a person’s ability to work, go to school, enjoy relationships with others, or take care of oneself. In addition, people with schizophrenia frequently require hospitalization because they pose a danger to themselves. About 10 percent of people with schizophrenia commit suicide, and many others attempt suicide. Once people develop schizophrenia, they usually suffer from the illness for the rest of their lives. Although there is no cure, treatment can help many people with schizophrenia lead productive lives.

Schizophrenia also carries an enormous cost to society. People with schizophrenia occupy about one-third of all beds in psychiatric hospitals in the United States. In addition, people with schizophrenia account for at least 10 percent of the homeless population in the United States. The National Institute of Mental Health has estimated that schizophrenia costs the United States tens of billions of dollars each year in direct treatment, social services, and lost productivity.

Approximately 1 percent of people develop schizophrenia at some time during their lives. Experts estimate that about 1.8 million people in the United States have schizophrenia. The prevalence of schizophrenia is the same regardless of sex, race, and culture. Although women are just as likely as men to develop schizophrenia, women tend to experience the illness less severely, with fewer hospitalizations and better social functioning in the community.

Schizophrenia usually develops in late adolescence or early adulthood, between the ages of 15 and 30. Much less commonly, schizophrenia develops later in life. The illness may begin abruptly, but it usually develops slowly over months or years. Mental health professionals diagnose schizophrenia based on an interview with the patient in which they determine whether the person has experienced specific symptoms of the illness.

Symptoms and functioning in people with schizophrenia tend to vary over time, sometimes worsening and other times improving. For many patients the symptoms gradually become less severe as they grow older. About 25 percent of people with schizophrenia become symptom-free later in their lives.

A variety of symptoms characterize schizophrenia. The most prominent include symptoms of psychosis—such as delusions and hallucinations—as well as bizarre behaviour, strange movements, and disorganized thinking and speech. Many people with schizophrenia do not recognize that their mental functioning is disturbed.

Some people with schizophrenia experience delusions of persecution—false beliefs that other people are plotting against them. This interview between a patient with schizophrenia and his therapist illustrates the paranoia that can affect people with this illness.

Delusions are false beliefs that appear obviously untrue to other people. For example, a person with schizophrenia may believe that he is the king of England when he is not. People with schizophrenia may have delusions that others, such as the police or the FBI, are plotting against them or spying on them. They may believe that aliens are controlling their thoughts or that their own thoughts are being broadcast to the world so that other people can hear them.

People with schizophrenia may also experience hallucinations (false sensory perceptions). People with hallucinations see, hear, smell, feel, or taste things that are not really there. Auditory hallucinations, such as hearing voices when no one else is around, are especially common in schizophrenia. These hallucinations may include two or more voices conversing with each other, voices that continually comment on the person’s life, or voices that command the person to do something.

People with schizophrenia often behave bizarrely. They may talk to themselves, walk backward, laugh suddenly without explanation, make funny faces, or masturbate in public. In rare cases, they maintain a rigid, bizarre pose for hours on end. Alternately, they may engage in constant random or repetitive movements.

People with schizophrenia sometimes talk in incoherent or nonsensical ways, which suggests confused or disorganized thinking. In conversation they may jump from topic to topic or string together loosely associated phrases. They may combine words and phrases in meaningless ways or make up new words. In addition, they may show poverty of speech, in which they talk less and more slowly than other people, fail to answer questions or reply only briefly, or suddenly stop talking in the middle of speech.

Another common characteristic of schizophrenia is social withdrawal. People with schizophrenia may avoid others or act as though others do not exist. They often show decreased emotional expressiveness. For example, they may talk in a low, monotonous voice, avoid eye contact with others, and display a blank facial expression. They may also have difficulties experiencing pleasure and may lack interest in participating in activities.

Other symptoms of schizophrenia include difficulties with memory, attention span, abstract thinking, and planning ahead. People with schizophrenia commonly have problems with anxiety, depression, and suicidal thoughts. In addition, people with schizophrenia are much more likely to abuse or become dependent upon drugs or alcohol than other people. The use of alcohol and drugs often worsens the symptoms of schizophrenia, resulting in relapses and hospitalizations.

Schizophrenia appears to result not from a single cause, but from a variety of factors. Most scientists believe that schizophrenia is a biological disease caused by genetic factors, an imbalance of chemicals in the brain, structural brain abnormalities, or abnormalities in the prenatal environment. In addition, stressful life events may contribute to the development of schizophrenia in those who are predisposed to the illness.

Research suggests that the genes one inherits strongly influence one’s risk of developing schizophrenia. Studies of families have shown that the more closely one is related to someone with schizophrenia, the greater the risk one has of developing the illness. For example, the children of one parent with schizophrenia have about a 13 percent chance of developing the illness, and children of two parents with schizophrenia have about a 46 percent chance of eventually developing schizophrenia. This increased risk occurs even when such children are adopted and raised by mentally healthy parents. In comparison, children in the general population have only about a 1 percent chance of developing schizophrenia.

Some evidence suggests that schizophrenia may result from an imbalance of chemicals in the brain called neurotransmitters. These chemicals enable neurons (brain cells) to communicate with each other. Some scientists suggest that schizophrenia results from excessive activity of the neurotransmitter dopamine in certain parts of the brain or from an abnormal sensitivity to dopamine. Support for this hypothesis comes from antipsychotic drugs, which reduce psychotic symptoms in schizophrenia by blocking brain receptors for dopamine. In addition, amphetamines, which increase dopamine activity, intensify psychotic symptoms in people with schizophrenia. Despite these findings, many experts believe that excess dopamine activity alone cannot account for schizophrenia. Other neurotransmitters, such as serotonin and norepinephrine, may play important roles as well.

Magnetic resonance imaging (MRI) reveals structural differences between a normal adult brain, left, and the brain of a person with schizophrenia, right. The schizophrenic brain has enlarged ventricles (fluid-filled cavities), shown in light gray. However, not all people with schizophrenia show this abnormality.

Brain imaging techniques, such as magnetic resonance imaging and positron emission tomography, have led researchers to discover specific structural abnormalities in the brains of people with schizophrenia. For example, people with chronic schizophrenia tend to have enlarged brain ventricles (cavities in the brain that contain cerebrospinal fluid). They also have a smaller overall volume of brain tissue compared to mentally healthy people. Other people with schizophrenia show abnormally low activity in the frontal lobe of the brain, which governs abstract thought, planning, and judgment. Research has identified possible abnormalities in many other parts of the brain, including the temporal lobes, basal ganglia, thalamus, hippocampus, and superior temporal gyrus. These defects may partially explain the abnormal thoughts, perceptions, and behaviours that characterize schizophrenia.

Evidence suggests that factors in the prenatal environment and during birth can increase the risk of a person later developing schizophrenia. These events are believed to affect the brain development of the fetus during a critical period. For example, pregnant women who have been exposed to the influenza virus or who have poor nutrition have a slightly increased chance of giving birth to a child who later develops schizophrenia. In addition, obstetric complications during the birth of a child (for example, delivery with forceps) can slightly increase the chances of the child later developing schizophrenia.

Although scientists favour a biological cause of schizophrenia, stress in the environment may affect the onset and course of the illness. Stressful life circumstances (such as growing up and living in poverty, the death of a loved one, an important change in jobs or relationships, or chronic tension and hostility at home) can increase the chances of schizophrenia in a person biologically predisposed to the disease. In addition, stressful events can trigger a relapse of symptoms in a person who already has the illness. Individuals who have effective skills for managing stress may be less susceptible to its negative effects. Psychological and social rehabilitation can help patients develop more effective skills for dealing with stress.

Although there is no cure for schizophrenia, effective treatment exists that can improve the long-term course of the illness. With many years of treatment and rehabilitation, significant numbers of people with schizophrenia experience partial or full remission of their symptoms.

Treatment of schizophrenia usually involves a combination of medication, rehabilitation, and treatment of other problems the person may have. Antipsychotic drugs (also called neuroleptics) are the most frequently used medications for treatment of schizophrenia. Psychological and social rehabilitation programs may help people with schizophrenia function in the community and reduce stress related to their symptoms. Treatment of secondary problems, such as substance abuse and infectious diseases, is also an important part of an overall treatment program.

Antipsychotic medications, developed in the mid-1950s, can dramatically improve the quality of life for people with schizophrenia. The drugs reduce or eliminate psychotic symptoms such as hallucinations and delusions. The medications can also help prevent these symptoms from returning. Common antipsychotic drugs include risperidone (Risperdal), olanzapine (Zyprexa), clozapine (Clozaril), quetiapine (Seroquel), haloperidol (Haldol), thioridazine (Mellaril), chlorpromazine (Thorazine), fluphenazine (Prolixin), and trifluoperazine (Stelazine). People with schizophrenia must usually take medication for the rest of their lives to control psychotic symptoms. Antipsychotic medications appear to be less effective at treating other symptoms of schizophrenia, such as social withdrawal and apathy.

Antipsychotic drugs help reduce symptoms in 80 to 90 percent of people with schizophrenia. However, those who benefit often stop taking medication because they do not understand that they are ill or because of unpleasant side effects. Minor side effects include weight gain, dry mouth, blurred vision, restlessness, constipation, dizziness, and drowsiness. Other side effects are more serious and debilitating. These may include muscle spasms or cramps, tremors, and tardive dyskinesia, an irreversible condition marked by uncontrollable movements of the lips, mouth, and tongue. Newer drugs, such as clozapine, olanzapine, risperidone, and quetiapine, tend to produce fewer of these side effects. However, clozapine can cause agranulocytosis, a significant reduction in white blood cells necessary to fight infections. This condition can be fatal if not detected early enough. For this reason, people taking clozapine must have weekly tests to monitor their blood.

Because many patients with schizophrenia continue to experience difficulties despite taking medication, psychological and social rehabilitation is often necessary. A variety of methods can be effective. Social skills training helps people with schizophrenia learn specific behaviours for functioning in society, such as making friends, purchasing items at a store, or initiating conversations. Behavioural training methods can also help them learn self-care skills such as personal hygiene, money management, and proper nutrition. In addition, cognitive-behavioural therapy, a type of psychotherapy, can help reduce persistent symptoms such as hallucinations, delusions, and social withdrawal.

Family intervention programs can also benefit people with schizophrenia. These programs focus on helping family members understand the nature and treatment of schizophrenia, how to monitor the illness, and how to help the patient make progress toward personal goals and greater independence. They can also lower the stress experienced by everyone in the family and help prevent the patient from relapsing or being re-hospitalized.

Because many patients have difficulty obtaining or keeping jobs, supported employment programs that help patients find and maintain jobs are a helpful part of rehabilitation. In these programs, the patient works alongside people without disabilities and earns competitive wages. An employment specialist (or vocational specialist) helps the person maintain their job by, for example, training the person in specific skills, helping the employer accommodate the person, arranging transportation, and monitoring performance. These programs are most effective when the supported employment is closely integrated with other aspects of treatment, such as medication and monitoring of symptoms.

Some people with schizophrenia are vulnerable to frequent crises because they do not regularly go to mental health centres to receive the treatment they need. These individuals often relapse and face re-hospitalization. To ensure that such patients take their medication and receive appropriate psychological and social rehabilitation, assertive community treatment (ACT) programs have been developed that deliver treatment to patients in natural settings, such as in their homes, in restaurants, or on the street.

People with schizophrenia often have other medical problems, so an effective treatment program must attend to these as well. One of the most common associated problems is substance abuse. Successful treatment of substance abuse in patients with schizophrenia requires careful coordination with their mental health care, so that the same clinicians are treating both disorders at the same time.

The high rate of substance abuse in patients with schizophrenia contributes to a high prevalence of infectious diseases, including hepatitis B and C and the human immunodeficiency virus (HIV). Assessment, education, and treatment or management of these illnesses is critical for the long ~ term health of patients.

Other problems frequently associated with schizophrenia include housing instability and homelessness, legal problems, violence, trauma and post-traumatic stress disorder, anxiety, depression, and suicide attempts. Close monitoring and psychotherapeutic interventions are often helpful in addressing these problems.

Several other psychiatric disorders are closely related to schizophrenia. In schizoaffective disorder, a person shows symptoms of schizophrenia combined with either mania or severe depression. Schizophreniform disorder refers to an illness in which a person experiences schizophrenic symptoms for more than one month but fewer than six months. In schizotypal personality disorder, a person engages in odd thinking, speech, and behaviour, but usually does not lose contact with reality. Mental health professionals refer to these disorders together as schizophrenia-spectrum disorders.

We now turn to personality disorders, in which one’s personality results in personal distress or significantly impairs social or work functioning. Every person has a personality—that is, a characteristic way of thinking, feeling, behaving, and relating to others. Most people experience at least some difficulties and problems that result from their personality. The specific point at which those problems justify the diagnosis of a personality disorder is controversial. To some extent the definitions of a personality disorder are arbitrary, reflecting subjective as well as professional judgments about the person’s degree of dysfunction, need for change, and motivation for change.

Personality disorders involve behaviour that deviates from the norms or expectations of one’s culture. However, people who deviate from cultural norms are not necessarily dysfunctional, nor are people who conform to cultural norms necessarily healthy. Many personality disorders represent extreme variants of behaviour patterns that people usually value and encourage. For example, most people value confidence but not arrogance, agreeableness but not submissiveness, and conscientiousness but not perfectionism.

Because no clear line exists between healthy and unhealthy functioning, critics question the reliability of personality disorder diagnoses. A behaviour that seems deviant to one person may seem normal to another depending on one’s gender, ethnicity, and cultural background. The personal and cultural biases of mental health professionals may influence their diagnoses of personality disorders.

An estimated 20 percent of people in the general population have one or more personality disorders. Some people with personality disorders have other mental illnesses as well. About 50 percent of people who are treated for any psychiatric disorder have a personality disorder.

Mental health professionals rarely diagnose personality disorders in children because their manner of thinking, feeling, and relating to others does not usually stabilize until young adulthood. Thereafter, personality traits usually remain stable. Personality disorders often decrease in severity as a person ages.

People with antisocial personality disorder act in a way that disregards the feelings and rights of other people. Antisocial personalities often break the law, and they may use or exploit other people for their own gain. They may lie repeatedly, act impulsively, and get into physical fights. They may mistreat their spouses, neglect or abuse their children, and exploit their employees. They may even kill other people. People with this disorder are also sometimes called sociopaths or psychopaths. Antisocial behaviour in people less than 18 years old is called conduct disorder.

Antisocial personalities usually fail to understand that their behaviour is dysfunctional because their ability to feel guilty, remorseful, and anxious is impaired. Guilt, remorse, shame, and anxiety are unpleasant feelings, but they are also necessary for social functioning and even physical survival. For example, people who lack the ability to feel anxious will often fail to anticipate actual dangers and risks. They may take chances that other people would not take.

Antisocial personality disorder affects about 3 percent of males and 1 percent of females. This is the most heavily researched personality disorder, in part because it costs society the most. People with this disorder are at high risk for premature and violent death, injury, imprisonment, loss of employment, bankruptcy, alcoholism, drug dependence, and failed personal relationships.

People with borderline personality disorder experience intense emotional instability, particularly in relationships with others. They may make frantic efforts to avoid real or imagined abandonment by others. They may experience minor problems as major crises. They may also express their anger, frustration, and dismay through suicidal gestures, self-mutilation, and other self-destructive acts. They tend to have an unstable self-image or sense of self.

As children, most people with this disorder were emotionally unstable, impulsive, and often bitter or angry, although their chaotic impulsiveness and intense emotions may have made them popular at school. At first they may impress people as stimulating and exciting, but their relationships tend to be unstable and explosive.

About 2 percent of all people have borderline personality disorder. About 75 percent of people with this disorder are female. Borderline personalities are at high risk for developing depression, alcoholism, drug dependence, bulimia, dissociative disorders, and post-traumatic stress disorder. As many as 10 percent of people with this disorder commit suicide by the age of 30. People with borderline personality disorder are among the most difficult to treat with psychotherapy, in part because their relationship with their therapist may become as intense and unstable as their other personal relationships.

Avoidant personality disorder is social withdrawal due to intense, anxious shyness. People with avoidant personalities are reluctant to interact with others unless they feel certain of being liked. They fear being criticized and rejected. Often they view themselves as socially inept and inferior to others.

Dependent personality disorder involves severe and disabling emotional dependency on others. People with this disorder have difficulty making decisions without a great deal of advice and reassurance from others. They urgently seek out another relationship when a close relationship ends. They feel uncomfortable by themselves.

People with histrionic personality disorder constantly strive to be the centre of attention. They may act overly flirtatious or dress in ways that draw attention. They may also talk in a dramatic or theatrical style and display exaggerated emotional reactions.

People with narcissistic personality disorder have a grandiose sense of self-importance. They seek excessive admiration from others and fantasize about unlimited success or power. They believe they are special, unique, or superior to others. However, they often have very fragile self-esteem.

Obsessive-compulsive personality disorder is characterized by a preoccupation with details, orderliness, perfection, and control. People with this disorder often devote excessive amounts of time to work and productivity and fail to take time for leisure activities and friendships. They tend to be rigid, formal, stubborn, and serious. This disorder differs from obsessive-compulsive disorder, which often includes more bizarre behaviour and rituals.

People with paranoid personality disorder feel constant suspicion and distrust toward other people. They believe that others are against them and constantly look for evidence to support their suspicions. They are hostile toward others and react angrily to perceived insults.

Schizoid personality disorder involves social isolation and a lack of desire for close personal relationships. People with this disorder prefer to be alone and seem withdrawn and emotionally detached. They seem indifferent to praise or criticism from other people.

People with schizotypal personality disorder engage in odd thinking, speech, and behaviour. They may ramble or use words and phrases in unusual ways, and they may believe they have magical control over others. They feel very uncomfortable with close personal relationships and tend to be suspicious of others. Some research suggests this disorder is a less severe form of schizophrenia.

Many psychiatrists and psychologists use two additional diagnoses. Depressive personality disorder is characterized by chronic pessimism, gloominess, and cheerlessness. In passive-aggressive personality disorder, a person passively resists completing tasks and chores, criticizes and scorns authority figures, and seems negative and sullen.

Personality disorders result from a complex interaction of inherited traits and life experience, not from a single cause. For example, some cases of antisocial personality disorder may result from a combination of a genetic predisposition to impulsiveness and violence, very inconsistent or erratic parenting, and a harsh environment that discourages feelings of empathy and warmth but rewards exploitation and aggressiveness. Borderline personality disorder may result from a genetic predisposition to impulsiveness and emotional instability combined with parental neglect, intense marital conflicts between parents, and repeated episodes of severe emotional or sexual abuse. Dependent personality disorder may result from genetically based anxiety, an inhibited temperament, and overly protective, clinging, or neglectful parenting.

The pervasive and chronic nature of personality disorders makes them difficult to treat. People with these disorders often fail to recognize that their personality has contributed to their social, occupational, and personal problems. They may not think they have any real problems despite a history of drug abuse, failed relationships, and irregular employment. Thus, therapists must first focus on helping the person understand and become aware of the significance of their personality traits.

People with personality disorders sometimes feel that they can never change their dysfunctional behaviour because they have always acted the same way. Although personality change is exceedingly difficult, sometimes people can change the most dysfunctional aspects of their feelings and behaviour.

Therapists use a variety of methods to treat personality disorders, depending on the specific disorder. For example, cognitive and behavioural techniques, such as role playing and logical argument, may help alter a person’s irrational perceptions and assumptions about himself or herself. Certain psychoactive drugs may help control feelings of anxiety, depression, or severe distortions of thought. Psychotherapy may help people to understand the impact of experiences and relationships during childhood.

Psychotherapy is usually ineffective for people with antisocial personality disorder because these individuals tend to be manipulative, unreliable, and dishonest with the therapist. Therefore, most mental health professionals favour removing people with this disorder from their current living situation and placing them in a residential treatment centre. Such residential programs strictly supervise patients’ behaviour and impose rigid, consistent rules and responsibilities. These programs appear to help some people, but it is unclear how long their beneficial effects last.

Therapists treating people with borderline personality disorder sometimes use a technique called dialectical behaviour therapy. In this type of therapy, the therapist initially focuses on reducing suicidal tendencies and other behaviours that disrupt treatment. The therapist then helps the person develop skills to cope with anger and self-destructive impulses. In addition, the person learns to achieve personal strength through an acceptance of the many disappointments and interpersonal conflicts that are a natural part of life.

Psychoactive drugs are chemical substances that alter mood, behaviour, perception, or mental functioning. Throughout history, many cultures have found ways to alter consciousness through the ingestion of substances. In current professional practice, psychoactive substances known as psychotropic drugs have been developed to treat patients with severe mental illness.

Psychoactive substances exert their effects by modifying biochemical or physiological processes in the brain. The message system of nerve cells, or neurons, relies on both electrical and chemical transmission. Neurons rarely touch each other; the microscopic gap between one neuron and the next, called the synapse, is bridged by chemicals called neuroregulators, or neurotransmitters. Psychoactive drugs act by altering neurotransmitter function. The drugs can be divided into six major pharmacological classes based on their desired behavioural or psychological effect: alcohol, sedative-hypnotics, narcotic analgesics, stimulant-euphoriants, hallucinogens, and psychotropic agents.

Alcohol has always been the most widely used psychoactive substance. In most countries it is the only psychoactive drug legally available without prescription. Pleasant relaxation is commonly the desired effect, but intoxication impairs judgment and motor performance. When used chronically, alcohol can be toxic to liver and brain cells and can be physiologically addicting, producing dangerous withdrawal syndromes.

Sedative-hypnotics, such as the barbiturates and diazepam (widely known under the brand name Valium), include brain depressants, which are used medically to help people sleep (sleeping pills), and antianxiety agents, which are used to calm people without inducing sleep. Sedative-hypnotics are used illegally to produce relaxation, tranquillity, and euphoria. Overdoses of sedative-hypnotics can be fatal; all can be physiologically addicting, and some can cause a life-threatening withdrawal syndrome.

Narcotic analgesics (opiates such as morphine and heroin) are prescribed to produce analgesia. Because the relief of pain is one of the primary tasks of medical treatment, opiates have been among the most important and valuable drugs in medicine. Illegal use of narcotic analgesics involves injecting these substances, particularly heroin, into the veins to produce euphoria. Opiates are physiologically addicting and can produce a quite unpleasant withdrawal syndrome.

Stimulant-euphoriants, such as amphetamines, are prescribed by physicians to suppress the appetite and to treat children often diagnosed as hyperactive. Although amphetamines stimulate adults, they have a paradoxically calming effect on certain children who have short attention spans and are hyperactive. Cocaine is used medically as a local anesthetic. Amphetamines and cocaine are used illegally to produce alertness and euphoria, to prevent drowsiness, and to improve performance in physical and mental tasks such as athletic events and college examinations.

Hallucinogens (psychedelic drugs such as LSD, mescaline, and PCP) thus far have little medical use. They are taken illegally to alter perception and thinking patterns. Marijuana is a weak hallucinogen that may be medically useful in suppressing the nausea caused by cancer treatments and possibly in reducing eye pressure in certain severe glaucomas.

Psychotropic drugs have been in use since the early 1950s. Antipsychotic drugs decrease the symptoms of schizophrenia, allowing many schizophrenic patients to leave the hospital and rejoin community life. Antidepressant drugs help the majority of patients with severe depression recover from their disorder. Lithium salts eliminate or diminish the episodes of mania and depression experienced by manic-depressive patients.

Lithium, prescribed in salt form as a mood stabilizer, is a silvery-white, chemically reactive metallic element that is the lightest in weight of all metals. In group 1 (or Ia) of the periodic table, lithium is one of the alkali metals. The atomic number of lithium is 3.

Discovery of the element is generally credited to Johann A. Arfvedson in 1817. Chemically, lithium resembles sodium in its behaviour. Lithium is obtained by the electrolysis of a fused mixture of lithium chloride and potassium chloride. It tarnishes instantaneously and corrodes rapidly upon exposure to air; when it is stored it must be immersed in a liquid such as naphtha. Lithium ranks 35th in order of abundance of the elements in Earth’s crust. It does not occur in nature in the free state but only in compounds, which are widely distributed. The metal is used as a deoxidizer and to remove unwanted gases during the manufacture of nonferrous castings. Lithium vapour is used to prevent carbon dioxide and oxygen from forming scale in furnaces in the heat-treating of steel. Important compounds of lithium include the hydroxide, used for bonding carbon dioxide in the ventilator systems of spacecraft and submarines; and the hydride, used to inflate lifeboats, and its heavy hydrogen (deuterium) equivalent, used in making the hydrogen bomb. Lithium carbonate, a common lithium compound, is used in the treatment of bipolar disorder and some forms of depression.

Lithium melts at about 181°C (about 358°F), boils at about 1342°C (about 2448°F), and has a specific gravity of 0.53. The atomic weight of lithium is 6.941.
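The Celsius-to-Fahrenheit conversions quoted above can be verified with a few lines of arithmetic; this is a quick sanity check rather than part of the original article, and the helper name c_to_f is our own:

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9.0 / 5.0 + 32.0

# Melting point: about 181 degrees C should be about 358 degrees F.
print(round(c_to_f(181)))   # 358 (exact value 357.8)

# Boiling point: about 1342 degrees C should be about 2448 degrees F.
print(round(c_to_f(1342)))  # 2448 (exact value 2447.6)
```

Both rounded results match the figures given in the text.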

Hysteria is a type of mental illness in which emotionally laden mental conflicts appear as physical symptoms, called conversion reactions, or as severe mental dissociation. In modern psychological classification, hysteria is known as somatization disorder or conversion disorder, depending on the specific symptoms displayed. Psychiatric diagnosis of hysteria depends on recognition of a mental conflict and of the unconscious connections between conflict and symptoms. The term mass hysteria is applied to situations in which a large group of people exhibit the same kinds of physical symptoms with no organic cause. For example, one incident of mass hysteria reported in 1977 involved 57 members of a high school marching band who experienced headache, nausea, dizziness, and fainting after a football game. After a fruitless search for organic causes, researchers concluded that a heat reaction among a few band members had spread by emotional suggestion to other members of the band. The term collective stress reaction is now preferred for these situations.

French neurologist Jean Martin Charcot shows colleagues a female patient with hysteria at La Salpêtrière, a Paris hospital. Charcot gained renown throughout Europe for his method of treating hysteria and other “nervous disorders” through hypnosis. Charcot’s belief that hysteria had psychological rather than physical origins influenced Austrian neurologist Sigmund Freud, who studied under Charcot.

Under the stress of mental conflict, anyone may react temporarily with physical symptoms. In conversion reactions, mental conflicts are unconsciously converted to symptoms that appear to be physical, but no organic cause is found. Common symptoms of conversion reactions include muscular paralysis, blindness, deafness, and tremors.

Patients with conversion reactions may have periods of intense emotion and defective power of self-observation. In such a mental condition, patients may interact with others in a bizarre way. Extreme symptoms of dissociation are shown in dissociative fugue, in which a person forgets his or her identity and unexpectedly wanders away from home.

The ancient Greeks accounted for the instability and mobility of physical symptoms and of attacks of emotional disturbance in women, when these were otherwise unaccountable, by a theory that the womb somehow became transplanted to different positions. This “wandering of the uterus” theory gave the name hysteria (Greek hystera, “uterus”) to disease phenomena characterized by highly emotional behaviour. During the Middle Ages hysteria was attributed to demonic possession and to witchcraft, which led to persecution.

As the sciences of anatomy and physiology developed in the 19th century, a tendency to interpret all mental phenomena in terms of diseased structure of the brain became apparent in medical circles. At the end of the 19th century, however, the French neurologist Jean Martin Charcot demonstrated that morbid ideas could produce physical manifestations. Subsequently his pupil, the French psychologist Pierre Janet, formulated a description of hysteria as a psychological disorder. Later Austrian psychoanalyst Sigmund Freud began to develop the theory that hysterical symptoms are the result of conflict between the social and ethical standards of an individual and an unsuccessfully repressed wish.

Modern treatment of hysteria consists of some form of psychotherapy, in some cases prolonged analytic psychotherapy or psychoanalysis. For cases of acute hysteria associated with anxiety, tranquillizing medication may also be necessary.

Somatoform disorders are characterized by the presence of physical symptoms that cannot be explained by a medical condition or another mental illness. Thus, physicians often judge that such symptoms result from psychological conflicts or distress. For example, in conversion disorder, also called hysteria, a person may experience blindness, deafness, or seizures, but a physician cannot find anything wrong with the person. People with another somatoform disorder, hypochondriasis, constantly fear that they will develop a serious disease and misinterpret minor physical symptoms as evidence of illness. The term somatoform comes from the Greek word soma, meaning “body.”

Franz Anton Mesmer (1734-1815), an Austrian physician, is known for inducing a trancelike state, called mesmerism, as a curative agent. Mesmer was born near Konstanz, Germany, and educated at Vienna University. About 1772 he asserted the existence of a power, similar to magnetism, that exercises an extraordinary influence on the human body. This power he called animal magnetism, and in 1775 he published an account of his discovery, claiming that it had medicinal value. Mesmer successfully used his new system, which was a type of hypnotism, to cure patients. His technique received some support among members of the medical profession. In 1784 the French government was induced to appoint an investigative commission composed of physicians and scientists, but the committee's report was unfavourable to Mesmer's theory. Mesmer subsequently fell into disrepute and spent the rest of his life in obscurity. Since Mesmer's day the subject has been elevated from the domain of charlatanism to that of scientific research. The mesmeric trance is today identified as hypnosis, and its value in the management of certain medical conditions has been widely recognized.

Hypnosis is an altered state of consciousness characterized by heightened responsiveness to suggestion; it may be induced in normal persons by a variety of methods and has been used occasionally in medical and psychiatric treatment. Most frequently hypnosis is brought about through the actions of an operator, the hypnotist, who engages the attention of a subject and assigns certain tasks to him or her while uttering monotonous, repetitive verbal commands; such tasks may include muscle relaxation, eye fixation, and arm levitation. Hypnosis also may be self-induced, by trained relaxation, concentration on one's own breathing, or by a variety of monotonous practices and rituals that are found in many mystical, philosophical, and religious systems.

Hypnosis results in the gradual assumption by the subject of a state of consciousness in which attention is withdrawn from the outside world and is concentrated on mental, sensory, and physiological experiences. When a hypnotist induces a trance, a close relationship or rapport develops between operator and subject. The responses of subjects in the trance state, and the phenomena or behaviour they manifest objectively, are the product of their motivational set; that is, behaviour reflects what is being sought from the experience.

Most people can be easily hypnotized, but the depth of the trance varies widely. A profound trance is characterized by a forgetting of trance events and by an ability to respond automatically to posthypnotic suggestions that are not too anxiety-provoking. The depth of trance achievable is a relatively fixed characteristic, dependent on the emotional condition of the subject and on the skill of the hypnotist. Only 20 percent of subjects are capable of entering somnambulistic states through the usual methods of induction. Medically, this percentage is not significant, since therapeutic effects occur even in a light trance.

Hypnosis can produce a deeper contact with one's emotional life, resulting in some lifting of repressions and exposure of buried fears and conflicts. This effect potentially lends itself to medical and educational use, but it also lends itself to misinterpretation. Thus, the revival through hypnosis of early, forgotten memories may be fused with fantasies. Research into hypnotically induced memories in recent years has in fact stressed their uncertain reliability. For this reason a number of state court systems in the US have placed increasing constraints on the use of evidence hypnotically obtained from witnesses, although most states still permit its introduction in court.

Hypnosis has been used to treat a variety of physiological and behavioural problems. It can alleviate back pain and pain resulting from burns and cancer. It has been used by some obstetricians as the sole analgesia for normal childbirth. Hypnosis is sometimes also employed to treat physical problems with a possible psychological component, such as Raynaud's syndrome (a circulatory disease) and faecal incontinence in children. Researchers have demonstrated that the benefit of hypnosis is greater than the effect of a placebo and probably results from changing the focus of attention. Few physicians, however, include hypnosis as part of their practice.

Some behavioural difficulties, such as cigarette smoking, overeating, and insomnia, are also amenable to resolution through hypnosis. Nonetheless, most psychiatrists think that fundamental psychiatric illness is better treated with the patient in a normal state of consciousness.

Carl Jung, the Swiss psychiatrist who founded the analytical school of psychology, broadened Sigmund Freud's psychoanalytical approach, interpreting mental and emotional disturbances as an attempt to find personal and spiritual wholeness.

Born on July 26, 1875, in Kesswil, Switzerland, the son of a Protestant clergyman, Jung developed during his lonely childhood an inclination for dreaming and fantasy that greatly influenced his adult work. After graduating in medicine in 1902 from the universities of Basel and Zürich, with a wide background in biology, zoology, paleontology, and archaeology, he began his work on word association, in which a patient's responses to stimulus words revealed what Jung called “complexes”—a term that has since become universal. These studies brought him international renown and led him to a close collaboration with Freud. With the publication of Psychology of the Unconscious (1912; trans. 1916), however, Jung declared his independence from Freud's narrowly sexual interpretation of the libido by showing the close parallels between ancient myths and psychotic fantasies and by explaining human motivation in terms of a larger creative energy. He gave up the presidency of the International Psychoanalytic Society and co-founded a movement called analytical psychology.

During his remaining 50 years Jung developed his theories, drawing on a wide knowledge of mythology and history; travels to diverse cultures in New Mexico, India, and Kenya; and especially the dreams and fantasies of his childhood. In 1921 he published a major work, Psychological Types (trans. 1923), in which he dealt with the relationship between the conscious and unconscious and proposed the now well-known personality types, extrovert and introvert. He later made a distinction between the personal unconscious, or the repressed feelings and thoughts developed during an individual's life, and the collective unconscious, or those inherited feelings, thoughts, and memories shared by all humanity. The collective unconscious, according to Jung, is made up of what he called “archetypes,” or primordial images. These correspond to such experiences as confronting death or choosing a mate and manifest themselves symbolically in religions, myths, fairy tales, and fantasies.

Jung's therapeutic approach aimed at reconciling the diverse states of personality, which he saw divided not only into the opposites of introvert and extrovert, but also into those of sensing and intuiting, and of feeling and thinking. By understanding how the personal unconscious integrates with the collective unconscious, Jung theorized, a patient can achieve a state of individuation, or wholeness of self.

Jung wrote voluminously, especially on analytical methods and the relationships between psychotherapy and religious belief. He died on June 6, 1961, in Küsnacht.

Alfred Adler (1870-1937) was an Austrian psychologist and psychiatrist, born in Vienna and educated at Vienna University. After leaving the university he studied with and was associated with Sigmund Freud, the founder of psychoanalysis. In 1911 Adler left the orthodox psychoanalytic school to found a neo-Freudian school of psychoanalysis. After 1926 he was a visiting professor at Columbia University, and in 1935 he and his family moved to the United States.

In his analysis of individual development, Adler stressed the sense of inferiority, rather than sexual drives, as the motivating force in human life. According to Adler, conscious or subconscious feelings of inferiority (to which he gave the name inferiority complex), combined with compensatory defence mechanisms, are the basic causes of psychopathological behaviour. The function of the psychoanalyst, furthermore, is to discover and rationalize such feelings and break down the compensatory, neurotic will for power that they engender in the patient. Adler's works include The Theory and Practice of Individual Psychology (1918) and The Pattern of Life (1930).

Friedrich Nietzsche (1844-1900), German philosopher, poet, and classical philologist, was one of the most provocative and influential thinkers of the 19th century. Nietzsche founded his morality on what he saw as the most basic human drive, the will to power. Nietzsche criticized Christianity and other philosophers’ moral systems as “slave moralities” because, in his view, they chained all members of society with universal rules of ethics. Nietzsche offered, in contrast, a “master morality” that prized the creative influence of powerful individuals who transcended the common rules of society.

One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that “God is dead,” or that traditional morality was no longer relevant in people’s lives. In the book, the sage Zarathustra comes down from the mountain where he has spent ten years alone to preach to the people.

Nietzsche was born in Röcken, Prussia. His father, a Lutheran minister, died when Nietzsche was five, and Nietzsche was raised by his mother in a home that included his grandmother, two aunts, and a sister. He studied classical philology at the universities of Bonn and Leipzig and was appointed professor of classical philology at the University of Basel at the age of 24. Ill health (he was plagued throughout his life by poor eyesight and migraine headaches) forced his retirement in 1879. Ten years later he suffered a mental breakdown from which he never recovered. He died in Weimar in 1900.

In addition to the influence of Greek culture, particularly the philosophies of Plato and Aristotle, Nietzsche was influenced by German philosopher Arthur Schopenhauer, by the theory of evolution, and by his friendship with German composer Richard Wagner.

Nietzsche’s first major work, Die Geburt der Tragödie aus dem Geiste der Musik (The Birth of Tragedy), appeared in 1872. His most prolific period as an author was the 1880s. During the decade he wrote Also sprach Zarathustra (Parts I-III, 1883-1884; Part IV, 1885; translated as Thus Spake Zarathustra); Jenseits von Gut und Böse (1886; Beyond Good and Evil); Zur Genealogie der Moral (1887; On the Genealogy of Morals); Der Antichrist (1888; The Antichrist); and Ecce Homo (completed 1888, published 1908). Nietzsche’s last major work, The Will to Power (Der Wille zur Macht), was published in 1901.

One of Nietzsche’s fundamental contentions was that traditional values (represented primarily by Christianity) had lost their power in the lives of individuals. He expressed this in his proclamation “God is dead.” He was convinced that traditional values represented a “slave morality,” a morality created by weak and resentful individuals who encouraged such behaviour as gentleness and kindness because the behaviour served their interests. Nietzsche claimed that new values could be created to replace the traditional ones, and his discussion of the possibility led to his concept of the overman or superman.

According to Nietzsche, the masses (whom he termed the herd or mob) conform to tradition, whereas his ideal overman is secure, independent, and highly individualistic. The overman feels deeply, but his passions are rationally controlled. Concentrating on the real world, rather than on the rewards of the next world promised by religion, the overman affirms life, including the suffering and pain that accompany human existence. Nietzsche’s overman is a creator of values, a creator of a “master morality” that reflects the strength and independence of one who is liberated from all values, except those that he deems valid.

Nietzsche maintained that all human behaviour is motivated by the will to power. In its positive sense, the will to power is not simply power over others, but the power over oneself that is necessary for creativity. Such power is manifested in the overman's independence, creativity, and originality. Although Nietzsche explicitly denied that any overmen had yet arisen, he mentions several individuals who could serve as models. Among these models he lists Jesus, Greek philosopher Socrates, Florentine thinker Leonardo da Vinci, Italian artist Michelangelo, English playwright William Shakespeare, German author Johann Wolfgang von Goethe, Roman ruler Julius Caesar, and French emperor Napoleon I.

The concept of the overman has often been interpreted as one that postulates a master-slave society and has been identified with totalitarian philosophies. Many scholars deny the connection and attribute it to misinterpretation of Nietzsche's work.

An acclaimed poet, Nietzsche exerted much influence on German literature, as well as on French literature and theology. His concepts have been discussed and elaborated upon by such individuals as the German philosophers Karl Jaspers and Martin Heidegger, the German Jewish philosopher Martin Buber, the German American theologian Paul Tillich, and the French writers Albert Camus and Jean-Paul Sartre. After World War II (1939-1945), the American theologians Thomas J.J. Altizer and Paul Van Buren seized upon Nietzsche's proclamation “God is dead” in their attempt to make Christianity relevant to its believers in the 1960s and 1970s.

Many of the figures mentioned above were at some time influenced by, or affiliated with, Sigmund Freud, who was born in Freiberg (now Příbor, Czech Republic), on May 6, 1856, and educated at Vienna University. When he was three years old his family, fleeing the anti-Semitic riots then raging in Freiberg, moved to Leipzig. Shortly thereafter, the family settled in Vienna, where Freud remained for most of his life.

Although Freud’s ambition from childhood had been a career in law, he decided to become a medical student shortly before he entered Vienna University in 1873. Inspired by the scientific investigations of the German poet Goethe, Freud was driven by an intense desire to study natural science and to solve some of the challenging problems confronting contemporary scientists.

In his third year at the university Freud began research work on the central nervous system in the physiological laboratory under the direction of the German physician Ernst Wilhelm von Brücke. Neurological research was so engrossing that Freud neglected the prescribed courses and as a result remained in medical school three years longer than was normally required to qualify as a physician. In 1881, after completing a year of compulsory military service, he received his medical degree. Unwilling to give up his experimental work, however, he remained at the university as a demonstrator in the physiological laboratory. In 1883, at Brücke’s urging, he reluctantly abandoned theoretical research to gain practical experience.

Freud spent three years at the General Hospital of Vienna, devoting himself successively to psychiatry, dermatology, and nervous diseases. In 1885, following his appointment as a lecturer in neuropathology at Vienna University, he left his post at the hospital. Later the same year he was awarded a government grant enabling him to spend 19 weeks in Paris as a student of the French neurologist Jean-Martin Charcot. Charcot, who was the director of the clinic at the mental hospital, the Salpêtrière, was then treating nervous disorders by the use of hypnotic suggestion. Freud’s studies under Charcot, which centered largely on hysteria, influenced him greatly in channeling his interests to psychopathology.

In 1886 Freud established a private practice in Vienna specializing in nervous disease. He met with violent opposition from the Viennese medical profession because of his strong support of Charcot’s unorthodox views on hysteria and hypnotherapy. The resentment he incurred was to delay any acceptance of his subsequent findings on the origin of neurosis.

In 1909 pioneers of the growing psychoanalytic movement assembled at Clark University to hear lectures by Sigmund Freud, the founder of psychoanalysis. The group included, top row, left to right, A.A. Brill, Ernest Jones, Sándor Ferenczi, and bottom row, Freud, Clark University President G. Stanley Hall, and Swiss psychiatrist Carl G. Jung. The visit, the only one Freud made to the United States, broadened the influence and popularity of psychoanalysis.

Freud’s first published work, On Aphasia, appeared in 1891; it was a study of the neurological disorder in which the ability to pronounce words or to name common objects is lost as a result of organic brain disease. His final work in neurology, an article, “Infantile Cerebral Paralysis,” was written in 1897 for an Encyclopédie only at the insistence of the editor, since by this time Freud was occupied largely with psychological rather than physiological explanations for mental illnesses. His subsequent writings were devoted entirely to that field, which he had named psychoanalysis in 1896.

Freud’s new orientation was heralded by his collaborative work on hysteria with the Viennese physician Josef Breuer. The work was presented in 1893 in a preliminary paper and two years later in an expanded form under the title Studies on Hysteria. In this work the symptoms of hysteria were ascribed to manifestations of undischarged emotional energy associated with forgotten psychic traumas. The therapeutic procedure involved the use of a hypnotic state in which the patient was led to recall and reenact the traumatic experience, thus discharging by catharsis the emotions causing the symptoms. The publication of this work marked the beginning of psychoanalytic theory formulated on the basis of clinical observations.

During the period from 1895 to 1900 Freud developed many of the concepts that were later incorporated into psychoanalytic practice and doctrine. Soon after publishing the studies on hysteria he abandoned the use of hypnosis as a cathartic procedure and substituted the investigation of the patient’s spontaneous flow of thoughts, called free association, to reveal the unconscious mental processes at the root of the neurotic disturbance.

In his clinical observations Freud found evidence for the mental mechanisms of repression and resistance. He described repression as a device operating unconsciously to make the memory of painful or threatening events inaccessible to the conscious mind. Resistance is defined as the unconscious defence against awareness of repressed experiences in order to avoid the resulting anxiety. He traced the operation of unconscious processes, using the free associations of the patient to guide him in the interpretation of dreams and slips of speech. Dream analysis led to his discoveries of infantile sexuality and of the so-called Oedipus complex, which constitutes the erotic attachment of the child for the parent of the opposite sex, together with hostile feelings toward the other parent. In these years he also developed the theory of transference, the process by which emotional attitudes, established originally toward parental figures in childhood, are transferred in later life to others. The end of this period was marked by the appearance of Freud’s most important work, The Interpretation of Dreams (1899). Here Freud analysed many of his own dreams recorded in the 3-year period of his self-analysis, begun in 1897. This work expounds all the fundamental concepts underlying psychoanalytic technique and doctrine.

In 1902 Freud was appointed a full professor at Vienna University. This honour was granted not in recognition of his contributions but as a result of the efforts of a highly influential patient. The medical world still regarded his work with hostility, and his next writings, The Psychopathology of Everyday Life (1904) and Three Contributions to the Sexual Theory (1905), only increased this antagonism. As a result Freud continued to work virtually alone in what he termed “splendid isolation.”

By 1906, however, a small number of pupils and followers had gathered around Freud, including the Austrian psychiatrists Wilhelm Stekel and Alfred Adler, the Austrian psychologist Otto Rank, the American psychiatrist Abraham Brill, and the Swiss psychiatrists Eugen Bleuler and Carl Jung. Other notable associates, who joined the circle in 1908, were the Hungarian psychiatrist Sándor Ferenczi and the British psychiatrist Ernest Jones.

Austrian doctor Sigmund Freud spent many hours refining his theories in this study of his home in Vienna, Austria. Freud pioneered the use of clinical observation to treat mental disease. The publication of The Interpretation of Dreams in 1899 detailed his technique of isolating the source of psychological problems by examining a patient’s spontaneous stream of thought.

Increasing recognition of the psychoanalytic movement made possible the formation in 1910 of a worldwide organization called the International Psychoanalytic Association. As the movement spread, gaining new adherents throughout Europe and the United States, Freud was troubled by the dissension that arose among members of his original circle. Most disturbing were the defections from the group of Adler and Jung, each of whom developed a different theoretical basis for disagreement with Freud’s emphasis on the sexual origin of neurosis. Freud met these setbacks by developing further his basic concepts and by elaborating his own views in many publications and lectures.

After the onset of World War I Freud devoted little time to clinical observation and concentrated on the application of his theories to the interpretation of religion, mythology, art, and literature. In 1923 he was stricken with cancer of the jaw, which necessitated constant, painful treatment in addition to many surgical operations. Despite his physical suffering he continued his literary activity for the next 16 years, writing mostly on cultural and philosophical problems.

When the Germans occupied Austria in 1938, Freud, a Jew, was persuaded by friends to escape with his family to England. He died in London on September 23, 1939.

Freud created an entirely new approach to the understanding of human personality by his demonstration of the existence and force of the unconscious. In addition, he founded a new medical discipline and formulated basic therapeutic procedures that in modified form are applied widely in the present-day treatment of neuroses and psychoses. Although never accorded full recognition during his lifetime, Freud is generally acknowledged as one of the great creative minds of modern times.

Among his other works are Totem and Taboo (1913), The Ego and the Id (1923), New Introductory Lectures on Psychoanalysis (1933), and Moses and Monotheism (1939).

Nonetheless, among the philosophies touched upon by these thinkers one encounters Existentialism, a philosophical movement or tendency emphasizing individual existence, freedom, and choice, which influenced many diverse writers in the 19th and 20th centuries.

Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.

Most philosophers since Plato have held that the highest ethical good is the same for everyone; insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existential, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, “I must find a truth that is true for me . . . the idea for which I can live or die.” Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.

All existentialists have followed Kierkegaard in stressing the importance of passionate individual action in deciding questions of both morality and truth. They have insisted, accordingly, that personal experience and acting on one's own convictions are essential in arriving at the truth. Thus, the understanding of a situation by someone involved in that situation is superior to that of a detached, objective observer. This emphasis on the perspective of the individual agent has also made existentialists suspicious of systematic reasoning. Kierkegaard, Nietzsche, and other existentialist writers have been deliberately unsystematic in the exposition of their philosophies, preferring to express themselves in aphorisms, dialogues, parables, and other literary forms. Despite their antirationalist position, however, most existentialists cannot be said to be irrationalists in the sense of denying all validity to rational thought. They have held that rational clarity is desirable wherever possible, but that the most important questions in life are not accessible to reason or science. Furthermore, they have argued that even science is not as rational as is commonly supposed. Nietzsche, for instance, asserted that the scientific assumption of an orderly universe is for the most part a useful fiction.

Perhaps the most prominent theme in existentialist writing is that of choice. Humanity's primary distinction, in the view of most existentialists, is the freedom to choose. Existentialists have held that human beings do not have a fixed nature, or essence, as other animals and plants do; each human being makes choices that create his or her own nature. In the formulation of the 20th-century French philosopher Jean-Paul Sartre, existence precedes essence. Choice is therefore central to human existence, and it is inescapable; even the refusal to choose is a choice. Freedom of choice entails commitment and responsibility. Because individuals are free to choose their own path, existentialists have argued, they must accept the risk and responsibility of following their commitment wherever it leads.

Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God's way of calling each individual to make a commitment to a personally valid way of life. The word anxiety (German Angst) has a similarly crucial role in the work of the 20th-century German philosopher Martin Heidegger; anxiety leads to the individual's confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual's recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.

Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries, but elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many premodern philosophers and writers.

The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: The human self, which combines mind and body, is itself a paradox and contradiction.

Nineteenth-century Danish philosopher Søren Kierkegaard played a major role in the development of existentialist thought. Kierkegaard criticized the popular systematic method of rational philosophy advocated by German Georg Wilhelm Friedrich Hegel. He emphasized the absurdity inherent in human life and questioned how any systematic philosophy could apply to the ambiguous human condition. In Kierkegaard’s deliberately unsystematic works, he explained that each individual should attempt an intense examination of his or her own existence.

Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual's response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual therefore must always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a “leap of faith” into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.

Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as German philosopher G. W. F. Hegel. Instead, Kierkegaard focussed on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1846; trans. 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham demonstrate his faith by sacrificing his son.

One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that “God is dead,” or that traditional morality was no longer relevant in people’s lives. In this passage, the sage Zarathustra came down from the mountain where he had spent the last ten years alone to preach to the people.

Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the “death of God” and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.

The modern philosophy movements of phenomenology and existentialism have been greatly influenced by the thought of German philosopher Martin Heidegger. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of Being (Heidegger's term for that which underlies all existence).

Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis—in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology as well as on language.

Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre’s work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that “man is condemned to be free,” Sartre reminds us of the responsibility that accompanies human decisions.

Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a “futile passion.” Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.

Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially that a personal sense of authenticity and commitment is essential to religious faith.

Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels that probed the motivations and moral justifications for his characters’ actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky’s best work, interlaces religious exploration with the story of a family’s violent quarrels over a woman and a disputed inheritance.

Twentieth-century writer and philosopher Albert Camus examined what he considered the tragic inability of human beings to understand and transcend their intolerable conditions. In his work Camus presented an absurd and seemingly unreasonable world in which some people futilely struggle to find meaning and rationality while others simply refuse to care. For example, the main character of The Stranger (1942) kills a man on a beach for no reason and accepts his arrest and punishment with dispassion. In contrast, in The Plague (1947), Camus introduces characters who act with courage in the face of absurdity.

A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), “We must love life more than the meaning of it.”

The opening lines of Russian novelist Fyodor Dostoyevsky’s Notes from Underground (1864)—“I am a sick man.… I am a spiteful man”—are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground is a sign of Dostoyevsky’s rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader’s sense of morality as well as the foundations of rational thinking. In this excerpt from the beginning of the novel, the narrator describes himself, derisively referring to himself as an “overly conscious” intellectual.

In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; trans. 1937) and The Castle (1926; trans. 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theatre of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.

In the treatise Being and Nothingness, French writer Jean-Paul Sartre presents his existential philosophical framework. He reasons that the essential nothingness of human existence leaves individuals to take sole responsibility for their own actions. Shunning the morality and constraints of society, individuals must embrace personal responsibility to craft a world for themselves. Along with focussing on the importance of exercising individual responsibility, Sartre stresses that the understanding of freedom of choice is the only means of authenticating human existence. A novelist and playwright as well as a philosopher, Sartre became a leader of the modern existentialist movement.

The more we understand, the more we must concede to holism as an inescapable condition of our physical existence. According to field theory, each element of a system in a certain sense exists simultaneously, at any one time, in every part of the space occupied by the system. Its physical reality must be described by continuous functions in space. The material point, therefore, can hardly be retained any longer as the basic concept of the theory.

A human being is part of the whole, and he experiences himself, his thoughts and feelings, as something separate from the rest - a kind of optical illusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest to us. Our task must be to free ourselves from this prison by widening our circle of compassion to embrace all living creatures and the whole of nature in its beauty. Nobody can achieve this completely, but the striving for such achievement is in itself a part of the liberation and a foundation for inner security.

The more the universe seems comprehensible, the more it seems pointless; on this view, life is merely a disease of matter. Any attempt to preserve this view, I think, not only requires metaphysical leaps that result in unacceptable levels of ambiguity, but also fails to meet the requirement that testability is necessary to confirm the validity of any theoretical undertaking.

From the start, the languages of biblical literature were regarded as equally valid sources of communion with the eternal and immutable truths existing in the mind of God. Yet the extant documents alone comprise more than a million words in his own hand, and some of his speculations seem quite bizarre by contemporary standards; not least, they suggest a sacred union resting on an unexamined article of faith, expanding our worship upon some altar of an unknown god.

Our consciousness, as it has evolved across generations, displays a striking unity. Unified consciousness can take more than one form; nonetheless, when we are consciously aware of a communicable conscious content, we are aware of several conscious states that seem assembled together.

In studying the phenomenon of consciousness, no assumption can be taken for granted, and no thoughtful conclusion should be lightly dismissed as fallacious. Nonetheless, exercising intellectual humility and caution, we must try to move ahead to reach some positive conclusions on the topic.

Our consciousness displays a striking unity, and unified consciousness can take more than one form. Mental states are given to us as interconnected: I am aware not of 'A', of 'B', and of 'C' separately and independently, but of 'A-and-B-and-C' simultaneously, or better, as all parts of the content of a single conscious state. Since the time of Kant, this phenomenon has been called the 'unity of consciousness'.

Historically, the notion of the unity of consciousness has played a very large role in thought about the mind. In fact, it figured centrally in most influential arguments about the mind from the time of Descartes to the 20th century. In the early part of the 20th century, the notion largely disappeared for a time; analytic philosophers began to pay attention to it again only in the 1960s. This history up to the twentieth century can be sketched briefly. We should then delineate the unity of consciousness more carefully and examine some evidence from neuropsychology, because both are necessary to understand the recent work on the issue.

Descartes asserted that because he could distinguish no parts within the mind, the mind cannot be made of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this would be enough to prove dualism by itself, had he not already proved it elsewhere. Where no parts can be distinguished, on this view, unified consciousness cannot have been assembled from components; it must be the achievement of something that is itself undivided.

Consider next another, moderately complex argument based on unified consciousness. The conclusion will be that no system of components acting in concert could ever achieve unified consciousness. William James' well-known version of the argument starts as follows: Take a sentence of a dozen words, take twelve men, and give one word to each. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.

James generalizes this observation to all conscious states. To get dualism out of this, we need to add the premise that if the mind were made out of matter, conscious states would have to be distributed over some group of material components in some relevant way. The thought experiment is meant to show that conscious states cannot be so distributed. Therefore, the conscious mind is not made out of matter; in recounting the argument this way, James is making use of the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first argument, this one goes all the way back to Descartes. Versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).

Kant did not think that we could know anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material fail, and he set out to do just this in the chapter of the Critique of Pure Reason on the Paralogisms of Pure Reason (1781) (paralogisms are faulty inferences about the nature of the mind). The Unity Argument is the target of a major part of that chapter: if one is going to show that we cannot know what the mind is like, one must dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant's argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, although no philosopher has ever met this challenge of Kant's and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be a bit difficult for us to appreciate this now, but the idea that unified consciousness could not be realized by any system of components, still less any system of material components, had a strong intuitive appeal for a long time.

The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, and James, indeed in most of the major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant's work.

A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, in fact not made out of matter at all). It starts like this: When I consider the mind, that is to say, myself insofar as I am only a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.

Descartes asserts that if the mind is not made up of parts, it cannot be made of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this would be enough to prove dualism by itself, had he not already proved it elsewhere. Notice where the unity of consciousness enters: it is because my consciousness is unified that I cannot distinguish any parts in myself.

The notion that consciousness is unified was also central to one of Kant's own famous arguments, his ‘transcendental deduction of the categories’. In this argument, boiled down to its essentials, Kant claims that in order to tie the various objects of experience together into a single unified conscious representation of the world, something that he simply assumed we could do, we must apply certain concepts to the items in question. In particular, we have to apply concepts from each of four fundamental categories of concept: quantitative, qualitative, relational, and what he called ‘modal’ concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence statuses are represented in an experience.

It was relational concepts that most interested Kant, and of relational concepts he thought the concept of cause and effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause-and-effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on ‘the secure path of a science’. The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.

Although the unity of consciousness had been at the centre of pre-20th-century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to it. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being construed either as a myth or as something that we cannot and need not study in a science of the human person. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: Consciousness was the last bastion of occult properties, epiphenomena, immeasurable subjective states - in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of ‘phenomenology’ into a respectable theory.

What is true is consistent with fact or reality: not false or incorrect, but sincerely felt or expressed, conforming exactly to a rule, standard, or pattern. To ‘true’ something is to position it so that it is balanced, level, or square, and we may think of proper alignment as one root of the notion; another appears in the word's etymology, which connects it with ‘trust’ and ‘troth’. Truth, then, is conformity to fact or actuality, the quality of a statement's being, or being accepted as, faithful to an original or to a standard. Finally, in logic a truth-functional compound, such as a conjunction or a negation, is one whose truth-value is always determined by the truth-values of its component propositions.
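The last point, that a truth-functional compound's value is fixed entirely by the values of its components, can be sketched with a short truth table (a minimal illustration; the function names are ours, not the text's):

```python
from itertools import product

# Truth-functions: the truth-value of the compound is determined
# entirely by the truth-values of its components.
def negation(p):
    return not p

def conjunction(p, q):
    return p and q

# Enumerate every assignment of truth-values to p and q, and print
# the resulting values of the compounds p-and-q and not-p.
for p, q in product([True, False], repeat=2):
    print(p, q, "|", conjunction(p, q), negation(p))
```

Given the component values, nothing else about p or q matters to the compound; that is what ‘truth-functional’ means.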

Science, likewise, answers to what is real: the definitive quality or state of being actual or true, whether of a person, an entity, or an event. ‘Reality’ names the aggregate of things possessing actuality, existence, or essence; in other words, that which objectively and in fact is. In psychology, the reality principle is the satisfaction of instinctual needs through awareness of and adjustment to environmental demands, and ‘realization’ is the act of realizing or the condition of being realized.

A reason, in one sense, is a declaration made to explain or justify an action, or the belief or desire upon which one acts: the underlying fact or cause that provides logical support for a premise or occurrence. According to the act/object analysis of experience, every experience with content involves an object of experience to which an act of awareness relates the subject (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences nonetheless appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theories may differ on the nature of objects of experience, which have been treated as properties, as Meinongian objects (which may not exist or have any form of being), and, more commonly, as private mental entities with sensory qualities. (The term ‘sense-data’ is now usually applied to the latter, but has also been used as a general term for objects of sense experiences.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception: for representative realism, objects of perception (of which we are ‘indirectly aware’) are always distinct from objects of experience (of which we are ‘directly aware’); Meinongians, however, can simply treat objects of perception as existing objects of experience.

The act/object analysis faces several problems concerning the status of objects of experience. On the most common current view, the question of whether two subjects are experiencing the same thing (as opposed to having exactly similar experiences) has an answer only on the assumption that the experiences concerned are perceptions with material objects. Nevertheless, on the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-datum theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)

Reassuringly, the phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that an experience appears to present us with an object without accepting that it actually does. The semantic argument, that attributions of experience have a seemingly relational structure, is a challenge dealt with in connection with the adverbial theory: apparent reference to and quantification over objects of experience can be handled by analysing them as reference to experiences themselves, tacitly typed according to content.

Reason is the faculty by which humans seek or attain knowledge or truth. A reason may also be a premise, usually the minor premise, of an argument. To reason is to engage in dialogue or deliberation, to determine or conclude by logical thinking, or to work out a solution to a problem; to reason with someone is to persuade or dissuade with considerations that carry the good sense or justification of reasonableness. Mere reason, however, is not always sufficient to convince us. By intuition, by contrast, we grasp a truth or fact directly, without the use of the rational process, as when one assesses someone's character, or sizes up a situation and draws sound conclusions in the exercise of judgement.

‘Rational’ means governed by or in accordance with reason or sound thinking: a rational solution to a problem is one within the bounds of common sense, arrived at through a reasonable and fair use of reason, especially in forming conclusions, inferences, or judgements. In argument, the considerations one confronts must be thought out and fitted together as parts of a composite exercise of the intellectual faculties by which human understanding attempts to grasp its object; reasoning without such understanding is mere zeal, well-meaning but without understanding.

‘Real’ means being or occurring in fact or actuality, having verifiable existence: real objects, a real illness. It means true and actual, not imaginary, alleged, or ideal (real people, not ghosts), and it points to practical matters and concerns, to experience of the real world; a real experience is no pretence or affectation, and no less than what it states, as many who encounter real trouble can attest. The term thus projects an objectivity in the world independent of whatever subjectivity or conventions of thought or language represent it. In a more technical use, a real image is one formed by light rays that actually converge in space, and a real thing is a thing or whole having actual existence. All of this marks off factual experience from what is brought to us by the efforts of the imagination.

In a similar fashion, certain ‘principles of the imagination’ are what explain our belief in a world of enduring objects. Experience alone cannot produce that belief: everything we directly perceive is ‘momentary and fleeting’. Whatever our experience is like, no reasoning could assure us of the existence of something independent of our impressions which continues to exist when they cease. The series of our constantly changing sense impressions presents us with observable features which Hume calls ‘constancy’ and ‘coherence’, and these naturally operate on the mind in such a way as eventually to produce ‘the opinion of a continued and distinct existence’. The explanation is complicated, but it is meant to appeal only to psychological mechanisms which can be discovered by ‘careful and exact experiments, and the observation of those particular effects, which result from [the mind's] different circumstances and situations’.

Hume holds that we believe not only in bodies, but also in persons, or selves, which continue to exist through time. This belief too can be explained only by the operation of certain ‘principles of the imagination’. We never directly perceive anything we can call ourselves: the most we can be aware of in ourselves are our constantly changing, momentary perceptions, not the mind or self which has them. For Hume, nothing really binds the different perceptions together; we are led into the ‘fiction’ that they form a unity only because of the way in which the thought of such series of perceptions works upon our minds. ‘The mind is a kind of theatre, where several perceptions successively make their appearance: . . . There is properly no simplicity in it at one time, nor identity in different, whatever natural propensity we may have to imagine that simplicity and identity. The comparison of the theatre must not mislead us. They are the successive perceptions only, that constitute the mind’.

Hume is often described as a sceptic in epistemology, largely because of his rejection of the role of reason, as traditionally understood, in the genesis of our fundamental beliefs. That rejection, although allied to the scepticism of antiquity, is only one part of an otherwise positive general theory of human nature which would explain how and why we think and believe and do all the things we do.

Nevertheless, consider the Kantian epistemological distinction between a thing as it is in itself and that thing as appearance, or as it is for us. For Kant, the thing in itself is the thing as it is intrinsically, that is, the character of the thing apart from any relations in which it happens to stand. The thing for us, or as an appearance, on the other hand, is the thing insofar as it stands in relation to our cognitive faculties and to other objects. ‘Now a thing in itself cannot be known through mere relations: and we may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself’. Kant applies this distinction to the subject's cognition of itself. Since the subject can know itself only insofar as it can intuit itself, and it can intuit itself only in temporal relations, and thus only as it is related to its own self, it represents itself ‘as it appears to itself, not as it is’. Thus, the distinction between what the subject is in itself and what it is for itself arises in Kant insofar as the distinction is applied to the subject's own knowledge of itself.

Hegel begins the transition of the epistemological distinction between what the subject is in itself and what it is for itself into an ontological distinction. Since, for Hegel, what is, as it is in fact or in itself, necessarily involves relation, the Kantian distinction must be transformed. Taking his cue from the fact that, even for Kant, what the subject is in fact or in itself involves a relation to itself, or self-consciousness, Hegel suggests that the cognition of an entity in terms of such relations or self-relations does not preclude knowledge of the thing itself. Rather, what an entity is intrinsically, or in itself, is best understood as the potentiality of that thing to enter into specific explicit relations with itself. And just as for consciousness to be explicitly itself is for it to be for itself by being in relation to itself (i.e., to be explicitly self-conscious), the for-itself of any entity is that entity insofar as it is actually related to itself. The distinction between the entity in itself and the entity for itself may thus be taken to apply to every entity, and not only to the subject. For example, the seed of a plant is that plant in itself or implicitly, while the mature plant that involves actual relations among the plant's various organs is the plant ‘for itself’. In Hegel, then, the in itself/for itself distinction becomes universalized, in that it is applied to all entities, and not merely to conscious entities. In addition, the distinction takes on an ontological dimension. While the seed and the mature plant are the same entity, the being in itself of the plant, or the plant as potential adult, is ontologically distinct from the being for itself of the plant, or the actually existing mature organism. At the same time, the distinction retains an epistemological dimension in Hegel, although its import is quite different from that of the Kantian distinction.
To know a thing it is necessary to know both the actual, explicit self-relations that constitute the thing (the being for itself of the thing) and the inherent simple principle of these relations (the being in itself of the thing); real knowledge, for Hegel, thus consists in a knowledge of the thing as it is in and for itself.

Sartre’s distinction between being in itself and being for itself, which is an entirely ontological distinction with minimal epistemological import, is descended from the Hegelian distinction. Sartre distinguishes between what it is for consciousness to be, i.e., being for itself, and the being of the transcendent being intended by consciousness, i.e., being in itself. Being in itself is marked by the total absence of relation, either with itself or with another; what it is for consciousness to be, being for itself, is marked by self-relation. Sartre posits a ‘pre-reflective Cogito’, such that every consciousness of ‘x’ necessarily involves a ‘non-positional’ consciousness of the consciousness of ‘x’. While in Kant every subject is both in itself, i.e., as it is apart from its relations, and for itself insofar as it is related to itself by appearing to itself, and in Hegel every entity can be considered as both in itself and for itself, in Sartre to be self-related, or for itself, is the distinctive ontological mark of consciousness, while to lack relations, or to be in itself, is the distinctive ontological mark of non-conscious entities.

An idea, in one sense, is a concept of reason that is transcendent but nonempirical; in another, it is whatever potentially or actually exists in the mind as a product of mental activity. In the philosophy of Plato, an Idea is an archetype of which a corresponding being in phenomenal reality is an imperfect replica; in Hegel, it is absolute truth, the conception and ultimate product of reason. The word can also mean simply a mental image of something remembered.

Conceivably, imagination is the formation of a mental image of something that is neither perceived as real nor present to the senses. Nevertheless, the image so formed can confront and deal with reality by using the creative powers of the mind. Fantasy is characteristically well removed from reality, and any power of fantasy over reason is a degree of insanity; still, fancy suggests the free play of a mind in command of its own inventions, whereas it is exactly the mark of the neurotic that his fantasy possesses him.

A fact belongs to the totality of things possessing actuality, existence, or essence: something that exists objectively, a real occurrence or event (‘he had to prove the facts of the case’), something believed to be true or real, or something determined by evidence to be so. The usages ‘allegation of fact’ and ‘the true facts’, as in ‘we may never know the facts of the case’, may occasion qualms among critics who insist that facts can only be true, but they are often useful for emphasis. ‘Factual’ relates to the discovery or determination of accurate information, as when evidence establishes the events that actually occurred. Its opposite is found in literature that treats real people or events as if they were fictional, or uses real people or events as essential elements in an otherwise fictional rendition. ‘Factious’ means given to or promoting internal dissension, while ‘factitious’ means produced artificially rather than by a natural process, lacking authenticity or genuineness.

Essentially, a theory is a set of statements or principles devised to explain a group of facts or phenomena, especially one that has been repeatedly tested or is widely accepted and can be used to make predictions about natural phenomena. The term covers a body of explanatory statements, accepted principles, and methods of analysis, as in a set of theorems that form a systematic view of a branch of mathematics or science, and also a belief or principle that guides action or assists comprehension or judgement. A theory may be no more than a conjecture: an ascription based on limited information or knowledge, a speculative assumption taken as a starting point. ‘Theoretical’ accordingly means relating to or based on theory rather than practice (theoretical physics), or given to speculative theorizing. In mathematics, by contrast, a theorem is a proposition that has been or is to be proved from explicit assumptions, and its value is measured by theoretical assessment rather than practical considerations.

Looking back a century, one can see a striking degree of homogeneity among the philosophers of the early twentieth century about the topics central to their concerns. More immediately striking is the apparent obscurity and abstruseness of those concerns, which seem at first glance far removed from the great debates of previous centuries between ‘realists’ and ‘idealists’, say, or ‘rationalists’ and ‘empiricists’.

Thus, no matter what the current debate or discussion, the central issue is often the nature of conceptual and contentual representation, for one who is without concepts is without ideas. The question is what makes what would otherwise be mere utterances and inscriptions into instruments of communication and understanding. The philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world.

Contributions to this study include the theory of ‘speech acts’ and the investigation of communication, especially the relationship between words and ideas, and between words and the world. Content, in this setting, is what an utterance or sentence expresses: the proposition or claim made about the world. By extension, the content of a predicate, that is, of any expression capable of combining with one or more singular terms to make a sentence, is the condition that the entities referred to may satisfy, in which case the resulting sentence will be true. Consequently, we may think of a predicate as a function from things to sentences or even to truth-values, or of other sub-sentential components as contributing content to the sentences that contain them. The nature of content is the central concern of the philosophy of language.
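The idea of a predicate as a function from things to truth-values can be given a minimal sketch (the predicate and its extension below are illustrative inventions, not from the text):

```python
# A one-place predicate modelled as a function from objects to
# truth-values: applying it to the referent of a singular term
# yields the truth-value of the resulting sentence.
PHILOSOPHERS = {"Descartes", "Leibniz", "Hume", "Kant", "James"}

def is_a_philosopher(x: str) -> bool:
    """The predicate '... is a philosopher', with an illustrative extension."""
    return x in PHILOSOPHERS

print(is_a_philosopher("Kant"))     # True: 'Kant is a philosopher' is true
print(is_a_philosopher("Everest"))  # False
```

Combining the predicate with a singular term (here, a name) is modelled as function application, which is just the Fregean picture the paragraph gestures at.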

What a person expresses by a sentence often depends on the environment in which he or she is placed. For example, the disease referred to by a term like ‘arthritis’, or the kind of tree referred to by ‘oak’, may be fixed by facts about which the speaker knows next to nothing. This raises the possibility of imagining two persons in comparatively different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if the situations surrounding them are appropriately different; ‘situation’ may here include the actual objects they perceive, the chemical or physical kinds of objects in the world they inhabit, the history of their words, or the decisions of authorities on what counts as an example of the terms they use. The narrow content is that part of their thought which remains identical, through the identity of the way things appear, no matter these differences of surroundings. Partisans of wide (or ‘broad’) content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being narrow content plus context.

All in all, it is common to characterize people by their rationality, and the most evident display of our rationality is our capacity to think. This is the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no decisive reason that their deliberations should take any more verbal a form than their actions. It is permanently tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium that represents aspects of the world. Nevertheless, the model has been attacked, notably by Ludwig Wittgenstein (1889-1951), whose influential application of these ideas was in the philosophy of mind. Wittgenstein explores the role that reports of introspection, or of sensations, intentions, or beliefs, actually play in our social lives, in order to undermine the Cartesian picture of the goings-on in an inner theatre of which the subject is the lone spectator. Passages that have subsequently become known as the ‘rule-following’ considerations and the ‘private language argument’ are among the fundamental topics of modern philosophy of language and mind, although their precise interpretation is endlessly controversial.

Effectively, the hypothesis especially associated with Jerry Fodor (1935-), known for his ‘resolute realism’ about the nature of mental functioning, is that mental processing occurs in a language of thought: a language different from one's ordinary native language, but underlying and explaining our competence with it. The idea is a development of the notion of an innate universal grammar (Chomsky), on the analogy that, just as linguistically complex sets of instructions explain the surface behaviour of a computer executing its programs, an inner language may explain the surface behaviour of a thinker.

As an account of ordinary language-learning and competence, the hypothesis has not found universal favour, since it explains ordinary representational powers only by invoking an innate language whose own powers are mysteriously a biological given. Related is the view that everyday attributions of intentionality, beliefs, and meaning to other persons proceed by means of a tacit use of a theory that enables one to construct these interpretations as explanations of their doings. This view is commonly held along with ‘functionalism’, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending upon which feature of theories is being stressed. We may think of theories as capable of formalization, as yielding predictions, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on.

The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the nonexistence of a medium in which we can couch the theory, since the child learns the minds of others and the meanings of terms in its native language simultaneously. On the rival ‘simulation’ view, understanding others is not gained by the tacit use of a ‘theory’ enabling us to infer what thoughts or intentions explain their actions, but by putting ourselves ‘in their shoes’, or at their point of view, and thereby understanding what they experienced and thought, and therefore expressed. We achieve understanding of others when we can ourselves deliberate as they did, and hear their words as if they were our own. The suggestion is a modern development of the ‘Verstehen’ tradition associated with Dilthey (1833-1911), Weber (1864-1920) and Collingwood (1889-1943).

We may call any process of drawing a conclusion from a set of premises a process of reasoning. If the conclusion concerns what to do, the process is called practical reasoning; otherwise, pure or theoretical reasoning. Evidently, such processes may be good or bad: if they are good, the premises support or even entail the conclusion drawn; if they are bad, the premises offer no support to the conclusion. Formal logic studies the cases in which conclusions are validly drawn from premises, but little human reasoning is overtly of the forms logicians identify. Partly, this is because we are often concerned to draw conclusions that ‘go beyond’ our premises, in the way that the conclusions of logically valid arguments do not: induction is the process of using evidence to reach such a wider conclusion. Pessimism about the prospects of confirmation theory, however, denies that we can assess the results of abduction in terms of probability.
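The distinction between valid and invalid deductive arguments can be made mechanical with a small truth-table check. The following is a minimal sketch, not from the text, with the two argument forms chosen purely for illustration:

```python
from itertools import product

def valid(premises, conclusion, symbols):
    """An argument is logically valid iff every truth assignment that
    makes all the premises true also makes the conclusion true."""
    for values in product([True, False], repeat=len(symbols)):
        env = dict(zip(symbols, values))
        if all(prem(env) for prem in premises) and not conclusion(env):
            return False
    return True

# Modus ponens -- p, p -> q, therefore q -- is valid:
p = lambda e: e["p"]
p_implies_q = lambda e: (not e["p"]) or e["q"]
q = lambda e: e["q"]
print(valid([p, p_implies_q], q, ["p", "q"]))  # True

# Affirming the consequent -- q, p -> q, therefore p -- is not:
print(valid([q, p_implies_q], p, ["p", "q"]))  # False
```

Note that inductive reasoning, precisely because its conclusions ‘go beyond’ the premises, admits no such mechanical certification.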

Some historians feel that four contributions to the theory of probability overshadowed all the rest. The first was the work of Jacob Bernoulli; the second, De Moivre’s Doctrine of Chances; the third, Bayes’ work on inverse probability; and the fourth, the outstanding work of Laplace. In fact, it was Laplace himself who gave the classic ‘definition’ of probability: if an event can result in ‘n’ equally likely outcomes, then the probability of an event ‘E’ is the ratio of the number of outcomes favourable to ‘E’ to the total number of outcomes.
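Laplace’s classical definition translates directly into a short computation. The die example below is an illustrative assumption, not drawn from the text:

```python
from fractions import Fraction

def classical_probability(favourable, total):
    """Laplace's classical definition: the probability of an event E is
    the ratio of outcomes favourable to E to the total number of
    equally likely outcomes."""
    return Fraction(favourable, total)

# Illustration: the probability of rolling an even number with a fair die.
outcomes = list(range(1, 7))
favourable = [n for n in outcomes if n % 2 == 0]
print(classical_probability(len(favourable), len(outcomes)))  # 1/2
```

The definition presupposes that the outcomes really are equally likely, which is why it is a ‘definition’ only for the classical, symmetric cases.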

A process of reasoning in which a conclusion is drawn from a set of premises is, in logic, usually confined to cases in which the conclusion is supposed to follow from the premises, i.e., in which the inference is logically valid: deducibility is defined syntactically, without any reference to the intended interpretation of the theory. Moreover, as we reason we use an indefinite lore or commonsense set of presuppositions about what is likely or not; one task of an automated reasoning project is to mimic this casual use of knowledge of the ways of the world in computer programs.
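A toy forward-chaining engine gives a sketch of what it would be for a program to mimic this casual, commonsense use of knowledge. The facts and rules here are invented for the example:

```python
def forward_chain(facts, rules):
    """Apply rules of the form (premises, conclusion) until no new
    fact can be derived -- a toy model of routine commonsense inference."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Illustrative commonsense 'lore':
rules = [
    (["it is raining"], "the ground is wet"),
    (["the ground is wet"], "the ground is slippery"),
]
print(sorted(forward_chain(["it is raining"], rules)))
```

Real automated-reasoning systems must of course handle exceptions and defeasible rules, which is precisely where mimicking commonsense becomes hard.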

Some ‘theories’ emerge simply as bodies of supposed truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an idea for organizing a theory: one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory more tractable since, in a sense, all truths are contained in those few. In a theory so organized, the few truths from which all others are deductively inferred are called ‘axioms’. David Hilbert (1862-1943) argued that, just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made objects of mathematical investigation, so axiomatic theories, which are means of representing physical processes and mathematical structures, could become objects of mathematical investigation.

In the philosophy of science, a theory is a generalization or set of generalizations purportedly referring to unobservable entities, e.g., atoms, genes, quarks, unconscious wishes. The ideal gas law, for example, refers only to such observables as pressure, temperature, and volume; the ‘molecular-kinetic theory’ refers to molecules and their properties. Although an older usage suggests a lack of adequate evidence in support (‘merely a theory’), current philosophical usage does not carry that connotation. Following a long tradition (as in Leibniz, 1704), many philosophers had the conviction that all truths, or all truths about a particular domain, followed from a few governing principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is ‘caused’ by them. When the principles were taken as epistemologically prior, that is, as ‘axioms’, they were taken to be either epistemologically privileged, e.g., self-evident, not needing to be demonstrated, or (inclusive ‘or’) such that all truths do indeed follow from them by deductive inferences. Gödel (1931), in the spirit of Hilbert, treating axiomatic theories as themselves mathematical objects, showed that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in that class, would be too small to capture all of the truths.

The notion of truth occurs with remarkable frequency in our reflections on language, thought and action. We are inclined to suppose, for example, that truth is the proper aim of scientific inquiry, that true beliefs help us to achieve our goals, that to understand a sentence is to know which circumstances would make it true, that reliable preservation of truth as one argues from premises to conclusion is the mark of valid reasoning, that moral pronouncements should not be regarded as objectively true, and so on. To assess the plausibility of such theses, and to refine them and to explain why they hold (if they do), we require some view of what truth is - a theory that would account for its properties and its relations to other matters. Thus, there can be little prospect of understanding our most important faculties in the absence of a good theory of truth.

The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by Frege, was developed in a distinctive way by the early Wittgenstein, and is a leading idea of Davidson. The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it.

The conception of meaning as truth-conditions need not and should not be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech acts. The claim of the theorist of truth-conditions should be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.

The meaning of a complex expression depends on the meanings of its constituents. This is simply a statement of what it is for an expression to be semantically complex. It is one initial attraction of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms - proper names, indexicals, and certain pronouns - this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complete sentence, as a function of the semantic values of the sentences on which it operates.
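This compositional picture can be sketched as a toy model. The reference and extension assignments below are illustrative assumptions, not part of any actual semantic theory:

```python
# A toy truth-conditional semantics: the truth value of a complex
# sentence is computed from the semantic values of its constituents.
# The reference and extension assignments are illustrative assumptions.
reference = {"Paris": "paris", "London": "london"}
extension = {"is beautiful": {"paris", "london"}, "is small": set()}

def atomic(name, predicate):
    """An atomic sentence is true iff the referent of its singular
    term falls within the extension of its predicate."""
    return reference[name] in extension[predicate]

def conj(v1, v2):
    """The operator 'and' contributes a function from the truth
    values of its operand sentences to that of the whole."""
    return v1 and v2

print(atomic("Paris", "is beautiful"))             # True
print(conj(atomic("Paris", "is beautiful"),
           atomic("London", "is small")))          # False
```

The point of the sketch is only structural: each constituent contributes its semantic value, and the truth-condition of the whole is determined by how those values combine.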

Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity. Second, the theorist must offer an account of what it is for a person’s language to be truly described by a semantic theory containing a given semantic axiom.

We can take the charge of triviality first. Since the content of the claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth - the theory Horwich calls the minimal theory of truth. The minimal theory states that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition ‘p’, it is true that ‘p’ if and only if ‘p’. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth conditional account of meaning: if the claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by Ramsey, Ayer, the later Wittgenstein, Quine, Strawson, Horwich and - confusingly and inconsistently, if this is correct - Frege himself. But is the minimal theory correct?

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence. But in fact it seems that each instance of the equivalence principle can itself be explained - for example, the truth of such an instance as:

‘London is beautiful’ is true if and only if London is beautiful.

Such an instance seems explicable from the fact that ‘London’ refers to London together with the fact that ‘is beautiful’ is true of just the beautiful things. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’.

The defender of the minimalist theory is likely to say that if a sentence ‘S’ of a foreign language is best translated by our sentence ‘p’, then the foreign sentence ‘S’ is true if and only if ‘p’. Now the best translation of a sentence must preserve the concepts expressed in the sentence. Constraints involving a general notion of truth are pervasive in any plausible philosophical theory of concepts. It is, for example, a condition of adequacy on an individuating account of any concept that there exist what is called a ‘Determination Theory’ for the account - that is, a specification of how the account contributes to fixing the semantic value of that concept. The notion of a concept’s semantic value is the notion of something that makes a certain contribution to the truth conditions of thoughts in which the concept occurs. But this is to presuppose, rather than to elucidate, a general notion of truth.

It is also plausible that there are general constraints on the form of Determination Theories, constraints that involve truth and that are not derivable from the minimalist’s conception. Suppose that concepts are individuated by their possession conditions. One such plausible general constraint is then the requirement that when a thinker forms beliefs involving a concept in accordance with its possession condition, a semantic value is assigned to the concept in such a way that the beliefs are true. Some general principles involving truth can indeed, as Horwich has emphasized, be derived from the equivalence schema using minimal logical apparatus. Consider, for instance, the principle that ‘Paris is beautiful and London is beautiful’ is true if and only if ‘Paris is beautiful’ is true and ‘London is beautiful’ is true. This follows logically from three instances of the equivalence principle: ‘Paris is beautiful and London is beautiful’ is true if and only if Paris is beautiful and London is beautiful; ‘Paris is beautiful’ is true if and only if Paris is beautiful; and ‘London is beautiful’ is true if and only if London is beautiful. But no logical manipulations of the equivalence schema will allow the derivation of that general constraint governing possession conditions, truth and the assignment of semantic values. That constraint can, of course, be regarded as a further elaboration of the idea that truth is one of the aims of judgement.
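Horwich’s point, that the conjunction principle follows logically from the three equivalence instances, can be checked mechanically by enumerating truth-value assignments. A minimal sketch, modelling the instances disquotationally:

```python
from itertools import product

def principle_holds(paris_beautiful, london_beautiful):
    """Model the three disquotational instances of the equivalence
    schema and check the derived principle for the conjunction."""
    true_paris = paris_beautiful        # 'Paris is beautiful' is true iff Paris is beautiful
    true_london = london_beautiful      # 'London is beautiful' is true iff London is beautiful
    true_conj = paris_beautiful and london_beautiful  # instance for the conjunction
    # Derived principle: the conjunction is true iff both conjuncts are true.
    return true_conj == (true_paris and true_london)

# The principle holds under every assignment of truth values:
assert all(principle_holds(a, b) for a, b in product([True, False], repeat=2))
print("conjunction principle verified")
```

Such enumeration only confirms logical consequences of the schema; the general constraint on possession conditions and semantic values, by contrast, is not derivable this way.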

What is it for a person’s language to be correctly describable by a semantic theory containing a particular axiom? This question may be addressed at two depths of generality. At the shallower level, the question may take for granted the person’s possession of the concept of conjunction, and be concerned with what has to be true for the axiom to describe his language correctly. At a deeper level, an answer should not duck the issue of what it is to possess the concept. The answers to both questions are of great interest. When a person means conjunction by ‘and’, he may well be unable to formulate the axiom. Even if he can formulate it, his ability to formulate it is not the causal basis of his capacity to hear sentences containing the word ‘and’ as meaning something involving conjunction, nor of his capacity to mean something involving conjunction by sentences he utters containing the word ‘and’. Is it then right to regard a truth theory as part of an unconscious psychological computation, and to regard understanding a sentence as involving a particular way of deriving a theorem from a truth theory at some level of unconscious processing? One problem with this is that it is quite implausible that everyone who speaks the same language has to use the same algorithms for computing the meaning of a sentence. In the past thirteen years, a conception has evolved according to which an axiom is true of a person’s language only if there is a common component in the explanation of his understanding of each sentence containing the word ‘and’, a common component that explains why each such sentence is understood as meaning something involving conjunction (Davies, 1987). This conception can also be elaborated in computational terms: for an axiom to be true of a person’s language is for the unconscious mechanisms that produce understanding to draw on the information that a sentence of the form ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true (Peacocke, 1986). Many different algorithms may equally draw on this information.
The psychological reality of a semantic theory thus involves, in Marr’s (1982) classification, something intermediate between his level one, the function computed, and his level two, the algorithm by which it is computed. This conception of the psychological reality of a semantic theory can also be applied to syntactic and phonological theories. Theories in semantics, syntax and phonology are not themselves required to specify the particular algorithms that the language user employs. The identification of the particular computational methods employed is a task for psychology. But semantic, syntactic and phonological theories are answerable to psychological data, and are potentially refutable by them - for these linguistic theories do make commitments about the information drawn upon by mechanisms in the language user.

This answer to the question of what it is for an axiom to be true of a person’s language clearly takes for granted the person’s possession of the concept expressed by the word treated by the axiom. In the example, the information drawn upon is that sentences of the form ‘A and B’ are true if and only if ‘A’ is true and ‘B’ is true. This informational content employs, as it must if it is to be adequate, the concept of conjunction used in stating the meaning of sentences containing ‘and’. So the computational answer we have returned needs further elaboration if we are to address the deeper question, which does not want to take for granted possession of the concepts expressed in the language. It is at this point that the theory of linguistic understanding has to draw upon a theory of concepts. This is only part of what is involved in the required dovetailing: given what we have already said about the uniform explanation of the understanding of the various occurrences of a given word, we should also add that there is a uniform (unconscious, computational) explanation of the language user’s willingness to make the corresponding transitions involving the sentence ‘A and B’.

Our thinking, and our perception of the world about us, is limited by the nature of the language that our culture employs; language does not have, as had previously been widely assumed, a merely instrumental and insignificant function in our lives. Human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of the particular language that has become the medium of expression for their society. It is quite an illusion to imagine that language is merely an incidental means of solving specific problems of communication or reflection. The consideration is that the ‘real world’ is, to a large extent, unconsciously built up on the language habits of the group; we see and hear and otherwise experience very largely as we do because the language habits of our community predispose certain choices of interpretation.

Such an account, however, has been notoriously elusive. The ancient idea that truth is some sort of ‘correspondence with reality’ has still never been articulated satisfactorily: the nature of the alleged ‘correspondence’ and the alleged ‘reality’ remain objectionably obscure. Yet the familiar alternative suggestions - that true beliefs are those that are ‘mutually coherent’, or ‘pragmatically useful’, or ‘verifiable in suitable conditions’ - have each been confronted with persuasive counterexamples. A twentieth-century departure from these traditional analyses is the view that truth is not a property at all - that the syntactic form of the predicate ‘is true’ distorts its real semantic character, which is not to describe propositions but to endorse them. However, this radical approach is also faced with difficulties and suggests, counterintuitively, that truth cannot have the vital theoretical role in semantics, epistemology and elsewhere that we are naturally inclined to give it. Thus, truth threatens to remain one of the most enigmatic of notions: an explicit account of it can seem essential yet beyond our reach. However, recent work provides some grounds for optimism.

A theory in the philosophy of science, again, is a generalization or set of generalizations purportedly referring to unobservable entities, e.g., atoms, quarks, unconscious wishes, and so on. The ideal gas law, for example, refers only to such observables as pressure, temperature, and volume; the molecular-kinetic theory refers to molecules and their properties. Although an older usage suggests a lack of adequate evidence in support (‘merely a theory’), the current usage does not carry that connotation. Einstein’s special theory of relativity, for example, is considered extremely well founded.

There are two main views on the nature of theories. According to the ‘received view’, theories are partially interpreted axiomatic systems; according to the semantic view, a theory is a collection of models (Suppe, 1974). Some theories emerge simply as bodies of supposed truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an ideal for organizing a theory (Hilbert, 1970): one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory more tractable since, in a sense, all truths are contained in those few. In a theory so organized, the few truths from which all others are deductively inferred are called ‘axioms’. David Hilbert (1862-1943) argued that, just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made mathematical objects, so axiomatic theories, like algebraic and differential equations, which are means of representing physical processes and mathematical structures, could be made objects of mathematical investigation.

Many philosophers had the conviction that all truths, or all truths about a particular domain, followed from a few principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is ‘caused’ by them. When the principles were taken as epistemologically prior, that is, as ‘axioms’, they were taken to be either epistemologically privileged, i.e., self-evident, not needing to be demonstrated, or (inclusive ‘or’) such that all truths do indeed follow from them by means of deductive inferences. Gödel (1931), in the spirit of Hilbert, treating axiomatic theories as themselves mathematical objects, showed that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in that class, would be too small to capture all of the truths.
