January 20, 2010


Expanding the representational domain is problematic in the very way the imagery controversy, along with other debates over mind and cognition, has been set up: as a choice between whether humans employ one or two kinds of representational systems. We know that humans make use of an enormous number of different types of [external] representational systems. These systems differ in form and structure along a variety of syntactic, semantic, and other dimensions. There appears to be no sense in which these various and diverse systems can be divided into two well-specified kinds. Nor does it seem possible to reduce, decode, or capture the cognitive content of all of these forms of representation in sentential symbols. Any adequate theory of mind will have to deal with the fact that many more than two types of representation are employed in our cognitive activities, rather than assume that yet-to-be-discovered modes of internal representation must fit neatly into one or two pre-ordained categories.


Appeals to representations play a prominent role in contemporary work in the study of mind. With some justification, most attention has been focused on language or language-like symbol systems. Even when non-linguistic systems are countenanced, they tend to be given second-class status. This practice, however, has had a rather constricting effect on our understanding of human cognitive activities. It has, for example, resulted in a lack of serious examination of the function of the arts in organizing and reorganizing our world, and the cognitive uses of metaphor, expression, exemplification, and the like are typically ignored. Moreover, recognizing that a much broader range of representational systems plays a part in cognition calls a number of philosophical presuppositions and doctrines in the study of mind into question: (1) claims about the uniqueness of representation as the mark of the mental; (2) the identification of contentful or informational states with the sentential contents of propositional attitudes; (3) the idea that all thought can be expressed in language; (4) the assumption that compositional accounts of the structure of language provide the only model we have for the creative or productive nature of representational systems in general; and (5) the tendency to construe all cognitive transitions among representations as cases of inference (based on syntactic or logical form).

Philosophical issues about perception tend to be issues specifically about sense-perception. It may be supposed that perceptions are related to material objects, but this supposed relationship cannot be understood unless it involves resemblance or likeness between perception and object. But there is no likeness except between perceptions, and, indeed, nothing could be more unlike than something perceived and something unperceived. Therefore, since there is no likeness, the supposed relationship cannot be understood at all, and hence does not exist. Thus the very concept of material objects is to be rejected: we cannot use any concept which cannot be related to our perceptions, the sole basis of our information and knowledge, and thus we cannot say that the power which causes our perceptions resides in material objects.

But some power causes our perceptions, since we have very little choice about what we perceive. If that power does not reside in matter or in the perceptions themselves, it must reside in the mind. Since we are minds and are active, we do at least know that minds exist and have powers. The power which creates our perceptions is the power which is supreme in our world and which, evidently, operates by steady and coherent rules (since our perceptions form a steady and coherent whole). The Irish idealist George Berkeley (1685-1753) identifies this power with the God of Christendom.

The word has since taken on a wider use: we may speak, for example, of people's perception of a certain set of events, even though those people have not been present to attest to them. In one sense, however, there is nothing new about this: in seventeenth- and eighteenth-century philosophical usage, 'perception' had a wider coverage than sense-perception alone. It is, nonetheless, sense-perception that has typically raised the largest and most obvious philosophical problems.

All in all, both rationalist and empiricist views about the sources of knowledge are threatened by arguments that pose particular problems for empiricism, arguments suggested by the nature and limitations of perception, which the best current accounts describe in terms of distinct sensory modalities. Philosophers have traditionally held that every proposition has a modal status as well as a truth value. Every proposition is either necessary or contingent, as well as either true or false. Necessary truths are those which must be true, or whose opposite is impossible; contingent truths are those which happen to be true but whose opposite is possible. Consider the following:

1. It is not the case that it is raining and not raining.

2. 2 + 2 = 4

3. All bachelors are unmarried.

4. It seldom rains in the Sahara.

5. There are more than four states in the USA.

6. Some bachelors drive Maseratis.

*1-3 are necessary, and 4-6 contingent*
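The contrast can also be put in standard modal notation; the following is a minimal sketch, with the box and diamond as the usual necessity and possibility operators (these symbols are a gloss added here, not ones used elsewhere in the text):

```latex
% Example 1 is necessary: its negation is impossible.
\Box\,\neg(R \wedge \neg R)
% Example 4 is contingent: it is true, but its negation is possible.
S \wedge \Diamond\,\neg S
% The two operators are interdefinable:
\Box p \;\equiv\; \neg\Diamond\neg p, \qquad \Diamond p \;\equiv\; \neg\Box\neg p
```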

Plantinga (1974) characterizes the sense of necessity illustrated in 1-3 as 'broadly logical' necessity: it includes not only the truths of logic, but also those of mathematics, set theory, and other quasi-logical truths. Yet it is not so broad as to include matters of causal or natural necessity, such as:

7. Nothing travels faster than the speed of light.

One would like an account of the basis of this distinction and a criterion by which to apply it. Some suppose that necessary truths are those we know deductively; but we lack such knowledge of some necessary truths, e.g., undiscovered mathematical ones. Nor would it help to say that necessary truths are ones it is possible, in the broadly logical sense, to know deductively, for this is circular. Moreover, the American logician and philosopher Saul Aaron Kripke (1940-), who made his early reputation as a logical prodigy, especially through work on the completeness of systems of modal logic, argued (1972), as did Plantinga (1974), that some contingent truths are knowable a priori. Similar problems face the suggestion that necessary truths are the ones we know with certainty: we lack a criterion for certainty, there are necessary truths we do not know, and (barring dubious arguments for scepticism) it is reasonable to suppose that we know some contingent truths with certainty.

Gottfried Wilhelm Leibniz (1646-1716) defined a necessary truth as one whose opposite implies a contradiction. Every such proposition, he held, is either an explicit identity (i.e., of the form 'A is A', 'AB is B', and so forth) or is reducible to an identity by successively substituting equivalent terms. (Thus, 3 above might be so reduced by substituting 'unmarried man' for 'bachelor'.) This account has several advantages over the ideas of the previous paragraph. First, it explicates the notions of necessity and possibility and seems to provide a criterion we can apply. Second, because explicit identities are self-evident, the theory implies that all necessary truths are knowable deductively, but it does not entail that we actually know all of them, nor does it define 'knowable' in a circular way. Third, it implies that necessary truths are knowable with certainty, but it does not preclude our having certain knowledge of contingent truths by means other than a reduction.
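As a concrete illustration of the kind of reduction Leibniz has in mind, the single substitution step mentioned above can be written out as follows (a reconstruction for illustration, not Leibniz's own notation):

```latex
\text{All bachelors are unmarried}
\;\longrightarrow\;
\text{All unmarried men are unmarried}
\qquad \text{(substituting `unmarried man' for `bachelor')}
% The result is an explicit identity: the predicate concept ('unmarried')
% is literally contained in the subject concept ('unmarried man').
```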

Nevertheless, this view is also problematic. Leibniz's examples of reductions are too sparse to prove a claim about all necessary truths. Some of his reductions, moreover, are deficient: Frege pointed out, for example, that his proof of '2 + 2 = 4' presupposes the principle of association and so does not depend only on the principle of identity. More generally, it has been shown that arithmetic cannot be reduced to logic but requires the resources of set theory as well. Finally, there are other necessary propositions, e.g., 'Nothing can be red and green all over', which do not seem to be reducible to identities and which Leibniz does not show how to reduce.

Leibniz and others have thought of truth as a property of propositions, where the latter are conceived as things which may be expressed by, but are distinct from, linguistic items like statements. On another approach, truth is a property of linguistic entities, and the basis of necessary truth is convention. Thus the English philosopher and left-wing intellectual Alfred Jules Ayer (1910-89), for example, argued that the only necessary truths are analytic statements and that the latter rest entirely on our commitment to use words in certain ways. But while there have been many attempts to define analyticity, the most influential American philosopher of the latter half of the 20th century, Willard Van Orman Quine (1908-2000), criticized the most powerful of them and rendered it uncertain whether a criterion for this notion can be given.

When one predicates necessary truth of a proposition one speaks of modality de dicto, for one ascribes the modal property, necessary truth, to a dictum, namely, whatever proposition is taken as necessary. A venerable tradition, however, distinguishes this from necessity de re, wherein one predicates necessary or essential possession of some property of an object. For example, the statement '4 is necessarily greater than 2' might be used to predicate of the object, 4, the property of being necessarily greater than 2. That objects have some of their properties necessarily, or essentially, and others only contingently, or accidentally, is a central part of the doctrine called 'essentialism'. Thus, an essentialist might say that Socrates had the property of being bald accidentally, but that of being self-identical, or perhaps of being human, essentially. Although essentialism has been vigorously attacked in recent years, most notably by Quine, it also has able contemporary proponents, such as Plantinga.
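The de dicto/de re contrast is often made vivid by the relative placement of the necessity operator and the quantifier; the following schematic rendering is a gloss in standard notation added here, not something drawn from the text:

```latex
% De dicto: necessity ascribed to a whole proposition (a dictum).
\Box\,(4 > 2)
% De re: necessary possession of a property ascribed to an object.
\exists x\,\bigl(x = 4 \;\wedge\; \Box\,(x > 2)\bigr)
% The general contrast:
\Box\,\exists x\,Fx \quad\text{(de dicto)} \qquad\text{vs.}\qquad \exists x\,\Box Fx \quad\text{(de re)}
```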

Leibniz declares that there are only two kinds of truths: truths of reason and truths of fact. The former are all either explicit identities, i.e., of the form 'A is A', 'AB is B', and so forth, or they are reducible to this form by successively substituting equivalent terms. Leibniz dubs them 'truths of reason' because the explicit identities are self-evident deductive truths, whereas the rest can be converted to such by purely rational operations, and because their denial involves a demonstrable contradiction. Leibniz also says that truths of reason 'rest on the principle of contradiction, or identity', and that they are necessary propositions, true in all possible worlds. Some examples are 'All equilateral rectangles are rectangles' and 'All bachelors are unmarried': the first is already of the form 'AB is B', and the latter can be reduced to this form by substituting 'unmarried man' for 'bachelor'. Other examples, or so Leibniz believes, are 'God exists' and the truths of logic, arithmetic, and geometry.

Truths of fact, on the other hand, cannot be reduced to an identity, and our only way of knowing them is a posteriori, or by reference to the facts of the empirical world. Likewise, since their denial does not involve a contradiction, their truth is merely contingent: they could have been otherwise, and they hold of the actual world, but not of every possible one. Some examples are 'Caesar crossed the Rubicon' and 'Leibniz was born in Leipzig', as well as propositions expressing correct scientific generalizations. In Leibniz's view, truths of fact rest on the principle of sufficient reason, which states that nothing can be so unless there is a reason why it is so. This reason is that the actual world (by which he means the total collection of things past, present, and future) is better than any other possible world and was therefore created by God.

In defending the principle of sufficient reason, Leibniz runs into serious problems. He believes that in every true proposition, the concept of the predicate is contained in that of the subject. (This holds even for propositions like 'Caesar crossed the Rubicon': Leibniz thinks anyone who did not cross the Rubicon would not have been Caesar.) And this containment relationship - which is eternal and unalterable even by God - guarantees that every truth has a sufficient reason. If truth consists in concept containment, however, then it seems that all truths are analytic and hence necessary; and if they are all necessary, surely they are all truths of reason. Leibniz responds that not every truth can be reduced to an identity in a finite number of steps: in some instances revealing the connection between subject and predicate concepts would require an infinite analysis. But while this may entail that we cannot prove such propositions a priori, it does not appear to show that they could have been false; intuitively, it seems a better ground for supposing that they are necessary truths of a special sort. A related question arises from the idea that truths of fact depend on God's decision to create the best world: if it is part of the concept of this world that it is best, how could its existence be other than necessary? Leibniz answers that its existence is only hypothetically necessary, i.e., it follows from God's decision to create this world, but God had the power to create otherwise. Yet God is necessarily good, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solutions.

A major object of cognitive science is to understand the nature of the abstract representations and computational processes responsible for our ability to reason, speak, perceive, and interact with the world. In addition, a commitment to a materialist resolution of the mind-body problem requires that we search for the manner in which these representations and processes are neurally instantiated in the brain. Given this dual aim, one might proceed in one of two ways: (1) from the bottom up, commencing with the study of how low-level information and computations are encoded in the neuroanatomy of the brain, in the hope of working upwards toward an understanding of the properties of high-level cognitive processes; or (2) from the top down, using behavioural data from a variety of sources to provide an abstract characterization of cognitive processes and then utilizing these results to guide our search for the neural mechanisms of cognition. At present, our understanding of the neural basis of higher-level cognition is virtually negligible.

It is sometimes claimed that top-down research into cognition is incapable of producing anything but castles in the sky, with little relevance to human thought. However, a brief look at the history of science shows that studying a phenomenon in the absence of an understanding of its underlying mechanisms is by no means novel and has, moreover, sometimes had significant success. One prominent example is Gregor Mendel's study of heredity. By observing the external patterns in which properties of pea plants such as colour and height were transmitted from one generation to the next, Mendel was able to deduce the existence of genes and the fundamental laws by which they combine. Of course, Mendel had no conception of the biological character of genes, nor any understanding of the reasons why his combinational laws held. Yet his results have been substantially vindicated and formed the impetus for research into the biological basis of genetic material, culminating in Watson and Crick's discovery of the structure of DNA.

How might we mimic Mendel's methodology (and success) in the study of the mind? That is, how do we go about building an abstract theory of a cognitive process which can form the basis for subsequent neurological investigations? One route starts from the assumption that there are specialized representational structures which underlie mental processing within each cognitive domain. Just as Mendel's experience with pea plants led him to propose that heredity is best explained by positing an abstract representation, the gene, along with laws governing its behaviour, so too can we use data from human behaviour to lead us to the discovery of the abstract structures and laws governing cognition.

Within the cognitive sciences, this type of research has its roots in linguistic theory, specifically in the area of syntax, the study of sentence structure. Any study of humans' capacity for natural language syntax must face the age-old challenge of deriving infinite capacity (i.e., we can understand and produce arbitrarily long sentences, including those we have never before heard) from the finite resources provided by our physical endowment. During the first half of the twentieth century, work in the foundations of mathematics and computer science led to the development of a variety of mathematical and logical systems that fortuitously provided a formal means for answering this challenge. This led to innovative work by structuralist linguists such as Zellig Harris, which for the first time provided mathematically precise descriptions of a corpus of language use. Such precision represented a great advance, since it became possible for the first time to see the exact implications of analyses. Yet the structuralists largely adhered to a behaviourist stance on human psychology and hence took their formal representations to be mere descriptive devices of no psychological import. In the mid-1950s, the American linguist, philosopher, and political activist Avram Noam Chomsky (1928-) broke with this behaviourist view and suggested that the results of linguistic analysis should indeed be understood as the mental representations that comprise an individual's linguistic competence - that is, the knowledge which underlies the ability to speak or understand a language. In this framework of generative grammar, Chomsky maintained the focus on providing a mathematically precise characterization of grammar, keeping some of the formal apparatus advocated by Harris while adding a new interpretation: the formalism was now taken as an abstract description of the mental computations underlying language.

Consider the following English sentence as an illustration of the kind of structure that has been found to underlie human language:

(1) The student has finished her homework.

In order to make a question from this statement, we must change the order of the words, moving the auxiliary verb 'has' to the front of the sentence:

(2) Has the student finished her homework?

We can now ask: what was the nature of the computation which effected this change in ordering? The simplest answer might go something like this:

(3) To make an English question, move the auxiliary verb to the front of the sentence.

This rule makes very few commitments about the structures underlying English sentences. It requires only that our linguistic computations recognize the notion 'front of the sentence' and have the ability to identify elements in the category of auxiliary verbs ('have', 'be') or modal elements like 'should', 'could', and so forth. This simple formulation is insufficient, however. In sentences involving two auxiliary verbs, like (4), it does not tell us which auxiliary, 'has' or 'was', to move.



(4) The student has finished her homework which was assigned today.

To address this problem, we need only complicate the rule slightly:

(5) Move the first auxiliary verb to the front of the sentence.

This rule makes only one additional ontological commitment, namely the notion of 'first'. This notion has a straightforward translation into sensory terms - that is, temporally earliest in the speech stream - and so it introduces no abstract structure. Even this slightly more complicated rule remains inadequate, however, as demonstrated by cases like the following.

(6) The student who is eating has finished her homework.

If we follow (5) and front the (temporally) first auxiliary verb, the element 'is', the result is a severely ill-formed string (where the asterisk indicates ungrammaticality):

(7) *Is the student who eating has finished her homework?

To produce the well-formed version of this question, we must instead move the temporally second auxiliary, ‘has’.

(8) Has the student who is eating finished her homework?

It turns out that we must complicate our rule still further to achieve the desired results:

(9) Move the first auxiliary verb which follows the subject to the front of the sentence.

Since the string 'the student who is eating' constitutes the subject, we move the next auxiliary verb in the sentence - that is, 'has' - so producing (8). (In the previous cases the temporally first auxiliary was also the one which follows the subject.) Observe that this rule differs from the previous one in that it makes reference to the abstract notion of subject, one which does not have any direct sensory characterization. Thus, we must assume that grammatical representations include a certain amount of structural analysis, so as to allow detection of the subject.
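To make the contrast between rules (5) and (9) concrete, here is a small toy sketch in Python (an illustration constructed for this discussion, not anything from the linguistics literature): sentences are given as word lists, the set of auxiliaries and the extent of the subject are supplied by hand, and the two rules are applied to the example above.

```python
# Toy illustration of the two question-formation rules discussed above.
# The linguistic analysis (which words are auxiliaries, where the subject ends)
# is supplied by hand; the point is only to contrast the two rules.

AUXILIARIES = {"has", "is", "was"}

def front_first_auxiliary(words):
    """Rule (5): move the temporally first auxiliary to the front."""
    for i, w in enumerate(words):
        if w in AUXILIARIES:
            return [w] + words[:i] + words[i + 1:]
    return words

def front_auxiliary_after_subject(words, subject_length):
    """Rule (9): move the first auxiliary that follows the subject to the front.
    `subject_length` stands in for the structural analysis that identifies the subject."""
    for i in range(subject_length, len(words)):
        if words[i] in AUXILIARIES:
            return [words[i]] + words[:i] + words[i + 1:]
    return words

sentence = "the student who is eating has finished her homework".split()

# Rule (5) fronts 'is' and produces the ill-formed string (7):
print(" ".join(front_first_auxiliary(sentence)))
# -> is the student who eating has finished her homework

# Rule (9), told that the subject is the five-word string 'the student who is eating',
# fronts 'has' and produces the well-formed question (8):
print(" ".join(front_auxiliary_after_subject(sentence, subject_length=5)))
# -> has the student who is eating finished her homework
```

The point of the sketch is only that rule (9) cannot even be stated without access to a structural notion, the subject, which the word string itself does not wear on its sleeve.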

We take grammatical rules and representations such as these to constitute a speaker's knowledge of his or her language. Consequently, we are obliged to face the problem of how such rules and representations arise in the mind of a child during the process of language acquisition. From this perspective, it is interesting to note that in Crain and Nakayama's (1987) experiment, which elicited sentences of the relevant type, none of the 30 three-year-old children learning English whom they tested ever gave a response like (7), as would be expected if they had fixed on the simpler, but incorrect, rule for question formation in (5). This is quite puzzling, since sentences which would distinguish the simpler rule from the correct one are ordinarily (i.e., outside the experimental context) quite rare, so much so that they may never occur in a child's linguistic input. Why, then, do children uniformly ignore the simpler possibilities during the process of inducing the grammar of English, and instead proceed to hypothesize the more complex rule? Chomsky suggests that the resolution of this puzzle (often referred to as the 'poverty of the stimulus') lies in the recognition of a certain amount of innate grammatical knowledge - in particular, a property of structure dependence - that predisposes children to learn grammars of a certain sort.

This structure dependence property also explains why many apparently 'simple' grammatical rules are absent from the languages of the world. For example, there is no language which forms its questions using rule (5), or by reversing the order of the words in the sentence, or by switching the second and fourth words. It is hard to find an independent reason why such things should not be possible: such rules are all simple to state and would be easy to compute, and a language which used them would be no worse off in terms of communicative possibilities. They simply do not seem to be part of any known human language. Facts such as these make it quite difficult to provide functional explanations for the precise character of grammatical structure.

Linguistics is the scientific study of language. It encompasses the description of languages, the study of their origin, and the analysis of how children acquire language and how people learn languages other than their own. Linguistics is also concerned with relationships between languages and with the ways languages change over time. Linguists may study language as a thought process and seek a theory that accounts for the universal human capacity to produce and understand language. Some linguists examine language within a cultural context. By observing talk, they try to determine what a person needs to know in order to speak appropriately in different settings, such as the workplace, among friends, or among family. Other linguists focus on what happens when speakers from different language and cultural backgrounds interact. Linguists may also concentrate on how to help people learn another language, using what they know about the learner's first language and about the language being acquired.

Although there are many ways of studying language, most approaches belong to one of the two main branches of linguistics: descriptive linguistics and comparative linguistics.

Descriptive linguistics is the study and analysis of spoken language. The techniques of descriptive linguistics were devised by German American anthropologist Franz Boas and American linguist and anthropologist Edward Sapir in the early 1900s to record and analyze Native American languages. Descriptive linguistics begins with what a linguist hears native speakers say. By listening to native speakers, the linguist gathers a body of data and analyzes it in order to identify distinctive sounds, called phonemes. Individual phonemes, such as /p/ and /b/, are established on the grounds that substitution of one for the other changes the meaning of a word. After identifying the entire inventory of sounds in a language, the linguist looks at how these sounds combine to create morphemes, or units of sound that carry meaning, such as the words push and bush. Morphemes may be individual words such as push; root words, such as berry in blueberry; or prefixes (pre- in preview) and suffixes (-ness in openness).

The linguist’s next step is to see how morphemes combine into sentences, obeying both the dictionary meaning of the morpheme and the grammatical rules of the sentence. In the sentence ‘She pushed the bush,’ the morpheme she, a pronoun, is the subject; push, a transitive verb, is the verb; the, a definite article, is the determiner; and bush, a noun, is the object. Knowing the function of the morphemes in the sentence enables the linguist to describe the grammar of the language. The scientific procedures of phonemics (finding phonemes), morphology (discovering morphemes), and syntax (describing the order of morphemes and their function) provide descriptive linguists with a way to write down grammars of languages never before written down or analyzed. In this way they can begin to study and understand these languages.
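To see what such a descriptive record might look like in explicit form, here is a small sketch in Python; the labels simply restate the analysis of 'She pushed the bush' given above (with the past-tense suffix separated out as a further morpheme), and nothing here depends on any real linguistic toolkit.

```python
# The sentence 'She pushed the bush' broken into morphemes, with the
# grammatical category and function of each, restating the analysis in the text.
analysis = [
    {"morpheme": "she",  "category": "pronoun",          "function": "subject"},
    {"morpheme": "push", "category": "transitive verb",  "function": "verb"},
    {"morpheme": "-ed",  "category": "suffix",           "function": "past tense marker"},
    {"morpheme": "the",  "category": "definite article", "function": "determiner"},
    {"morpheme": "bush", "category": "noun",             "function": "object"},
]

for item in analysis:
    print(f"{item['morpheme']:>5}  {item['category']:<16}  {item['function']}")

# A minimal pair of the kind used to establish phonemes: 'push' and 'bush'
# differ only in their initial sound yet mean different things, so /p/ and /b/
# count as distinct phonemes of English.
minimal_pair = ("push", "bush")
```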

Comparative linguistics is the study and analysis, by means of written records, of the origins and relatedness of different languages. In 1786 Sir William Jones, a British scholar, asserted that Sanskrit, Greek, and Latin were related to one another and had descended from a common source. He based this assertion on observations of similarities in sounds and meanings among the three languages. For example, the Sanskrit word bhratar for ‘brother’ resembles the Latin word frater, the Greek word phrater, (and the English word brother).

Other scholars went on to compare Icelandic with Scandinavian languages, and Germanic languages with Sanskrit, Greek, and Latin. The correspondences among languages, known as genetic relationships, came to be represented on what comparative linguists refer to as family trees. Family trees established by comparative linguists include the Indo-European, relating Sanskrit, Greek, Latin, German, English, and other Asian and European languages; the Algonquian, relating Fox, Cree, Menomini, Ojibwa, and other Native North American languages; and the Bantu, relating Swahili, Xhosa, Zulu, Kikuyu, and other African languages.

Comparative linguists also look for similarities in the way words are formed in different languages. Latin and English, for example, change the form of a word to express different meanings, as when the English verb go changes to went and gone to express a past action. Chinese, on the other hand, has no such inflected forms; the verb remains the same while other words indicate the time (as in ‘go store tomorrow’). In Swahili, prefixes, suffixes, and infixes (additions in the body of the word) combine with a root word to change its meaning. For example, a single word might express when something was done, by whom, to whom, and in what manner.

Some comparative linguists reconstruct hypothetical ancestral languages known as proto-languages, which they use to demonstrate relatedness among contemporary languages. A proto-language is not intended to depict a real language, however, and does not represent the speech of ancestors of people speaking modern languages. Unfortunately, some groups have mistakenly used such reconstructions in efforts to demonstrate the ancestral homeland of a people.

Comparative linguists have suggested that certain basic words in a language do not change over time, because people are reluctant to introduce new words for such constants as arm, eye, or mother. These words are termed culture free. By comparing lists of culture-free words in languages within a family, linguists can derive the percentage of related words and use a formula to figure out when the languages separated from one another.
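The text does not name the formula, but what is presumably meant is the classic lexicostatistical (glottochronological) relation associated with Morris Swadesh; as a hedged sketch:

```latex
% t: estimated time (in millennia) since two related languages separated
% c: proportion of culture-free (core-vocabulary) words the two languages still share
% r: assumed retention rate of such words per millennium (commonly taken as roughly 0.80-0.86)
t \;=\; \frac{\ln c}{2\,\ln r}
```

For example, with c = 0.70 and r = 0.86, t comes out at roughly 1.2 millennia; these figures, like the retention rate itself, are illustrative assumptions rather than anything asserted in the text.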

By the 1960's comparativists were no longer satisfied with focusing on origins, migrations, and the family tree method. They challenged as unrealistic the notion that an earlier language could remain sufficiently isolated for other languages to be derived exclusively from it over a period of time. Today comparativists seek to understand the more complicated reality of language history, taking language contact into account. They are concerned with universal characteristics of language and with comparisons of grammars and structures.

The field of linguistics both borrows from and lends its own theories and methods to other disciplines. The many subfields of linguistics have expanded our understanding of languages. Linguistic theories and methods are also used in other fields of study. These overlapping interests have led to the creation of several cross-disciplinary fields.

Sociolinguistics is the study of patterns and variations in language within a society or community. It focuses on the way people use language to express social class, group status, gender, or ethnicity, and it looks at how they make choices about the form of language they use. It also examines the way people use language to negotiate their role in society and to achieve positions of power. For example, sociolinguistic studies have found that the way a New Yorker pronounces the phoneme /r/ in an expression such as ‘fourth floor’ can indicate the person’s social class. According to one study, people aspiring to move from the lower middle class to the upper middle class attach prestige to pronouncing the /r/. Sometimes they even overcorrect their speech, pronouncing an /r/ where those whom they wish to copy may not.

Some sociolinguists believe that analyzing such variables as the use of a particular phoneme can predict the direction of language change. Change, they say, moves toward the variable associated with power, prestige, or other quality having high social value. Other sociolinguists focus on what happens when speakers of different languages interact. This approach to language change emphasizes the way languages mix rather than the direction of change within a community. The goal of sociolinguistics is to understand communicative competence - what people need to know to use the appropriate language for a given social setting.

Psycholinguistics merges the fields of psychology and linguistics to study how people process language and how language use is related to underlying mental processes. Studies of children’s language acquisition and of second-language acquisition are psycholinguistic in nature. Psycholinguists work to develop models for how language is processed and understood, using evidence from studies of what happens when these processes go awry. They also study language disorders such as aphasia (impairment of the ability to use or comprehend words) and dyslexia (impairment of the ability to make out written language).

Computational linguistics involves the use of computers to compile linguistic data, analyze languages, translate from one language to another, and develop and test models of language processing. Linguists use computers and large samples of actual language to analyze the relatedness and the structure of languages and to look for patterns and similarities. Computers also aid in stylistic studies, information retrieval, various forms of textual analysis, and the construction of dictionaries and concordances. Applying computers to language studies has resulted in machine translation systems and machines that recognize and produce speech and text. Such machines facilitate communication with humans, including those who are perceptually or linguistically impaired.

Applied linguistics employs linguistic theory and methods in teaching and in research on learning a second language. Linguists look at the errors people make as they learn another language and at their strategies for communicating in the new language at different degrees of competence. In seeking to understand what happens in the mind of the learner, applied linguists recognize that motivation, attitude, learning style, and personality affect how well a person learns another language.

Anthropological linguistics, also known as linguistic anthropology, uses linguistic approaches to analyze culture. Anthropological linguists examine the relationship between a culture and its language, the way cultures and languages have changed over time, and how different cultures and languages are related to one another. For example, the present English use of family and given names arose in the late 13th and early 14th centuries when the laws concerning registration, tenure, and inheritance of property were changed.

Philosophical linguistics examines the philosophy of language. Philosophers of language search for the grammatical principles and tendencies that all human languages share. Among the concerns of linguistic philosophers is the range of possible word order combinations throughout the world. One well-known finding is that the overwhelming majority of the world's languages place the subject before the object, whether in the subject-verb-object (SVO) order English uses ('She pushed the bush.'), in subject-object-verb (SOV) order, or in verb-subject-object (VSO) order; orders in which the object precedes the subject are rare.

Neurolinguistics is the study of how language is processed and represented in the brain. Neurolinguists seek to identify the parts of the brain involved with the production and understanding of language and to determine where the components of language (phonemes, morphemes, and structure or syntax) are stored. In doing so, they make use of techniques for analyzing the structure of the brain and the effects of brain damage on language.

Speculation about language goes back thousands of years. Ancient Greek philosophers speculated on the origins of language and the relationship between objects and their names. They also discussed the rules that govern language, or grammar, and by the 3rd century BC they had begun grouping words into parts of speech and devising names for different forms of verbs and nouns.

In India religion provided the motivation for the study of language nearly 2500 years ago. Hindu priests noted that the language they spoke had changed since the compilation of their ancient sacred texts, the Vedas, starting about 1000 BC. They believed that for certain religious ceremonies based upon the Vedas to succeed, they needed to reproduce the language of the Vedas precisely. Panini, an Indian grammarian who lived about 400 BC, produced the earliest work describing the rules of Sanskrit, the ancient language of India.

The Romans used Greek grammars as models for their own, adding commentary on Latin style and usage. Statesman and orator Marcus Tullius Cicero wrote on rhetoric and style in the 1st century BC. Later grammarians Aelius Donatus (4th century AD) and Priscian (6th century AD) produced detailed Latin grammars. Roman works served as textbooks and standards for the study of language for more than 1000 years.

It was not until the end of the 18th century that language was researched and studied in a scientific way. During the 17th and 18th centuries, modern languages, such as French and English, replaced Latin as the means of universal communication in the West. This occurrence, along with developments in printing, meant that many more texts became available. At about this time, the study of phonetics, or the sounds of a language, began. Such investigations led to comparisons of sounds in different languages; in the late 18th century the observation of correspondences among Sanskrit, Latin, and Greek gave birth to the field of Indo-European linguistics.

During the 19th century, European linguists focused on philology, or the historical analysis and comparison of languages. They studied written texts and looked for changes over time or for relationships between one language and another.

In the early 20th century, linguistics expanded to include the study of unwritten languages. In the United States linguists and anthropologists began to study the rapidly disappearing spoken languages of Native North Americans. Because many of these languages were unwritten, researchers could not use historical analysis in their studies. In their pioneering research on these languages, anthropologists Franz Boas and Edward Sapir developed the techniques of descriptive linguistics and theorized on the ways in which language shapes our perceptions of the world.

An important outgrowth of descriptive linguistics is a theory known as structuralism, which assumes that language is a system with a highly organized structure. Structuralism began with publication of the work of Swiss linguist Ferdinand de Saussure in Cours de linguistique générale (1916; Course in General Linguistics, 1959). This work, compiled by Saussure’s students after his death, is considered the foundation of the modern field of linguistics. Saussure made a distinction between actual speech, or spoken language, and the knowledge underlying speech that speakers share about what is grammatical. Speech, he said, represents instances of grammar, and the linguist’s task is to find the underlying rules of a particular language from examples found in speech. To the structuralist, grammar is a set of relationships that account for speech, rather than a set of instances of speech, as it is to the descriptivist.

Once linguists began to study language as a set of abstract rules that somehow account for speech, other scholars began to take an interest in the field. They drew analogies between language and other forms of human behavior, based on the belief that a shared structure underlies many aspects of a culture. Anthropologists, for example, became interested in a structuralist approach to the interpretation of kinship systems and analysis of myth and religion. American linguist Leonard Bloomfield promoted structuralism in the United States.

Saussure’s ideas also influenced European linguistics, most notably in France and Czechoslovakia (now the Czech Republic). In 1926 Czech linguist Vilem Mathesius founded the Linguistic Circle of Prague, a group that expanded the focus of the field to include the context of language use. The Prague circle developed the field of phonology, or the study of sounds, and demonstrated that universal features of sounds in the languages of the world interrelate in a systematic way. Linguistic analysis, they said, should focus on the distinctiveness of sounds rather than on the ways they combine. Where descriptivists tried to locate and describe individual phonemes, such as /b/ and /p/, the Prague linguists stressed the features of these phonemes and their interrelationships in different languages. In English, for example, the voice distinguishes between the similar sounds of /b/ and /p/, but these are not distinct phonemes in a number of other languages. An Arabic speaker might pronounce the cities Pompei and Bombay the same way.

As linguistics developed in the 20th century, the notion became prevalent that language is more than speech - specifically, that it is an abstract system of interrelationships shared by members of a speech community. Structural linguistics led linguists to look at the rules and the patterns of behavior shared by such communities. Whereas structural linguists saw the basis of language in the social structure, other linguists looked at language as a mental process.

The 1957 publication of Syntactic Structures by American linguist Noam Chomsky initiated what many view as a scientific revolution in linguistics. Chomsky sought a theory that would account for both linguistic structure and for the creativity of language - the fact that we can create entirely original sentences and understand sentences never before uttered. He proposed that all people have an innate ability to acquire language. The task of the linguist, he claimed, is to describe this universal human ability, known as language competence, with a grammar from which the grammars of all languages could be derived. The linguist would develop this grammar by looking at the rules children use in hearing and speaking their first language. He termed the resulting model, or grammar, a transformational-generative grammar, referring to the transformations (or rules) that generate (or account for) language. Certain rules, Chomsky asserted, are shared by all languages and form part of a universal grammar, while others are language specific and associated with particular speech communities. Since the 1960s much of the development in the field of linguistics has been a reaction to or against Chomsky’s theories.

At the end of the 20th century, linguists used the term grammar primarily to refer to a subconscious linguistic system that enables people to produce and comprehend an unlimited number of utterances. Grammar thus accounts for our linguistic competence. Observations about the actual language we use, or language performance, are used to theorize about this invisible mechanism known as grammar.

The orientation toward the scientific study of language led by Chomsky has had an impact on nongenerative linguists as well. Comparative and historically oriented linguists are looking for the various ways linguistic universals show up in individual languages. Psycholinguists, interested in language acquisition, are investigating the notion that an ideal speaker-hearer is the origin of the acquisition process. Sociolinguists are examining the rules that underlie the choice of language variants, or codes, and allow for switching from one code to another. Some linguists are studying language performance - the way people use language - to see how it reveals a cognitive ability shared by all human beings. Others seek to understand animal communication within such a framework. What mental processes enable chimpanzees to make signs and communicate with one another, and how do these processes differ from those of humans?

Noam Chomsky created and established a new field of linguistics, generative grammar, based on a theory he worked on during the 1950s. In 1957 he published this theory, called transformational-generative grammar, in his book Syntactic Structures. Chomsky made a distinction between the innate, often unconscious knowledge people have of their own language and the way in which they use the language in reality. The former, which he termed competence, enables people to generate all possible grammatical sentences. The latter, which he called performance, is the transformation of this competence into everyday speech. Prior to Chomsky, most theories about the structure of language described performance; they were descriptive grammars. Chomsky proposed that linguistic theory also should explain the mental processes that underlie the use of language - in other words, the nature of language itself, or generative grammar.

Chomsky placed linguistics at the core of studies of the mind. He claimed that linguistic theory must account for universal similarities between all languages and for the fact that children are able to learn language fluently at an early age in spite of the limited and unsystematic data available to them. His contribution to the cognitive sciences - fields that seek to understand how we think, learn, and perceive - emerges from this claim. Of equal importance were Chomsky's arguments that a serious theory of mental processes should replace empiricism, the belief that experience is the source of knowledge, as the dominant model in American science.

Chomsky wrote on politics early in his life but began to publish more on the subject during the 1960s in response to United States policies in Southeast Asia. He deliberately scaled back his work on linguistics to dedicate more time to writing about the role of the media and academic communities in ‘manufacturing’ the consent of the general public for U.S. policies. Chomsky also addressed the effects of U.S. foreign policy, and he felt that intellectuals have a responsibility to use scientific method in criticizing government policies that they find immoral and to develop practical strategies to combat these policies.

The power of the mind to think of a past that no longer exists poses both empirical, psychological problems and more abstract philosophical ones. The scientist wants to know how the brain stores its memories, and whether the mechanism is similar for different types of memory, such as short-term and long-term memories. The philosopher is particularly puzzled by the representative power of memory. That is, if I summon up a memory of some event, how do I know to interpret it as representing the past, rather than as a pure exercise of imagination? Is it that the memory comes accompanied by a feeling of pastness? But, if so, might I not then have the feeling, yet not know to interpret it as a feeling of pastness? Indeed, is there always a present representation, or might memory be a form of direct acquaintance with the past? The latter might at least give us a justification of the confidence we place in memory. But is not the sceptical hypothesis proposed by Russell, that the earth might have sprung into existence five minutes ago, with a population that 'remembers' a wholly unreal past, at least logically possible? And if it is logically possible, the question of how we know that this is not what happened is apt to look intractable.

Memory, or psychological retentiveness, involves the processes by which people and other organisms encode, store, and retrieve information. Encoding refers to the initial perception and registration of information. Storage is the retention of encoded information over time. Retrieval refers to the processes involved in using stored information. Whenever people successfully recall a prior experience, they must have encoded, stored, and retrieved information about the experience. Conversely, memory failure - for example, forgetting an important fact - reflects a breakdown in one of these stages of memory.

Memory is critical to humans and all other living organisms. Practically all of our daily activities - talking, understanding, reading, socializing - depend on our having learned and stored information about our environments. Memory allows us to retrieve events from the distant past or from moments ago. It enables us to learn new skills and to form habits. Without the ability to access past experiences or information, we would be unable to comprehend language, recognize our friends and family members, find our way home, or even tie a shoe. Life would be a series of disconnected experiences, each one new and unfamiliar. Without any sort of memory, humans would quickly perish.

Philosophers, psychologists, writers, and other thinkers have long been fascinated by memory. Among their questions: How does the brain store memories? Why do people remember some bits of information but not others? Can people improve their memories? What is the capacity of memory? Memory also is frequently a subject of controversy because of questions about its accuracy. An eyewitness’s memory of a crime can play a crucial role in determining a suspect’s guilt or innocence. However, psychologists agree that people do not always recall events as they actually happened, and sometimes people mistakenly recall events that never happened.

Memory and learning are closely related, and the terms often describe roughly the same processes. The term learning is often used to refer to processes involved in the initial acquisition or encoding of information, whereas the term memory more often refers to later storage and retrieval of information. However, this distinction is not hard and fast. After all, information is learned only when it can be retrieved later, and retrieval cannot occur unless information was learned. Thus, psychologists often refer to the learning/memory process as a means of incorporating all facets of encoding, storage, and retrieval.

Although the English language uses a single word for memory, there are actually many different kinds. Most theoretical models of memory distinguish three main systems or types: sensory memory, short-term or working memory, and long-term memory. Within each of these categories are further divisions.

Sensory memory refers to the initial, momentary recording of information in our sensory systems. When sensations strike our eyes, they linger briefly in the visual system. This kind of sensory memory is called iconic memory and refers to the usually brief visual persistence of information as it is being interpreted by the visual system. Echoic memory is the name applied to the same phenomenon in the auditory domain: the brief mental echo that persists after information has been heard. Similar systems are assumed to exist for other sensory systems (touch, taste, and smell), although researchers have studied these senses less thoroughly.

American psychologist George Sperling demonstrated the existence of sensory memory in an experiment in 1960. Sperling asked subjects in the experiment to look at a blank screen. Then he flashed an array of 12 letters on the screen for one-twentieth of a second, arranged in three rows of four letters each.

Subjects were then asked to recall as many letters from the image as they could. Most could only recall four or five letters accurately. Subjects knew they had seen more letters, but they were unable to name them. Sperling hypothesized that the entire letter-array image registered briefly in sensory memory, but the image faded too quickly for subjects to ‘see’ all the letters. To test this idea, he conducted another experiment in which he sounded a tone immediately after flashing the image on the screen. A high tone directed subjects to report the letters in the top row, a medium tone cued subjects to report the middle row, and a low tone directed subjects to report letters in the bottom row. Sperling found that subjects could accurately recall the letters in each row most of the time, no matter which row the tone specified. Thus, all of the letters were momentarily available in sensory memory.

Sensory memory systems typically function outside of awareness and store information for only a very short time. Iconic memory seems to last less than a second. Echoic memory probably lasts a bit longer; estimates range up to three or four seconds. Usually sensory information coming in next replaces the old information. For example, when we move our eyes, new visual input masks or erases the first image. The information in sensory memory vanishes unless it captures our attention and enters working memory.

Psychologists originally used the term short-term memory to refer to the ability to hold information in mind over a brief period of time. As conceptions of short-term memory expanded to include more than just the brief storage of information, psychologists created new terminology. The term working memory is now commonly used to refer to a broader system that both stores information briefly and allows manipulation and use of the stored information.

Scientists do not completely understand how memory is stored in the human brain. Some researchers believe that short-term and long-term memories reside in separate regions of the brain.

We can keep information circulating in working memory by rehearsing it. For example, suppose you look up a telephone number in a directory. You can hold the number in memory almost indefinitely by saying it over and over to yourself. But if something distracts you for a moment, you may quickly lose it and have to look it up again. Forgetting can occur rapidly from working memory.

Psychologists often study working memory storage by examining how well people remember a list of items. In a typical experiment, people are presented with a series of words, one every few seconds. Then they are instructed to recall as many of the words as they can, in any order. Most people remember the words at the beginning and end of the series better than those in the middle. This phenomenon is called the serial position effect because the chance of recalling an item is related to its position in the series. The results from one such experiment are shown in the accompanying chart entitled ‘Serial Position Effect.’ In this experiment, recall was tested either immediately after presentation of the list items or after 30 seconds. Subjects in both conditions demonstrated what is known as the primacy effect, which is better recall of the first few list items. Psychologists believe this effect occurs because people tend to process the first few items more than later items. Subjects in the immediate-recall condition also showed the recency effect, or better recall of the last items on the list. The recency effect occurs because people can store recently presented information temporarily in working memory. When the recall test is delayed for 30 seconds, however, the information in working memory fades, and the recency effect disappears.

Working memory has a basic limitation: It can hold only a limited amount of information at one time. Early research on short-term storage of information focused on memory span - how many items people can correctly recall in order. Researchers would show people increasingly long sequences of digits or letters and then ask them to recall as many of the items as they could. In 1956 American psychologist George Miller reviewed many experiments on memory span and concluded that people could hold an average of seven items in short-term memory. He referred to this limit as ‘the magical number seven, plus or minus two’ because the results of the studies were so consistent. More recent studies have attempted to separate true storage capacity from processing capacity by using tests more complex than memory span. These studies have estimated a somewhat lower short-term storage capacity than did the earlier experiments. People can overcome such storage limitations by grouping information into chunks, or meaningful units.
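A small illustration of the point about chunking (the grouping below is an invented example, not one of Miller's): twelve digits exceed the roughly seven-item span, but the same digits recoded as three familiar years fit comfortably.

```python
# Twelve digits presented one by one: twelve separate items, beyond the ~7-item span.
digits = list("177618651945")
print(len(digits))   # 12

# The same digits recoded into meaningful chunks (three well-known years):
chunks = ["1776", "1865", "1945"]
print(len(chunks))   # 3 chunks, comfortably within the span
```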

Working memory is critical for mental work, or thinking. Suppose you are trying to solve the arithmetic problem 64 × 9 in your head. You probably would need to perform some intermediate calculations in your head before arriving at the final answer. The ability to carry out these kinds of calculations depends on working memory capacity, which varies individually. Studies have also shown that working memory changes with age. As children grow older, their working memory capacity increases. Working memory declines in old age and in some types of brain diseases, such as Alzheimer’s disease.
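For instance, one common way of doing the calculation in one's head can be written out as a pair of intermediate steps (any decomposition would do; this one simply makes the intermediate products explicit):

```latex
64 \times 9 \;=\; 64 \times (10 - 1) \;=\; 640 - 64 \;=\; 576
```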

Working memory capacity is correlated with intelligence (as measured by intelligence tests). This correlation has led some psychologists to argue that working memory abilities are essentially those that underlie general intelligence. The more capacity people have to hold information in mind while they think, the more intelligent they are. In addition, research suggests that there are different types of working memory. For example, the ability to hold visual images in mind seems independent from the ability to retain verbal information.

The term long-term memory is somewhat of a catch-all phrase because it can refer to facts learned a few minutes ago, personal memories many decades old, or skills learned with practice. Generally, however, long-term memory describes a system in the brain that can store vast amounts of information on a relatively enduring basis. When you play soccer, remember what you had for lunch yesterday, recall your first birthday party, play a trivia game, or sing along to a favorite song, you draw on information and skills stored in long-term memory.

Psychologists have different theories about how information enters long-term memory. The traditional view is that information enters short-term memory and, depending on how it is processed, may then transfer to long-term memory. However, another view is that short-term memory and long-term memory are arranged in a parallel rather than sequential fashion. That is, information may be registered simultaneously in the two systems.

There seems to be no fixed limit on the capacity of long-term memory. People can learn and retain new facts and skills throughout their lives. Although older adults may show a decline in certain capacities - for example, recalling recent events - they can still profit from experience even in old age. For example, vocabulary increases over the entire life span. The brain remains plastic and capable of new learning throughout one’s lifetime, at least under normal conditions. Certain neurological diseases, such as Alzheimer’s disease, can greatly diminish the capacity for new learning.

Psychologists once thought of long-term memory as a single system. Today, most researchers distinguish three long-term memory systems: episodic memory, semantic memory, and procedural memory.

Episodic memory refers to memories of specific episodes in one’s life and is what most people think of as memory. Episodic memories are connected with a specific time and place. If you were asked to recount everything you did yesterday, you would rely on episodic memory to recall the events. Similarly, you would draw on episodic memory to describe a family vacation, the way you felt when you won an award, or the circumstances of a childhood accident. Episodic memory contains the personal, autobiographical details of our lives.

Semantic memory refers to our general knowledge of the world and all of the facts we know. Semantic memory allows a person to know that the chemical formula for table salt is NaCl, that dogs have four legs, that Radek Vizina was a Canadian-born artist, that 3 × 3 equals 9, and thousands of other facts. Semantic memories are not tied to the particular time and place of learning. For example, in order to remember that Radek Vizina was an artist, people do not have to recall the time and place that they first learned this fact. The knowledge transcends the original context in which it was learned. In this respect, semantic memory differs from episodic memory, which is closely related to time and place. Semantic memory also seems to have a different neural basis than episodic memory. Brain-damaged patients who have great difficulties remembering their own recent personal experiences often can access their permanent knowledge quite readily. Thus, episodic memory and semantic memory seem to represent independent capacities.

Procedural memory refers to the skills that humans possess. Tying shoelaces, riding a bicycle, swimming, and hitting a baseball are examples of procedural memory. Procedural memory is often contrasted with episodic and semantic memory. Episodic and semantic memory are both classified as types of declarative memory because people can consciously recall facts, events, and experiences and then verbally declare or describe their recollections. In contrast, nondeclarative, or procedural, memory is expressed through performance and typically does not require a conscious effort to recall.

Could you learn how to tie your shoelaces or to swim through purely declarative means - say, by reading or listening to descriptions of how to do it? If it were possible at all, the process would be slow, difficult, and unnatural. People best gain procedural knowledge by practicing the procedures directly, not via instructions given in words. Verbal coaching in sports is partly a case of trying to impart procedural knowledge through declarative means, although coaching by example (and videotape) may work better. Still, in most cases there is no substitute for practice. Procedural learning may take considerable effort, and improvements can occur over a long period of time. The accompanying chart, entitled ‘Practice and Speed in Cigar-Making,’ shows the effect of practice on Cuban factory workers making cigars. The performance of the workers continued to improve even after they had produced more than 100,000 cigars.

Although long-term episodic, semantic, and procedural memory all represent independent systems, it would usually be wrong to think of a particular task as relying exclusively on one type. The examples used above (remembering yesterday’s events, knowing that Radek Vizina was a Canadian-born artist) represent relatively pure cases. However, most human activities rely on the interaction of long-term memory systems. Consider the expression of social skills or, more specifically, table manners. If you know to set the dinner table with the fork to the left of each plate, is this an example of procedural memory, semantic memory, or even episodic memory from having witnessed a past example? Probably the answer is some blend of all three. In addition, procedural memory does not apply only to physical skills, as in the previous examples. Complex cognitive behavior, such as reading or remembering, also has a procedural component - the mental procedures we execute to perform these activities. Thus, the separation of procedural and declarative memory from one another is not clear-cut in all cases.

Encoding is the process of perceiving information and bringing it into the memory system. Encoding is not simply copying information directly from the outside world into the brain. Rather, the process is properly conceived as recoding, or converting information from one form to another. The human visual system provides an example of how information can change forms. Light from the outside world enters the eye in the form of waves of electromagnetic radiation. The retina of the eye transduces (converts) this radiation to bioelectrical signals that the brain interprets as visual images. Similarly, when people encode information into memory, they convert it from one form to another to help them remember it later. For example, a simple digit, such as 7, can be recoded in many ways: as the word seven, the roman numeral VII, a prime number, the square root of 49, and so on. Recoding is routine in memory. Each of us has a unique background and set of experiences that help or hinder us in learning new information. An ornithologist could learn a list of obscure bird names much more easily than most of us due to his or her prior knowledge about birds, which would permit efficient recoding.

Recoding is often the key to efficient remembering. To understand the concept of recoding, first try to remember the following series of numbers by reading it once out loud, closing your eyes, and trying to recall the items in their correct order: one, four, nine, one, six, two, five, three, six, four, nine, six, four, eight, one. Test yourself now. If you are like most people, you might have recalled around 7 of the 15 digits in their correct order. However, a simple recoding strategy would have helped you to recall them effortlessly. Write the numbers out in digits and you may notice that they represent the squares of the numbers 1 to 9: 1, 4, 9, 16, 25, 36, 49, 64, 81. That is, 1 squared is 1, 2 squared is 4, 3 squared is 9, 4 squared is 16, and so on. Recoding the series of numbers as a meaningful rule - the squares of the numbers 1 to 9 - would have permitted you to remember all 15 digits. Although this example is contrived, the principle that underlies it is universally valid: How well a person remembers information depends on how the information is recoded. Recoding is sometimes called chunking, because separate bits of information can be grouped into meaningful units, or chunks. For example, the five letters e, t, s, e, and l can be rearranged into sleet and one word remembered instead of five individual units.
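
A brief Python sketch (purely illustrative, not from the source) makes the recoding explicit: the 15 digits in the series above are simply the squares of 1 through 9 written out in order, so one rule stands in for 15 separate items.

    # Illustrative sketch: the 15-digit series is the squares of 1 through 9.
    squares = [n * n for n in range(1, 10)]      # [1, 4, 9, 16, 25, 36, 49, 64, 81]
    digits = "".join(str(s) for s in squares)    # '149162536496481'
    print(digits)
    print(len(digits))                           # 15 digits, remembered as one rule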

Psychologists have studied many different recoding strategies. One common strategy that people often use to remember items of information is to rehearse them, or to repeat them mentally. However, simply repeating information over and over again rarely aids long-term retention - although it works perfectly well to hold information, such as a phone number, in working memory. A more effective way to remember information is through effortful or elaborative processing, which involves thinking about information in a meaningful way and associating it with existing information in long-term memory.

One effective form of effortful processing is turning information into mental imagery. For example, one experiment compared two groups of people who were given different instructions on how to encode a list of words into memory. Some people were told to repeat the words over and over, and some were told to form mental pictures of the words. For words referring to concrete objects, such as truck and volleyball, forming mental images of each object led to better later recall than did rote rehearsal.

Thinking about the meaning of information is also a good technique for most memory tasks. Studies have found that the more deeply we process information, the more likely we are to recall it later. In 1975 Canadian psychologists Fergus Craik and Endel Tulving conducted a set of experiments that demonstrated this effect. The experimenters asked subjects to answer questions about a series of words, such as bear, which were flashed one at a time. For each word, subjects were asked one of three types of questions, each requiring a different level of processing or analysis. Sometimes subjects were asked about the word’s visual appearance: ‘Is the word in upper case letters?’ For other words, subjects were asked to focus on the sound of the word: ‘Does it rhyme with chair?’ The third type of question required people to think about the meaning of the word: ‘Is it an animal?’ When subjects were later given a recognition test for the words they had seen, they were poor at recognizing words they had encoded superficially by visual appearance or sound. They were far better at recognizing words they had encoded for meaning.

Although some information requires deliberate, effortful processing to store in long-term memory, a vast amount of information is encoded automatically, without effort or awareness. Every day each of us encodes and stores thousands of events and facts, most of which we will never need to recall. For example, people do not have to make a conscious effort to remember the face of a person they meet for the first time. They can easily recognize the person’s face in future encounters. Studies have shown that people also encode information about spatial locations, time, and the frequency of events without intending to. For instance, people can recognize how many times a certain word was presented in a long series of words with relative accuracy.

People have developed many elaborate and imaginative recoding strategies, known as mnemonic devices, to aid them in remembering information.

Encoding and storage are necessary to acquire and retain information. But the crucial process in remembering is retrieval, without which we could not access our memories. Unless we retrieve an experience, we do not really remember it. In the broadest sense, retrieval refers to the use of stored information.

For many years, psychologists considered memory retrieval to be the deliberate recollection of facts or past experiences. However, in the early 1980s psychologists began to realize that people can be influenced by past experiences without any awareness that they are remembering. For example, a series of experiments showed that brain-damaged amnesic patients - who lose certain types of memory function - were influenced by previously viewed information even though they had no conscious memory of having seen the information before. Based on these and other findings, psychologists now distinguish two main classes of retrieval processes: explicit memory and implicit memory.

Explicit memory refers to the deliberate, conscious recollection of facts and past experiences. If someone asked you to recall everything you did yesterday, this task would require explicit memory processes. There are two basic types of explicit memory tests: recall tests and recognition tests.

In recall tests, people are asked to retrieve memories without the benefit of any hints or cues. A request to remember everything that happened to you yesterday or to recollect all the words in a list you just heard would be an example of a recall test. Suppose you were briefly shown a series of words: cow, prize, road, gem, hobby, string, weather. A recall test would require you to write down or say as many of the words as you could. If you were instructed to recall the words in any order, the test would be one of free recall. If you were directed to recall the words in the order they were presented, the test would be one of serial recall or ordered recall. Another type of test is cued recall, in which people are given cues or prompts designed to aid recall. Using the above list as an example, a cued recall test might ask, ‘What word on the list was related to car?’ In school, tests that require an essay or fill-in-the-blank response are examples of recall tests. All recall tests require people to explicitly retrieve events from memory.
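
The difference between free and serial recall can be made concrete with a small scoring sketch (hypothetical Python, using the word list above): free recall credits any studied word, while serial recall credits a word only when it appears in its original position.

    # Hypothetical scoring sketch for the two recall tests described above.
    studied = ["cow", "prize", "road", "gem", "hobby", "string", "weather"]
    response = ["road", "cow", "gem", "weather"]          # one subject's recall

    free_score = sum(1 for word in response if word in studied)
    serial_score = sum(1 for i, word in enumerate(response)
                       if i < len(studied) and word == studied[i])

    print(free_score)     # 4 - every response was on the list
    print(serial_score)   # 0 - none was recalled in its original position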

Recognition tests require people to examine a list of items and identify those they have seen before, or to determine whether they have seen a single item before. Multiple-choice and true-false exams are types of recognition tests. For example, a recognition test on the list of words above might ask, ‘Which of the following words appeared on the list? (1) plant (2) driver (3) string (4) radio.’ People can often recognize items that they cannot recall. You have probably had the experience of not being able to answer a question but then recognizing an answer as correct when someone else supplies it. Likewise, adults shown yearbook pictures of their high-school classmates often have difficulty recalling the classmates’ names, but they can easily pick the classmates’ names out of a list.

In some cases, recall can be better than recognition. For example, if asked, ‘Do you know a famous person named Cooper?’ you might answer ‘no.’ However, given the cue ‘James Fenimore,’ you might recall American writer James Fenimore Cooper, even though you did not recognize the surname by itself.

Psychologists use the term priming to describe the relatively automatic change in performance resulting from prior exposure to information. Priming occurs even when people do not consciously remember being exposed to the information. One way to look for evidence of implicit memory, therefore, is to measure priming effects. In typical implicit memory experiments, subjects study a long list of words, such as assassin and boyhood. Later, subjects are presented with a series of word fragments (such as a _ _ a _ _ i n and b _ _ h o _ d) or word ‘stems’ (such as as ______ or bo ______) and are instructed to complete the fragment or stem with the first word that comes to mind. The subjects are not explicitly asked to recall the list words. Nevertheless, the previous presentation of assassin and boyhood primes subjects to complete the fragments with these words more often than would be expected by guessing. This priming effect occurs even if the subjects do not remember studying the words before - strong evidence of implicit memory. The hallmark of all implicit memory tests is that people are not required to remember; rather, they are given a task, and past experience is expressed on the test relatively automatically.
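
As a rough illustration of the fragment-completion task (not a description of any actual experimental software), the Python sketch below checks whether a candidate word fits a fragment; the fragments are written without spaces, with underscores standing for the missing letters.

    # Illustrative sketch: does a word fit a fragment such as "a__a__in"?
    def fits_fragment(word, fragment):
        """A word fits if it has the same length and matches every given letter."""
        if len(word) != len(fragment):
            return False
        return all(f == "_" or f == w for f, w in zip(fragment, word))

    print(fits_fragment("assassin", "a__a__in"))   # True  - the primed completion
    print(fits_fragment("boyhood", "b__ho_d"))     # True
    print(fits_fragment("mountain", "a__a__in"))   # False - letters clash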

Remarkably, even amnesic individuals show implicit memory. In one experiment, amnesic patients and normal subjects studied lists of words and then were given both an explicit memory test (free recall) and an implicit memory test (word-stem completion). Relative to control subjects, the amnesic patients failed miserably at the free-recall test. Due to their memory disorder, they could consciously remember very few of the list words. On the implicit test, however, the amnesic patients performed as well or better than the normal subjects. Even though the amnesic patients could not consciously access the desired information, they expressed prior learning in the form of priming on the implicit memory test. They retained the information without knowing it.

Studies have found that a person’s performance on implicit memory tests can be relatively independent of his or her performance on explicit tests. Some factors that have large effects on explicit memory test performance have no effect - or even the opposite effect - on implicit memory test performance. For example, whether people pay attention to the appearance, the sound, or the meaning of words has a huge effect on how well they can explicitly recall the words later. But this variable has practically no effect on their implicit memory tests.

One fascinating feature of remembering is how a cue from the external world can cause us to suddenly remember something from years ago. For example, returning to where you once lived or went to school may bring back memories of events experienced long ago. Sights, sounds, and smells can all trigger recall of long dormant events. These experiences point to the critical nature of retrieval in remembering.

A retrieval cue is any stimulus that helps us recall information in long-term memory. The fact that retrieval cues can provoke powerful recollections has led some researchers to speculate that perhaps all memories are permanent. That is, perhaps nearly all experiences are recorded in memory for a lifetime, and all forgetting is due not to the actual loss of memories but to our inability to retrieve them. This idea is an interesting one, but most memory researchers believe it is probably wrong.

Two general principles govern the effectiveness of retrieval cues. One is called the encoding specificity principle. According to this principle, stimuli may act as retrieval cues for an experience if they were encoded with the experience. Pictures, words, sounds, or smells will cause us to remember an experience to the extent that they are similar to the features of the experience that we encoded into memory. For example, the smell of cotton candy may trigger your memory of a specific amusement park because you smelled cotton candy there.

Distinctiveness is another principle that determines the effectiveness of retrieval cues. Suppose a group of people is instructed to study a list of 100 items. Ninety-nine are words, but one item in the middle of the list is a picture of an elephant. If people were given the retrieval cue ‘Which item was the picture?’ almost everyone would remember the elephant. However, suppose another group of people was given a different 100-item list in which the elephant picture appeared in the same position, but all the other items were also pictures of other objects and animals. Now the retrieval cue would not enable people to recall the picture of the elephant because the cue is no longer distinctive. Distinctive cues specify one or a few items of information.

Overt cues such as sights and sounds can clearly induce remembering. But evidence indicates that more subtle cues, such as moods and physiological states, can also influence our ability to recall events. State-dependent memory refers to the phenomenon in which people can retrieve information better if they are in the same physiological state as when they learned the information. The initial observations that aroused interest in state-dependent memory came from therapists working with alcoholic patients. When sober, patients often could not remember some act they performed when intoxicated. For example, they might put away a paycheck while intoxicated and then forget where they put it. This memory failure is not surprising, because alcohol and other depressant drugs (such as marijuana, sedatives, and even antihistamines) are known to impair learning and memory. However, in the case of the alcoholics, if they got drunk again after a period of abstinence, they sometimes recovered the memory of where the paycheck was. This observation suggested that perhaps drug-induced states function as retrieval cues.

A number of studies have confirmed this hypothesis. In one typical experiment, volunteers drank an alcoholic or nonalcoholic beverage before studying a list of words. A day later, the same subjects were asked to recall as many of the words as they could, either in the same state as they were in during the learning phase (intoxicated or sober) or in a different state. Not surprisingly, individuals intoxicated during learning but sober during the test did worse at recall than those sober during both phases. In addition, people who studied material sober and then were tested while intoxicated did worse than those sober for both phases. The most interesting finding, however, was that people intoxicated during both the learning and test phase did much better at recall than those who were intoxicated only during learning, showing the effect of state-dependent memory. When people are in the same state during study and testing, their recall is better than those tested in a different state. However, one should not conclude that alcohol improves memory. As noted, alcohol and other depressant drugs usually impair memory and most other cognitive processes. Those who had alcohol during both phases remembered less than those who were sober during both phases.

Psychologists have also studied the topic of mood-dependent memory. If people are in a sad mood when exposed to information, will they remember it better later if they are in a sad mood when they try to retrieve it? Although experiments testing this idea have produced mixed results, most find evidence for mood-dependent memory. Recall tests are usually more sensitive to mood- and state-dependent effects than are recognition or implicit memory tests. Recognition tests may provide powerful retrieval cues that overshadow the effects of more subtle state and mood cues.

Mood- and state-dependent memory effects are further examples of the encoding specificity principle. If mood or drug state is encoded as part of the learning experience, then providing this cue during retrieval enhances performance.

Psychologists have explored several puzzling phenomena of retrieval that nearly everyone has experienced. These include déjà vu, jamais vu, flashbulb memories, and the tip-of-the-tongue state.

The sense of déjà vu (French for ‘seen before’) is the strange sensation of having been somewhere before, or experienced your current situation before, even though you know you have not. One possible explanation of déjà vu is that aspects of the current situation act as retrieval cues that unconsciously evoke an earlier experience, resulting in an eerie sense of familiarity. Another puzzling phenomenon is the sense of jamais vu (French for ‘never seen’). This feeling arises when people feel they are experiencing something for the first time, even though they know they must have experienced it before. The encoding specificity principle may partly explain jamais vu; despite the overt similarity of the current and past situations, the cues of the current situation do not match the encoded features of the earlier situation.

A flashbulb memory is an unusually vivid memory of an especially emotional or dramatic past event. For example, the death of Princess Diana in 1997 created a flashbulb memory for many people. People remember where they were when they heard the news, whom they heard it from, and other seemingly fine details of the event and how they learned of it. Examples of other public events for which many people have flashbulb memories are the assassination of U.S. President John F. Kennedy in 1963, the explosion of the space shuttle Challenger in 1986, and the bombing of the Oklahoma City federal building in 1995. Flashbulb memories may also be associated with vivid emotional experiences in one’s own life: the death of a family member or close friend, the birth of a baby, being in a car accident, and so on.

Are flashbulb memories as accurate as they seem? In one study, people were asked the day after the Challenger explosion to report how they learned about the news. Two years later the same people were asked the same question. One-third of the people gave answers different from the ones they originally reported. For example, some people initially reported hearing about the event from a friend, but then two years later claimed to have gotten the news from television. Therefore, flashbulb memories are not faultless, as is often supposed.

Flashbulb memories may seem particularly vivid for a variety of reasons. First, the events are usually quite distinctive and hence memorable. In addition, many studies show that events causing strong emotion (either positive or negative) are usually well remembered. Finally, people often think about and discuss striking events with others, and this periodic rehearsal may help to increase retention of the memory.

Another curious phenomenon is the tip-of-the-tongue state. This term refers to the situation in which a person tries to retrieve a relatively familiar word, name, or fact, but cannot quite do so. Although the missing item seems almost within grasp, its retrieval eludes the person for some time. The feeling has been described as like being on the brink of a sneeze. Most people regard the tip-of-the-tongue state as mildly unpleasant and its eventual resolution, if and when it comes, as a relief. Studies have shown that older adults are more prone to the tip-of-the-tongue phenomenon than are younger adults, although people of all ages report the experience.

Often when a person cannot retrieve the correct bit of information, some other wrong item intrudes into one’s thoughts. For example, in trying to remember the name of a short, slobbering breed of dog with long ears and a sad face, a person might repeatedly retrieve beagle but know that it is not the right answer. Eventually the person might recover the sought-after name, basset hound.

One theory of the tip-of-the-tongue state is that the intruding item essentially clogs the retrieval mechanism and prevents retrieval of the correct item. That is, the person cannot think of basset hound because beagle gets in the way and blocks retrieval of the correct name. Another idea is that the phenomenon occurs when a person has only partial information that is simply insufficient to retrieve the correct item, so the failure is one of activation of the target item (basset hound in this example). Both the partial activation theory and the blocking theory could be partly correct in explaining the tip-of-the-tongue phenomenon.

One of the most controversial issues in the study of memory is the accuracy of recollections, especially over long periods of time. We would like to believe that our cherished memories of childhood and other periods in our life are faithful renditions of the past. However, several case studies and many experiments show that memories - even when held with confidence - can be quite erroneous.

The Swiss psychologist Jean Piaget reported a striking case from his own past. He had a firm memory from early childhood of his nurse fending off an attempted kidnapping, with himself as the potential victim. He remembered his nanny pushing him in his carriage when a man came up and tried to kidnap him. He had a detailed memory of the man, of the location of the event, of scratches that his nanny received when she fended off the villain, and finally, of a police officer coming to the rescue. However, when Piaget was 15 years old, his nanny decided to confess her past sins. One of these was that she had made up the entire kidnapping story to attract sympathy and scratched herself to make it seem real. The events Piaget so vividly remembered from his childhood had never actually occurred! Piaget concluded that the false memory was probably implanted by the nanny’s frequent retelling of the original story over the years. Eventually, the scene became rooted in Piaget’s memory as an actual event.

Psychologists generally accept the idea that long-term memories are reconstructive. That is, rather than containing an exact and detailed record of our past, like a video recording, our memories are instead more generic. As a better analogy, consider paleontologists who must reconstruct a dinosaur from bits and pieces of actual bones. They begin with a general idea or scheme of what the dinosaur looked like and then fit the bits and pieces into the overall framework. Likewise, in remembering, we begin with general themes about past events and later weave in bits and pieces of detail to develop a coherent story. Whether the narrative that we weave today can faithfully capture the distant past is a matter of dispute. In many cases psychologists have discovered that recollections can deviate greatly from the way the events actually occurred, just as in the anecdote about Piaget.

Sir Frederic Bartlett, a British psychologist, argued for the reconstructive nature of memory in the 1930s. He introduced the term schema and its plural form schemata to refer to the general themes that we retain of experience. For example, if you wanted to remember a new fairy tale, you would try to integrate information from the new tale into your general schema for what a fairy tale is. Many researchers have shown that schemata can distort the memories that people form of events. That is, people will sometimes remove or omit details of an experience from memory if they do not fit well with the schema. Similarly, people may confidently remember details that did not actually occur because they are consistent with the schema.

Another way our cognitive system introduces error is by means of inference. Whenever humans encode information, they tend to make inferences and assumptions that go beyond the literal information given. For example, one study showed that if people read a sentence such as ‘The karate champion hit the cinder block,’ they would often remember the sentence as ‘The karate champion broke the cinder block.’ The remembered version of the events is implied by the original sentence but is not literally stated there (the champion may have hit the block and not broken it). Many memory distortions arise from these errors of encoding, in which the information encoded into memory is not literally what was perceived but is some extension of it.

The question of memory distortion has particular importance in the courtroom. Each year thousands of people are charged with crimes solely on the basis of eyewitness testimony, and in many trials an eyewitness’s testimony is the main evidence by which juries decide a suspect’s guilt or innocence. Are eyewitnesses’ memories accurate? Although eyewitness testimony is often correct, psychologists agree that witnesses are not always accurate in their recollections of events. We have already described how people often remember events in a way that fits with their expectations or schema for a situation. In addition, evidence shows that memories may be distorted after an event has occurred. After experiencing or seeing a crime, an eyewitness is exposed to a great deal of further information related to the crime. The witness may be interrogated by police, by attorneys, and by friends. He or she may also read information related to the case. Such information, coming weeks or months after the crime, can cause witnesses to reconstruct their memory of the crime and change what they say on the witness stand.

American psychologist Elizabeth Loftus has conducted many experiments that demonstrate how eyewitnesses can reconstruct their memories based on misleading information. In one study, subjects watched a videotape of an automobile accident involving two cars. Later they were given a questionnaire about the incident, one item of which asked, ‘About how fast were the cars going when they hit each other?’ For some groups of subjects, however, the verb hit was replaced by smashed, collided, bumped, or contacted. Although all subjects viewed the same videotape, their speed estimates differed considerably as a function of how the question was asked. The average speed estimate was 32 mph when the verb was contacted, 34 mph when it was hit, 38 mph when it was bumped, 39 mph when it was collided, and 41 mph when it was smashed. In a follow-up study, subjects were asked a week later whether there was any broken glass at the accident scene. In reality, the film showed no broken glass. Those questioned with the word smashed were more than twice as likely to ‘remember’ broken glass as those asked the question with hit. The information coming in after the original event was integrated with that event, causing it to be remembered in a different way.

The problem of determining whether memories are accurate is even more difficult when children are the witnesses. Research shows that in some situations children are more prone to memory distortions than are young adults. In addition, older adults (over 70 years of age) often show a greater tendency to memory distortion than do younger adults.

Even though psychologists have shown that memories can be distorted and that people can remember things that never occurred, our memories are certainly not totally faulty. Usually memory does capture the gist of events that have occurred to us, even if details may be readily distorted.

Can people recover memories of childhood experiences in adulthood, ones that they had never thought about since childhood? Can a powerful retrieval cue suddenly trigger a memory for some long-lost event? Although these questions are interesting, scientific evidence does not yet exist to answer them convincingly. Of course, people often do remember childhood experiences quite clearly, but these memories are usually of significant events that have been repeatedly retrieved over the years. The questions above, on the other hand, pertain to unique events that have not been repeatedly retrieved. Can people remember something when they are 40 years old that happened to them when they were 10 years old - something that they have never thought about during the intervening 30 years?

Such questions take on renewed relevance in what is called the recovered memory controversy. Although the term recovered memory could be applied to retrieval of any memory from the distant past, it is normally used to refer to a particular type of case in contemporary psychology: the long-delayed recovery of memories of sexual abuse in childhood. In a typical case, a person - often, but not always, undergoing psychotherapy - claims to recover a memory of some horrific childhood event. The prototypical case involves an adult woman recovering a memory of being sexually abused by a male figure from her childhood, such as being raped by a father, uncle, or teacher. Sometimes the memory is recovered suddenly, but often the recovery is gradual, occurring over days and weeks. After recovering the memory, the person may confront and accuse the individual deemed responsible, or even take the person to court. The accused person almost always vehemently denies the allegation and claims the events never took place.

A huge debate swirls over the accuracy of recovered memories. Proponents of their accuracy believe in the theory of repression, which is discussed in a subsequent section of this article. According to this theory, memories for terrible events (especially of a sexual nature) can be repressed, or banished to an unconscious state. The memories may lie dormant for years, but with great effort and appropriate cues, they can be retrieved with relative accuracy. Critics point out that there is little evidence supporting the concept of repression, aside from some reports on individual cases. The critics believe that the processes that give rise to false memories - suggestion and imagination - may better explain the phenomenon of recovered memories.

Without corroborating evidence, there is no way to check the accuracy of recovered memories. Thus, even though people may sincerely believe they have recovered a memory of an event from their distant past, the event usually remains a matter of belief, not of fact. Because psychologists know so little about recovery of distant memories, even of normal experiences, the debate over recovered memories is not likely to be resolved soon.

Forgetting is defined as the loss of information over time. Under most conditions, people recall information better soon after learning it than after a long delay; as time passes, they forget some of the information. We have all failed to remember some bit of information when we need it, so we often see forgetting as a bother. However, forgetting can also be useful because we need to continually update our memories. When we move and receive a new telephone number, we need to forget the old one and learn the new one. If you park your car every day on a large lot, you need to remember where you parked it today and not yesterday or the day before. Thus, forgetting can have an adaptive function.

The subject of forgetting is one of the oldest topics in experimental psychology. German psychologist Hermann Ebbinghaus initiated the scientific study of human memory in experiments that he began in 1879 and published in 1885 in his book On Memory. Ebbinghaus developed an ingenious way to measure forgetting. In order to avoid the influence of familiar material, he created dozens of lists of nonsense syllables, which consisted of pronounceable but meaningless three-letter combinations such as XAK or CUV. He would learn a list by repeating the items in it over and over, until he could recite the list once without error. He would note how many trials or how long it took him to learn the list. He then tested his memory of the list after an interval ranging from 20 minutes to 31 days. He measured how much he had forgotten by the amount of time or the number of trials it took him to relearn the list. By conducting this experiment with many lists, Ebbinghaus found that the rate of forgetting was relatively consistent. Forgetting occurred relatively rapidly at first and then seemed to level off over time. Other psychologists have since confirmed that the general shape of the forgetting curve holds true for many different types of material. Some researchers have argued that with very well learned material, the curve eventually flattens out, showing no additional forgetting over time.
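
The passage above gives no formula, but the rapid-then-leveling shape of the forgetting curve is often summarized with a simple exponential-decay approximation; the expression below is an illustrative model only, where R is the proportion retained, t is the time since learning, and S is a constant reflecting how strongly the material was learned.

    \[ R(t) = e^{-t/S} \]

On this sketch, forgetting is steepest immediately after learning and slows thereafter, and larger values of S flatten the curve, which is consistent with the claim that very well learned material shows little additional forgetting over time.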

Ebbinghaus’s forgetting curve illustrated the loss of information from long-term memory. Researchers have also studied rate of forgetting for short-term or working memory. In one experiment, subjects heard an experimenter speak a three-letter combination (such as CYG or FTQ). The subjects’ task was to repeat back the three letters after a delay of 3, 6, 9, 12, 15, or 18 seconds. To prevent subjects from mentally rehearsing the letters during the delay, they were instructed to count backward by threes from a random three-digit number, such as 361, until signaled to recall the letters. As shown in the accompanying chart entitled ‘Duration of Working Memory,’ forgetting occurs very rapidly in this situation. Nevertheless, it follows the same general pattern as in long-term memory, with sharp forgetting at first and then a declining rate of forgetting. Psychologists have debated for many years whether short-term and long-term forgetting have similar or different explanations.

The oldest idea about forgetting is that it is simply caused by decay. That is, memory traces are formed in the brain when we learn information, and they gradually disintegrate over time. Although decay theory was accepted as a general explanation of forgetting for many years, most psychologists do not lend it credence today for several reasons. First, decay theory does not really provide an explanation of forgetting, but merely a description. That is, time by itself is not a causative agent; rather, processes operating over time cause effects. Consider a bicycle left out in the rain that has rusted. If someone asked why it rusted, he or she would not be satisfied with the answer of ‘time out in the rain.’ A more accurate explanation would refer to oxidation processes operating over time as the cause of the rusty bicycle. Likewise, memory decay merely describes the fact of forgetting, not the processes that cause it.

The second problem for decay theory is the phenomenon of reminiscence, the fact that sometimes memories actually recover over time. Experiments confirm an observation experienced by most people: One can forget some information at one point in time and yet be able to retrieve it perfectly well at a later point. This feat would be impossible if memories inevitably decayed further over time. A final reason that decay theory is no longer accepted is that researchers accumulated support for a different theory - that interference processes cause forgetting.

According to many psychologists, forgetting occurs because of interference from other information or activities over time. A now-classic experiment conducted in 1924 by two American psychologists, John Jenkins and Karl Dallenbach, provided the first evidence for the role of interference in forgetting. The experimenters enlisted two students to learn lists of nonsense syllables either late at night (just before going to bed) or the first thing in the morning (just after getting up). The researchers then tested the students’ memories of the syllables after one, two, four, or eight hours. If the students learned the material just before bed, they slept during the time between the study session and the test. If they learned the material just after waking, they were awake during the interval before testing. The researchers’ results are shown in the accompanying chart entitled, ‘Forgetting in Sleep and Waking.’ The students forgot significantly more while they were awake than while they were asleep. Even when wakened from a sound sleep, they remembered the syllables better than when they returned to the lab for testing during the day. If decay of memories occurred automatically with the passage of time, the rate of forgetting should have been the same during sleep and waking. What seemed to cause forgetting was not time itself, but interference from activities and events occurring over time.

There are two types of interference. Proactive interference occurs when prior learning or experience interferes with our ability to recall newer information. For example, suppose you studied Spanish in tenth grade and French in eleventh grade. If you then took a French vocabulary test much later, your earlier study of Spanish vocabulary might interfere with your ability to remember the correct French translations. Retroactive interference occurs when new information interferes with our ability to recall earlier information or experiences. For example, try to remember what you had for lunch five days ago. The lunches you have had for the intervening four days probably interfere with your ability to remember this event. Both proactive and retroactive interference can have devastating effects on remembering.

Another possible cause of forgetting resides in the concept of repression, which refers to forgetting an unpleasant event or piece of information due to its threatening quality. The idea of repression was introduced in the late 19th century by Austrian physician Sigmund Freud, the founder of psychoanalysis. According to Freudian theory, people banish unpleasant events into their unconscious mind. However, repressed memories may continue to unconsciously influence people’s attitudes and behaviors and may result in unpleasant side effects, such as unusual physical symptoms and slips of speech. A simple example of repression might be forgetting a dentist appointment or some other unpleasant daily activity. Some theorists believe that it is possible to forget entire episodes of the past - such as being sexually abused as a child - due to repression. The concept of repression is complicated and difficult to study scientifically. Most evidence exists in the form of case studies that are usually open to multiple interpretations. For this reason, many memory researchers are skeptical of repression as an explanation of forgetting, although this verdict is by no means unanimous.

One of the most exciting topics of scientific investigation lies in cognitive neuroscience: How do physical processes in the brain give rise to our psychological experiences? In particular, a great deal of research is trying to uncover the biological basis of learning and memory. How does the brain code experience so that it can be later remembered? Where do memory processes occur in the brain?

In the early and mid-1900s, psychologists engaged in the ‘search for the engram.’ They used the term engram to refer to the physical change in the nervous system that occurs as a result of experience. (Today most psychologists use the term memory trace to describe the same thing.) The researchers hoped to find some particular location in the brain where memories were stored. This early work, conducted mostly with animals, failed to find a specific locus of memory in the brain. For example, American psychologist Karl Lashley trained rats to solve a maze, then surgically removed various parts of the rats’ brains. No matter what part of the brain he removed, the rats always retained at least some ability to solve the maze. From such research, psychologists concluded that memory is distributed across the brain, not localized in one place.

Modern research confirms the hypothesis that memories are not localized in one place in the brain, but rather involve interacting circuits operating across the brain. Many of the neural regions used in perceiving and attending to information seem also to be involved in the encoding and subsequent retrieval of information. Thus, although different brain regions perform different memory-related processes, the memories themselves do not appear to reside in any particular place.

The hippocampus is thought to be one of the most important brain structures involved in memory. The case of the patient H.M. (only his initials were used to preserve his anonymity), one of the most famous case studies in neuropsychology, strikingly demonstrates the importance of the hippocampus. In 1953, as a 27-year-old man, H.M. underwent brain surgery to control severe epileptic seizures. The surgeons removed his medial temporal lobes, which included most of the hippocampus, the amygdala, and surrounding structures. Although the operation successfully controlled H.M.’s seizures, it had an altogether unexpected and devastating side effect: H.M. was unable to form new long-term memories that he could later consciously retrieve. That is, he could not remember anything that happened to him after the surgery. His memory of events prior to the surgery was mostly intact, and his reasoning and thinking skills remained strong. But he could not retain memories of new people he met or of new experiences for more than a few minutes. Researchers concluded that the hippocampus and its surrounding structures in the medial temporal lobe play a critical role in the encoding of episodic memories, especially in binding elements of memories together to locate the memories in particular times and places.

Further evidence for the importance of the hippocampus and other regions of the brain in human memory has been provided by advanced brain imaging techniques, such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). Brain imaging methods allow researchers to see the activity of the living human brain on a computer screen as a person engages in different types of cognitive tasks, such as reading, solving math problems, or memorizing a list of words. These scanning methods take advantage of the fact that when a brain region becomes active, the rate at which neurons (brain cells) fire increases within this region. Increased neuronal firing in a region causes an increase in blood flow to that region, which the scanners can measure. Therefore, if a person is encoding new information into memory and the hippocampus is active during encoding, we would expect to see increased blood flow to the hippocampus. This is exactly the pattern observed in most studies.

Neuroimaging techniques have revealed other brain regions involved in memory. The frontal lobes play an important role in encoding and retrieving memories. For example, certain areas of the left frontal lobe seem especially active during encoding of memories, whereas those in the right frontal lobe are more active during retrieval. An area in the right anterior prefrontal cortex becomes active when a person is trying to retrieve a previously experienced episode. Some evidence indicates that this region may be even more active when the retrieval attempt is successful - that is, when the person not only attempts to remember but is able to remember some previous occurrence.

The study of the biochemistry of memory is another exciting scientific enterprise, but one that can only be touched upon here. Scientists estimate that an adult human brain contains about 100 billion neurons. Each of these is connected to hundreds or thousands of other neurons, forming trillions of neural connections. Neurons communicate by chemical messengers called neurotransmitters. An electrical signal travels along the neuron, triggering the release of neurotransmitters at the synapse, the small gap between neurons. The neurotransmitters travel across the synapse and act on the next neuron by binding with protein molecules called receptors. Most scientists believe that memories are somehow stored among the brain’s trillions of synapses, rather than in the neurons themselves.

Scientists who study the biochemistry of learning and memory often focus on the marine snail Aplysia because its simple nervous system allows them to study the effects of various stimuli on specific synapses. A change in the snail’s behavior due to learning can be correlated with a change at the level of the synapse. One exciting scientific frontier is discovering the changes in neurotransmitters that occur at the level of the synapse.

Other researchers have implicated glucose (a sugar) and insulin (a hormone secreted by the pancreas) as important to learning and memory. Humans and other animals given these substances show an improved capacity to learn and remember. Typically, when animals or humans ingest glucose, the pancreas responds by increasing insulin production, so it is difficult to determine which substance contributes to improved performance. Some studies in humans that have systematically varied the amount of glucose and insulin in the blood have shown that insulin may be the more important of the two substances for learning.

Scientists also have examined the influence of genes on learning and memory. In one study, scientists bred strains of mice with extra copies of a gene involved in building the N-methyl-D-aspartate (NMDA) receptor, a protein that responds to certain neurotransmitters. The genetically altered mice outperformed normal mice on a variety of tests of learning and memory. In addition, other studies have found that chemically blocking NMDA receptors impairs learning in laboratory rats. Future discoveries from genetic and biochemical studies may lead to treatments for memory deficits from Alzheimer’s disease and other conditions that affect memory.

Amnesia means loss of memory. There are many different types of amnesias, but they fall into two major classes according to their cause: functional amnesia and organic amnesia. Functional amnesia refers to memory disorders that seem to result from psychological trauma, not an injury to the brain. Organic amnesia involves memory loss caused by specific malfunctions in the brain. Another type of amnesia is infantile amnesia, which refers to the fact that most people lack specific memories of the first few years of their life.

Severe psychological trauma can sometimes cause functional amnesia. People with functional amnesia seem to have nothing physically wrong with their brain, even though the traumatic event presumably affects their brain in some way. In dissociative amnesia (sometimes called limited amnesia), a person loses memory of some important past experiences. For example, a person victimized by a crime may lose his or her memory for the event. Soldiers returning from battle sometimes experience similar symptoms.

Another type of functional amnesia is dissociative fugue, also referred to as functional retrograde amnesia. People with this disorder have much more extensive forgetting that may obscure their whole past. They commonly forget their personal identity and personal memories, and they often unexpectedly wander away from home. Typically the fugue state ends by itself within a few days or weeks. Often, after recovery the individual fails to remember anything that occurred during the fugue state.

Dissociative identity disorder, also called multiple personality disorder, is a type of amnesia in which a person appears to have two or more distinct personal identities. These identities alternate in their control of the individual’s conscious experiences, thoughts, and actions. In many cases, the person’s primary identity cannot recall what happened while the individual was controlled by another identity.

Although functional amnesias are a recurrent theme of television shows and movies, relatively few well-documented cases exist in the scientific literature. Most experts believe that these conditions do exist, but that they are exceedingly rare.

Organic amnesia refers to any traumatic forgetting that is produced by specific brain damage. Typically, these amnesias occur as part of brain disorders caused by tumors, strokes, head trauma, or degenerative diseases, such as Alzheimer’s disease. However, certain psychoactive drugs (drugs affecting mood or behavior) can cause amnesia, as can certain dietary deficiencies and electroconvulsive therapy for depression. Organic amnesias may be temporary or permanent. Amnesia resulting from a mild concussion or from electroconvulsive therapy is usually temporary, whereas severe head injuries may lead to permanent memory loss.

The case of the patient H.M., described earlier, is an example of organic amnesia. In 1953 brain surgery for epilepsy left H.M. with dramatic anterograde amnesia, meaning he was unable to remember new information and events that occurred after his operation. Somewhat surprisingly, this severe impairment in the ability to learn new information was accompanied by no detectable impairment in his general intellectual ability or in his ability to use or understand language. H.M. also showed some retrograde amnesia, or inability to remember events before the onset of the surgery. For example, he could not recall that his favorite uncle had died three years earlier. Still, most of his general knowledge was intact, and he performed well on a test of famous faces (of people who had become famous prior to 1950).

Studies of H.M. and other amnesic patients have provided surprising insights into the workings of memory. One remarkable finding is that even though H.M. had severe anterograde amnesia, he (and other amnesic patients like him) still performed normally on tests of implicit memory. For example, H.M. could learn new motor skills, even though he would have no conscious memory of doing so. Even in dense, or severe, amnesias, not all memory abilities are impaired.

Korsakoff’s syndrome, also called Korsakoff’s psychosis, is a disorder that produces severe and often permanent amnesia. In this condition, years of chronic alcoholism and thiamine (vitamin B1) deficiency cause brain damage, particularly to the thalamus, which helps process sensory information, and to the mammillary bodies, which lie beneath the thalamus. Some patients also have damage to the cortex and cerebellum. Korsakoff’s patients show severe anterograde amnesia, or difficulty learning anything new. In addition, most suffer from retrograde amnesia ranging from mild to severe and typically cannot remember recent experiences. The condition is also associated with other intellectual deficits, such as confusion and disorientation. Korsakoff’s syndrome is named after Sergei Korsakov (Korsakoff), the Russian neurologist who first described it in the late 19th century.

Amnesia also occurs in Alzheimer’s disease, a condition in which the neurons in the brain gradually degenerate, hindering brain function. Damage to the hippocampus and frontal lobes impairs memory. Many other types of organic amnesia exist. For example, in large doses, most depressant drugs can cause acute loss of memory. With severe alcohol or marijuana intoxication, people often forget events that occurred while under the influence of the drug.

Infantile amnesia, also called childhood amnesia, refers to the fact that people can remember very little about the first few years of their life. Surveys have shown that most people report their earliest memory to be between their third and fourth birthdays. Furthermore, people’s memories of childhood generally do not become a continuous narrative until after about seven years of age.

Psychologists do not know what causes infantile amnesia, but they have several theories. One view is that brain structures critical to memory are too immature during the first few years of life to record long-term memories. Another theory is that children cannot remember events that occurred before they mastered language. In this view, language provides a system of symbolic representation by which people develop narrative stories of their lives. Such a narrative framework may be necessary for people to remember autobiographical events in a coherent context.

The phenomenon of infantile amnesia does not mean that infants and young children cannot learn. After all, babies learn to stand, walk, and talk. Scientific evidence indicates that even young infants can learn and retain information well. For example, one experiment found that three-month-old babies could learn that kicking their legs moves a mobile over their crib. Up to a month later, the babies could still demonstrate their knowledge that kicking moved the mobile. Infants and toddlers seem to retain implicit memories of their experiences.

All people differ somewhat in their ability to remember information. However, some individuals have remarkable memories and perform feats that normal individuals could never hope to achieve. These individuals, sometimes called mnemonists (pronounced ‘nih-MAHN-ists’), are considered to have exceptional memory.

Psychologists have described several cases of exceptional memory. Aleksandr R. Luria, a Russian neuropsychologist, described one of the most famous cases in his book The Mind of a Mnemonist (1968). Luria recounted the abilities of S. V. Shereshevskii, a man he called S. Luria studied Shereshevskii over many years and watched him perform remarkable memory feats. However, until Luria began studying these feats, Shereshevskii was unaware of how extraordinary his talents were. For example, Shereshevskii could study a blackboard full of nonsense material and then reproduce it at will years later. He could also memorize long lists of nonsense syllables, extremely complex scientific formulas, and numbers more than 100 digits long. In each case, Shereshevskii could recall the information flawlessly, even if asked to produce it in reverse order. Luria reported one instance in which Shereshevskii was able to recall a 50-word list when the test was given without warning 15 years after presentation of the list! He recalled all 50 words without a single error.

The primary technique Shereshevskii used was mental imagery. He generated very rich mental images to represent information. In addition, part of his ability might have been due to his astonishing capacity for synesthesia. Synesthesia occurs when information coming into one sensory modality, such as a sound, evokes a sensation in another sensory modality, such as a sight, taste, smell, or touch. All people have synesthesia to a slight degree. For example, certain colors may ‘feel’ warm or cool. However, Shereshevskii’s synesthesia was extremely vivid and unusual. For example, Shereshevskii once told a colleague of Luria’s, ‘What a crumbly yellow voice you have.’ He also associated numbers with shapes, colors, and even people. Synesthetic reactions probably improved Shereshevskii’s memory because he could encode events in a very elaborate way. But they often caused him confusion, too. For example, reading was difficult because each word in a sentence evoked its own mental image, interfering with comprehension of the sentence as a whole.

A second case of exceptional memory illustrates the talent some people display for remembering certain types of material. In a series of tests in the 1980s and 1990s, Rajan Srinivasan Mahadevan (known as Rajan) demonstrated a remarkable talent for remembering numbers, but for other types of material, his memory ability tested in the normal range. Rajan memorized the mathematical ratio pi, which begins 3.14159 and continues indefinitely with no known pattern, to nearly 32,000 decimal places! If given a string of digits, within a few seconds he could accurately say whether or not the string appeared in the first 32,000 digits of pi. He could also rapidly identify any of the first 10,000 digits of pi when given a specific decimal place. For example, he could tell what digit was in decimal place 6,243 in about 12 seconds, and he rarely made errors on this task. Rajan demonstrated great skill at learning new numerical information.

Shereshevskii and Rajan scored in the normal range on standard intelligence tests. Another group of people, those with savant syndrome (formerly called idiot savants), usually score low on intelligence tests but have one ‘island’ of outstanding cognitive ability. Many children and adults who are deemed savants have extraordinary memory. Psychologists have studied many cases of savant syndrome, but its nature remains a mystery.

Cases of exceptional memory stand as remarkable puzzles whose implications for normal memory functioning are unclear. In some cases the remarkable talents exemplify techniques (such as mental imagery) that are known to magnify normal memory ability. These striking cases have not been integrated well into the scientific study of memory, but generally stand apart as curiosities that cannot yet be explained in any meaningful way.

Memory improvement techniques are called mnemonic devices or simply mnemonics. Mnemonics have been used since the time of the ancient Greeks and Romans. In ancient times, before writing was easily accomplished, educated people were trained in the art of memorizing. For example, orators had to remember points they wished to make in long speeches. Many of the techniques developed thousands of years ago are still used today. Modern research has allowed psychologists to better understand and refine the techniques.

All mnemonic devices depend upon two basic principles discussed earlier in this article: (1) recoding of information into forms that are easy to remember, and (2) supplying oneself with excellent retrieval cues to recall the information when it is needed. For example, many schoolchildren learn the colors of the visible spectrum by learning the imaginary name ROY G. BIV, which stands for red, orange, yellow, green, blue, indigo, violet. Similarly, to remember the names of the Great Lakes, remember HOMES (Huron, Ontario, Michigan, Erie, and Superior). Both of these examples illustrate the principle of recoding. Several bits of information are repackaged into an acronym that is easier to remember. The letters of the acronym serve as retrieval cues that enable recall of the desired information.
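The recoding-and-cue idea behind acronym mnemonics can be illustrated with a small code sketch. The snippet below is only an illustration, and the function names and the assumption that the full names are available for a first-letter lookup are ours, not part of any mnemonic system: the acronym is the compact recoded form, and its letters serve as retrieval cues.

```python
# A minimal sketch of recoding plus retrieval cues, in the spirit of HOMES.
# The helper names and the lookup step are illustrative assumptions.

def make_acronym(items):
    """Recode a list of items into a compact acronym (the memorable form)."""
    return "".join(item[0].upper() for item in items)

def recall(acronym, known_items):
    """Use each letter of the acronym as a retrieval cue to pull back the full
    item from a pool of candidates (here, a simple first-letter match)."""
    by_initial = {item[0].upper(): item for item in known_items}
    return [by_initial[letter] for letter in acronym]

great_lakes = ["Huron", "Ontario", "Michigan", "Erie", "Superior"]
cue = make_acronym(great_lakes)      # 'HOMES' -- the recoded, easy-to-store form
print(cue)
print(recall(cue, great_lakes))      # each letter cues recall of a full name
```

In practice, of course, the lookup is performed by human memory rather than by a dictionary: the letter H cues Huron, O cues Ontario, and so on.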

Psychologists and others have devised much more elaborate recoding and decoding schemes. Three of the most common mnemonic techniques are the method of loci, the pegword method, and the PQ4R method. Research has shown that mnemonic devices such as these permit greater recall than do strategies that people usually use, such as ordinary rehearsal (repeating information to oneself).

Neurotransmitters are chemicals made by neurons, or nerve cells. Neurons send out neurotransmitters as chemical signals to activate or inhibit the function of neighboring cells.

Within the central nervous system, which consists of the brain and the spinal cord, neurotransmitters pass from neuron to neuron. In the peripheral nervous system, which is made up of the nerves that run from the central nervous system to the rest of the body, the chemical signals pass between a neuron and an adjacent muscle or gland cell.

Nine chemical compounds - belonging to three chemical families - are widely recognized as neurotransmitters. In addition, certain other body chemicals, including adenosine, histamine, enkephalins, endorphins, and epinephrine, have neurotransmitter-like properties. Experts believe that there are many more neurotransmitters as yet undiscovered.

The first of the three families is composed of amines, a group of compounds containing molecules of carbon, hydrogen, and nitrogen. Among the amine neurotransmitters are acetylcholine, norepinephrine, dopamine, and serotonin. Acetylcholine is the most widely used neurotransmitter in the body, and neurons that leave the central nervous system (for example, those running to skeletal muscle) use acetylcholine as their neurotransmitter; neurons that run to the heart, blood vessels, and other organs may use acetylcholine or norepinephrine. Dopamine is involved in the movement of muscles, and it controls the secretion of the pituitary hormone prolactin, which triggers milk production in nursing mothers.

The second neurotransmitter family is composed of amino acids, organic compounds containing both an amino group (NH2) and a carboxylic acid group (COOH). Amino acids that serve as neurotransmitters include glycine, glutamic and aspartic acids, and gamma-amino butyric acid (GABA). Glutamic acid and GABA are the most abundant neurotransmitters within the central nervous system, and especially in the cerebral cortex, which is largely responsible for such higher brain functions as thought and interpreting sensations.

The third neurotransmitter family is composed of peptides, which are compounds that contain at least 2, and sometimes as many as 100 amino acids. Peptide neurotransmitters are poorly understood, but scientists know that the peptide neurotransmitter called substance P influences the sensation of pain.

In general, each neuron uses only a single compound as its neurotransmitter. However, some neurons outside the central nervous system are able to release both an amine and a peptide neurotransmitter.

Neurotransmitters are manufactured from precursor compounds like amino acids, glucose, and the dietary amine called choline. Neurons modify the structure of these precursor compounds in a series of reactions with enzymes. Neurotransmitters that come from amino acids include serotonin, which is derived from tryptophan; dopamine and norepinephrine, which are derived from tyrosine; and glycine, which is derived from threonine. Among the neurotransmitters made from glucose are glutamate, aspartate, and GABA. Choline serves as the precursor for acetylcholine.

Neurotransmitters are released into a microscopic gap, called a synapse, that separates the transmitting neuron from the cell receiving the chemical signal. The cell that generates the signal is called the presynaptic cell, while the receiving cell is termed the postsynaptic cell.

After their release into the synapse, neurotransmitters combine chemically with highly specific protein molecules, termed receptors, that are embedded in the surface membranes of the postsynaptic cell. When this combination occurs, the voltage, or electrical force, of the postsynaptic cell is either increased (excitation) or decreased (inhibition).

When a neuron is in its resting state, its voltage is about -70 millivolts. An excitatory neurotransmitter alters the membrane of the postsynaptic neuron, making it possible for ions (electrically charged atoms or molecules) to move back and forth across the neuron’s membrane. This flow of ions makes the neuron’s voltage rise toward zero. If enough excitatory receptors have been activated, the postsynaptic neuron responds by firing, generating a nerve impulse that causes its own neurotransmitter to be released into the next synapse. An inhibitory neurotransmitter causes different ions to pass back and forth across the postsynaptic neuron’s membrane, lowering the nerve cell’s voltage to -80 or -90 millivolts. The drop in voltage makes it less likely that the postsynaptic cell will fire.
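The voltages just described can be put into a toy calculation. The sketch below is a deliberate simplification - the firing threshold and the millivolts-per-input figures are assumptions for illustration, and real synapses change membrane conductances rather than adding fixed voltages - but it shows how excitatory inputs push the cell toward threshold while inhibitory inputs pull it away.

```python
# Toy arithmetic for the voltages described above: a resting potential near
# -70 millivolts, excitatory inputs that nudge the cell toward zero, and
# inhibitory inputs that pull it toward -80 or -90 millivolts. The threshold
# and per-input values are illustrative assumptions only.

RESTING_MV = -70.0
THRESHOLD_MV = -55.0  # assumed firing threshold

def membrane_potential(n_excitatory, n_inhibitory, epsp_mv=5.0, ipsp_mv=-4.0):
    """Crudely sum the voltage contributions of active excitatory and
    inhibitory inputs onto the resting potential."""
    return RESTING_MV + n_excitatory * epsp_mv + n_inhibitory * ipsp_mv

def fires(potential_mv):
    """The cell fires a nerve impulse once the potential reaches threshold."""
    return potential_mv >= THRESHOLD_MV

for exc, inh in [(2, 0), (4, 0), (4, 3)]:
    v = membrane_potential(exc, inh)
    print(f"{exc} excitatory, {inh} inhibitory -> {v:.0f} mV, fires: {fires(v)}")
```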

If the postsynaptic cell is a muscle cell rather than a neuron, an excitatory neurotransmitter will cause the muscle to contract. If the postsynaptic cell is a gland cell, an excitatory neurotransmitter will cause the cell to secrete its contents.

While most neurotransmitters interact with their receptors to create new electrical nerve impulses that energize or inhibit the adjoining cell, some neurotransmitter interactions do not generate or suppress nerve impulses. Instead, they interact with a second type of receptor that changes the internal chemistry of the postsynaptic cell by either causing or blocking the formation of chemicals called second messenger molecules. These second messengers regulate the postsynaptic cell’s biochemical processes and enable it to conduct the maintenance necessary to continue synthesizing neurotransmitters and conducting nerve impulses. Examples of second messengers, which are formed and entirely contained within the postsynaptic cell, include cyclic adenosine monophosphate, diacylglycerol, and inositol phosphates.

Once neurotransmitters have been secreted into synapses and have passed on their chemical signals, the presynaptic neuron clears the synapse of neurotransmitter molecules. For example, acetylcholine is broken down by the enzyme acetylcholinesterase into choline and acetate. Neurotransmitters like dopamine, serotonin, and GABA are removed by a physical process called reuptake. In reuptake, a protein in the presynaptic membrane acts as a sort of sponge, causing the neurotransmitters to reenter the presynaptic neuron, where they can be broken down by enzymes or repackaged for reuse.

Neurotransmitters are known to be involved in a number of disorders, including Alzheimer’s disease. Victims of Alzheimer’s disease suffer from loss of intellectual capacity, disintegration of personality, mental confusion, hallucinations, and aggressive - even violent - behavior. These symptoms are the result of progressive degeneration in many types of neurons in the brain. Forgetfulness, one of the earliest symptoms of Alzheimer’s disease, is partly caused by the destruction of neurons that normally release the neurotransmitter acetylcholine. Medications that increase brain levels of acetylcholine have helped restore short-term memory and reduce mood swings in some Alzheimer’s patients.

Neurotransmitters also play a role in Parkinson disease, which slowly attacks the nervous system, causing symptoms that worsen over time. Fatigue, mental confusion, a masklike facial expression, stooping posture, shuffling gait, and problems with eating and speaking are among the difficulties suffered by Parkinson victims. These symptoms have been partly linked to the deterioration and eventual death of neurons that run from the base of the brain to the basal ganglia, a collection of nerve cells that manufacture the neurotransmitter dopamine. The reasons why such neurons die are yet to be understood, but the related symptoms can be alleviated. L-dopa, or levodopa, widely used to treat Parkinson disease, acts as a supplementary precursor for dopamine. It causes the surviving neurons in the basal ganglia to increase their production of dopamine, thereby compensating to some extent for the disabled neurons.

Many other effective drugs have been shown to act by influencing neurotransmitter behavior. Some drugs work by interfering with the interactions between neurotransmitters and intestinal receptors. For example, belladonna decreases intestinal cramps in such disorders as irritable bowel syndrome by blocking acetylcholine from combining with receptors. This process reduces nerve signals to the bowel wall, which prevents painful spasms.

Other drugs block the reuptake process. One well-known example is the drug fluoxetine (Prozac), which blocks the reuptake of serotonin. Serotonin then remains in the synapse for a longer time, and its ability to act as a signal is prolonged, which contributes to the relief of depression and the control of obsessive-compulsive behaviors.

Alzheimer’s Disease, a progressive brain disorder that causes a gradual and irreversible decline in memory, language skills, perception of time and space, and, eventually, the ability to care for oneself. First described by German psychiatrist Alois Alzheimer in 1906, Alzheimer’s disease was initially thought to be a rare condition affecting only young people, and was referred to as presenile dementia. Today late-onset Alzheimer’s disease is recognized as the most common cause of the loss of mental function in those aged 65 and over. Alzheimer’s in people in their 30s, 40s, and 50s, called early-onset Alzheimer’s disease, occurs much less frequently, accounting for less than 10 percent of the estimated 4 million Alzheimer’s cases in the United States.

Although Alzheimer’s disease is not a normal part of the aging process, the risk of developing the disease increases as people grow older. About 10 percent of the United States population over the age of 65 is affected by Alzheimer’s disease, and nearly 50 percent of those over age 85 may have the disease.

Alzheimer’s disease takes a devastating toll, not only on the patients, but also on those who love and care for them. Some patients experience immense fear and frustration as they struggle with once commonplace tasks and slowly lose their independence. Family, friends, and especially those who provide daily care suffer immeasurable pain and stress as they witness Alzheimer’s disease slowly take their loved one from them.

The onset of Alzheimer’s disease is usually very gradual. In the early stages, Alzheimer’s patients have relatively mild problems learning new information and remembering where they have left common objects, such as keys or a wallet. In time, they begin to have trouble recollecting recent events and finding the right words to express themselves. As the disease progresses, patients may have difficulty remembering what day or month it is, or finding their way around familiar surroundings. They may develop a tendency to wander off and then be unable to find their way back. Patients often become irritable or withdrawn as they struggle with fear and frustration when once commonplace tasks become unfamiliar and intimidating. Behavioral changes may become more pronounced as patients become paranoid or delusional and unable to engage in normal conversation.

Eventually Alzheimer’s patients become completely incapacitated and unable to take care of their most basic life functions, such as eating and using the bathroom. Alzheimer’s patients may live many years with the disease, usually dying from other disorders that may develop, such as pneumonia. Typically the time from initial diagnosis until death is seven to ten years, but this is quite variable and can range from three to twenty years, depending on the age of onset, other medical conditions present, and the care patients receive.

The brains of patients with Alzheimer’s have distinctive formations - abnormally shaped proteins called tangles and plaques - that are recognized as the hallmark of the disease. Not all brain regions show these characteristic formations. The areas most prominently affected are those related to memory.

Tangles are long, slender tendrils found inside nerve cells, or neurons. Scientists have learned that when a protein called tau becomes altered, it may cause the characteristic tangles in the brain of an Alzheimer’s patient. In healthy brains, tau provides structural support for neurons, but in Alzheimer’s patients this structural support collapses.

Plaques, or clumps of fibers, form outside the neurons in the adjacent brain tissue. Scientists found that a type of protein, called amyloid precursor protein, forms toxic plaques when it is cut in two places. Researchers have isolated the enzyme beta-secretase, which is believed to make one of the cuts in the amyloid precursor protein. Researchers also identified another enzyme, called gamma secretase, that makes the second cut in the amyloid precursor protein. These two enzymes snip the amyloid precursor protein into fragments that then accumulate to form plaques that are toxic to neurons.

Scientists have found that tangles and plaques cause neurons in the brains of Alzheimer’s patients to shrink and eventually die, first in the memory and language centers and finally throughout the brain. This widespread neuron degeneration leaves gaps in the brain’s messaging network that may interfere with communication between cells, causing some of the symptoms of Alzheimer’s disease.

Alzheimer’s patients have lower levels of neurotransmitters, chemicals that carry complex messages back and forth between the nerve cells. For instance, Alzheimer’s disease seems to decrease the level of the neurotransmitter acetylcholine, which is known to influence memory. A deficiency in other neurotransmitters, including somatostatin and corticotropin-releasing factor, and, particularly in younger patients, serotonin and norepinephrine, also interferes with normal communication between brain cells.

The causes of Alzheimer’s disease remain a mystery, but researchers have found that particular groups of people have risk factors that make them more likely to develop the disease than the general population. For example, people with a family history of Alzheimer’s are more likely to develop Alzheimer’s disease.

Some of the most promising Alzheimer’s research is being conducted in the field of genetics to learn the role a family history of the disease has in its development. Scientists have learned that people who are carriers of a specific version of the apolipoprotein E gene (apoE gene), found on chromosome 19, are several times more likely to develop Alzheimer’s than carriers of other versions of the apoE gene. The most common version of this gene in the general population is apoE3. Nearly half of all late-onset Alzheimer’s patients have the less common apoE4 version, however, and research has shown that this gene plays a role in Alzheimer’s disease. Scientists have also found evidence that variations in one or more genes located on chromosomes 1, 10, and 14 may increase a person’s risk for Alzheimer’s disease. Scientists have identified the gene variations on chromosomes 1 and 14 and learned that these genes produce mutations in proteins called presenilins. These mutated proteins apparently trigger the activity of the enzyme gamma secretase, which splices the amyloid precursor protein.

Researchers have made similar strides in the investigation of early-onset Alzheimer’s disease. A series of genetic mutations in patients with early-onset Alzheimer’s has been linked to the production of amyloid precursor protein, the protein in plaques that may be implicated in the destruction of neurons. One mutation is particularly interesting to geneticists because it occurs on a gene involved in the genetic disorder Down syndrome. People with Down syndrome usually develop plaques and tangles in their brains as they get older, and researchers believe that learning more about the similarities between Down syndrome and Alzheimer’s may further our understanding of the genetic elements of the disease.

Some studies suggest that one or more factors other than heredity may determine whether people develop the disease. One study published in February 2001 compared residents of Ibadan, Nigeria, who eat a mostly low-fat vegetarian diet, with African Americans living in Indianapolis, Indiana, whose diet included a variety of high-fat foods. The Nigerians were less likely to develop Alzheimer’s disease compared to their U.S. counterparts. Some researchers suspect that health problems such as high blood pressure, atherosclerosis (arteries clogged by fatty deposits), high cholesterol levels, or other cardiovascular problems may play a role in the development of the disease.

Other studies have suggested that environmental agents may be a possible cause of Alzheimer’s disease; for example, one study suggested that high levels of aluminum in the brain may be a risk factor. Several scientists initiated research projects to further investigate this connection, but no conclusive evidence has been found linking aluminum with Alzheimer’s disease. Similarly, investigations into other potential environmental causes, such as zinc exposure, viral agents, and food-borne poisons, while initially promising, have generally turned up inconclusive results.

Some studies indicate that brain trauma can trigger a degenerative process that results in Alzheimer’s disease. In one study, an analysis of the medical records of veterans of World War II (1939-1945) linked serious head injury in early adulthood with Alzheimer’s disease in later life. The study also looked at other factors that could possibly influence the development of the disease among the veterans, such as the presence of the apoE gene, but no other factors were identified.

Alzheimer’s disease can be positively diagnosed only by examining brain tissue under a microscope to see the hallmark plaques and tangles, and this is possible only after a patient dies. As a result, physicians rely on a series of other techniques to diagnose probable Alzheimer’s disease in living patients. Diagnosis begins by ruling out other problems that cause memory loss, such as stroke, depression, alcoholism, and the use of certain prescription drugs. The patient undergoes a thorough examination, including specialized brain scans, to eliminate other disorders. The patient may be given a detailed evaluation called a neuropsychological examination, which is designed to evaluate a patient’s ability to perform specific mental tasks. This helps the physician determine whether the patient is showing the characteristic symptoms of Alzheimer’s disease - progressively worsening memory problems, language difficulties, and trouble with spatial direction and time. The physician also asks about the patient’s family medical history to learn about any past serious illnesses, which may give a hint about the patient’s current symptoms.

There is no known cure for Alzheimer’s disease, and treatment focuses on lessening symptoms and attempting to slow the course of the disease. Drugs that increase or improve the function of brain acetylcholine, the neurotransmitter that affects memory, have been approved by the United States Food and Drug Administration (FDA) for the treatment of Alzheimer’s disease. Called acetylcholinesterase inhibitors, these drugs have had modest but clearly positive effects on the symptoms of the disease. These drugs can benefit patients at all stages of illness, but they are particularly effective in the middle stage. This finding corresponds with new evidence that low acetylcholine levels in patients with Alzheimer’s disease may not be present in the earliest stage of the illness.

Evidence shows that there is inflammation in the brains of Alzheimer’s patients, which may be associated with the production of amyloid precursor protein. Studies are underway to find drugs that prevent this inflammation, to possibly slow or even halt the progress of the disease. Other promising approaches center on mechanisms that manipulate amyloid precursor protein production or accumulation. Drugs are in development that may block the activity of the enzymes that cut the amyloid precursor protein, halting amyloid production. Other studies in mice suggest that vaccinating animals with amyloid precursor protein can produce a reaction that clears amyloid precursor protein from the brain. Physicians have started vaccination studies in humans to determine if the same potentially beneficial effects can be obtained. There is still much to be learned, but as scientists better understand the genetic components of Alzheimer’s, the roles of the amyloid precursor protein and the tau protein in the disease, and the mechanisms of nerve cell degeneration, the possibility that a treatment will be developed is more likely.

Neurophysiology may be considered as the study of how nerve cells, or neurons, receive and transmit information. Two types of phenomena are involved in processing nerve signals: electrical and chemical. Electrical events propagate a signal within a neuron, and chemical processes transmit the signal from one neuron to another neuron or to a muscle cell.

A neuron is a long cell that has a thick central area containing the nucleus; it also has one long process called an axon and one or more short, bushy processes called dendrites. Dendrites receive impulses from other neurons. (The exceptions are sensory neurons, such as those that transmit information about temperature or touch, in which the signal is generated by specialized receptors in the skin.) These impulses are propagated electrically along the cell membrane to the end of the axon. At the tip of the axon the signal is chemically transmitted to an adjacent neuron or muscle cell.

Like all other cells, neurons contain charged ions: potassium and sodium (positively charged) and chloride (negatively charged). Neurons differ from other cells in that they are able to produce a nerve impulse. A neuron is polarized - that is, it has an overall negative charge inside the cell membrane. Inside the resting cell the concentration of potassium ions is high while the concentrations of sodium and chloride ions are low; outside the cell this distribution of ions is reversed. This charge differential represents stored electrical energy, sometimes referred to as membrane potential or resting potential. The negative charge inside the cell is maintained by two features. The first is the selective permeability of the cell membrane, which is more permeable to potassium than to sodium. The second feature is sodium pumps within the cell membrane that actively pump sodium out of the cell. When depolarization occurs, this charge differential across the membrane is reversed, and a nerve impulse is produced.

Depolarization is a rapid change in the permeability of the cell membrane. When sensory input or any other kind of stimulating current is received by the neuron, the membrane permeability is changed, allowing a sudden influx of sodium ions into the cell. This influx of positively charged sodium ions changes the overall charge within the cell from negative to positive; the resulting electrical event is called the action potential. The local change in ion concentration triggers similar reactions along the membrane, propagating the nerve impulse. After a brief period called the refractory period, during which the membrane returns to its resting potential, the neuron can repeat this process.
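A very rough caricature of this cycle - depolarization toward a threshold, an impulse, then a refractory period - can be written as a simple integrate-and-fire loop. Every constant below is an illustrative assumption, not a measured value.

```python
# Sketch of the cycle just described: stimulating current depolarizes the cell
# toward a threshold, a nerve impulse (spike) is produced, and a brief
# refractory period follows before the cell can fire again.

REST_MV, THRESHOLD_MV, SPIKE_MV = -70.0, -55.0, 30.0
REFRACTORY_STEPS = 3
LEAK = 0.1  # fraction of the distance back toward rest lost each step (assumed)

def simulate(input_mv_per_step, steps=25):
    v, refractory, trace = REST_MV, 0, []
    for _ in range(steps):
        if refractory > 0:
            refractory -= 1            # during the refractory period the cell
            v = REST_MV                # sits at rest and ignores its input
        else:
            v += input_mv_per_step     # stimulation depolarizes the membrane
            v += LEAK * (REST_MV - v)  # a leak pulls it back toward rest
            if v >= THRESHOLD_MV:      # threshold reached: an impulse fires
                trace.append(SPIKE_MV)
                v, refractory = REST_MV, REFRACTORY_STEPS
                continue
        trace.append(round(v, 1))
    return trace

print(simulate(4.0))  # spikes separated by refractory gaps
```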

Nerve impulses travel at different speeds, depending on the cellular composition of a neuron. Where speed of conduction is important, axons are insulated with a membranous substance called myelin. The insulation provided by myelin maintains the ionic charge over long distances. Nerve impulses are regenerated at specific points along the myelin sheath, called the nodes of Ranvier, where the myelin is interrupted. Examples of myelinated axons are those in sensory nerve fibers and nerves connected to skeletal muscles. In non-myelinated cells, the nerve impulse is propagated more diffusely.

When the electrical signal reaches the tip of an axon, it stimulates small presynaptic vesicles in the cell. These vesicles contain chemicals called neurotransmitters, which are released into the microscopic space between neurons (the synaptic cleft). The neurotransmitters attach to specialized receptors on the surface of the adjacent neuron. This stimulus causes the adjacent cell to depolarize and propagate an action potential of its own. The duration of a stimulus from a neurotransmitter is limited by the breakdown of the chemicals in the synaptic cleft and the reuptake by the neuron that produced them. Formerly, each neuron was thought to make only one transmitter, but recent studies have shown that some cells make two or more.

The signals conveying everything that human beings sense and think, and every motion they make, follow nerve pathways in the human body as waves of ions (atoms or groups of atoms that carry electric charges). The synapse is the junction at which these signals pass from one nerve cell to the next. Australian physiologist Sir John Eccles discovered many of the intricacies of this electrochemical signaling process, particularly the pivotal step in which a signal is conveyed from one nerve cell to another.

How does one nerve cell transmit the nerve impulse to another cell? Electron microscopy and other methods show that it does so by means of special extensions that deliver a squirt of transmitter substance onto the adjacent cell.

The human brain is the most highly organized form of matter known, and in complexity the brains of the other higher animals are not greatly inferior. For certain purposes it is expedient to regard the brain as being analogous to a machine. Even if it is so regarded, however, it is a machine of a totally different kind from those made by man. In trying to understand the workings of his own brain man meets his highest challenge. Nothing is given; there are no operating diagrams, no maker's instructions.

The first step in trying to understand the brain is to examine its structure in order to discover the components from which it is built and how they are related to one another. After that one can attempt to understand the mode of operation of the simplest components. These two modes of investigation - the morphological and the physiological - have now become complementary. In studying the nervous system with today's sensitive electrical devices, however, it is all too easy to find physiological events that cannot be correlated with any known anatomical structure. Conversely, the electron microscope reveals many structural details whose physiological significance is obscure or unknown.

At the close of the past century the Spanish anatomist Santiago Ramón y Cajal showed how all parts of the nervous system are built up of individual nerve cells of many different shapes and sizes. Like other cells, each nerve cell has a nucleus and a surrounding cytoplasm. Its outer surface consists of numerous fine branches - the dendrites - that receive nerve impulses from other nerve cells, and one relatively long branch - the axon - that transmits nerve impulses. Near its end the axon divides into branches that terminate at the dendrites or bodies of other nerve cells. The axon can be as short as a fraction of a millimeter or as long as a meter, depending on its place and function. It has many of the properties of an electric cable and is uniquely specialized to conduct the brief electrical waves called nerve impulses. In very thin axons these impulses travel at less than one meter per second; in others, for example in the large axons of the nerve cells that activate muscles, they travel as fast as 100 meters per second.

The electrical impulse that travels along the axon ceases abruptly when it comes to the point where the axon's terminal fibers make contact with another nerve cell. These junction points were given the name ‘synapses’ by Sir Charles Sherrington, who laid the foundations of what is sometimes called synaptology. If the nerve impulse is to continue beyond the synapse, it must be regenerated afresh on the other side. As recently as 15 years ago some physiologists held that transmission at the synapse was predominantly, if not exclusively, an electrical phenomenon. Now, however, there is abundant evidence that transmission is effectuated by the release of specific chemical substances that trigger a regeneration of the impulse. In fact, the first strong evidence showing that a transmitter substance acts across the synapse was provided more than 40 years ago by Sir Henry Dale and Otto Loewi.

It has been estimated that the human central nervous system, which of course includes the spinal cord as well as the brain itself, consists of about 10 billion (10^10) nerve cells. With rare exceptions each nerve cell receives information directly in the form of impulses from many other nerve cells - often hundreds - and transmits information to a like number. Depending on its threshold of response, a given nerve cell may fire an impulse when stimulated by only a few incoming fibers or it may not fire until stimulated by many incoming fibers. It has long been known that this threshold can be raised or lowered by various factors. Moreover, it was conjectured some 60 years ago that some of the incoming fibers must inhibit the firing of the receiving cell rather than excite it. The conjecture was subsequently confirmed, and the mechanism of the inhibitory effect has now been clarified. This mechanism and its equally fundamental counterpart - nerve-cell excitation - are taken up in what follows.

At the level of anatomy there are some clues to indicate how the fine axon terminals impinging on a nerve cell can make the cell regenerate a nerve impulse of its own . . . a nerve cell and its dendrites are covered by fine branches of nerve fibers that terminate in knoblike structures. These structures are the synapses.

The electron microscope has revealed structural details of synapses that fit in nicely with the view that a chemical transmitter is involved in nerve transmission. Enclosed in the synaptic knob are many vesicles, or tiny sacs, which appear to contain the transmitter substances that induce synaptic transmission. Between the synaptic knob and the synaptic membrane of the adjoining nerve cell is a remarkably uniform space of about 20 millimicrons that is termed the synaptic cleft. Many of the synaptic vesicles are concentrated adjacent to this cleft; it seems plausible that the transmitter substance is discharged from the nearest vesicles into the cleft, where it can act on the adjacent cell membrane. This hypothesis is supported by the discovery that the transmitter is released in packets of a few thousand molecules.

The study of synaptic transmission was revolutionized in 1951 by the introduction of delicate techniques for recording electrically from the interior of single nerve cells. This is done by inserting into the nerve cell an extremely fine glass pipette with a diameter of .5 micron - about a fifty-thousandth of an inch. The pipette is filled with an electrically conducting salt solution such as concentrated potassium chloride. If the pipette is carefully inserted and held rigidly in place, the cell membrane appears to seal quickly around the glass, thus preventing the flow of a short-circuiting current through the puncture in the cell membrane. Impaled in this fashion, nerve cells can function normally for hours. Although there is no way of observing the cells during the insertion of the pipette, the insertion can be guided by using as clues the electric signals that the pipette picks up when close to active nerve cells.

When the nerve cell responds to the chemical synaptic transmitter, the response depends in part on characteristic features of ionic composition that are also concerned with the transmission of impulses in the cell and along its axon. When the nerve cell is at rest, its physiological makeup resembles that of most other cells in that the water solution inside the cell is quite different in composition from the solution in which the cell is bathed. The nerve cell is able to exploit this difference between external and internal composition and use it in quite different ways for generating an electrical impulse and for synaptic transmission.

The composition of the external solution is well established because the solution is essentially the same as blood from which cells and proteins have been removed. The composition of the internal solution is known only approximately. Indirect evidence indicates that the concentrations of sodium and chloride ions outside the cell are respectively some 10 and 14 times higher than the concentrations inside the cell. In contrast, the concentration of potassium ions inside the cell is about 30 times higher than the concentration outside.

How can one account for this remarkable state of affairs? Part of the explanation is that the inside of the cell is negatively charged with respect to the outside of the cell by about 70 millivolts. Since like charges repel each other, this internal negative charge tends to drive chloride ions (Cl-) outward through the cell membrane and, at the same time, to impede their inward movement. In fact, a potential difference of 70 millivolts is just sufficient to maintain the observed disparity in the concentration of chloride ions inside the cell and outside it; chloride ions diffuse inward and outward at equal rates. A drop of 70 millivolts across the membrane therefore defines the ‘equilibrium potential’ for chloride ions.

To obtain a concentration of potassium ions (K+) that is 30 times higher inside the cell than outside would require that the interior of the cell membrane be about 90 millivolts negative with respect to the exterior. Since the actual interior is only 70 millivolts negative, it falls short of the equilibrium potential for potassium ions by 20 millivolts. Evidently the thirtyfold concentration can be achieved and maintained only if there is some auxiliary mechanism for ‘pumping’ potassium ions into the cell at a rate equal to their spontaneous net outward diffusion. The pumping mechanism has the still more difficult task of pumping sodium ions (Na+) out of the cell against a potential gradient of 130 millivolts. This figure is obtained by adding the 70 millivolts of internal negative charge to the equilibrium potential for sodium ions, which is 60 millivolts of internal positive charge. If it were not for this postulated pump, the concentration of sodium ions inside and outside the cell would be almost the reverse of what is observed.
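As a rough consistency check (not part of the original account), the Nernst equation reproduces these equilibrium potentials from the concentration ratios quoted above, assuming a temperature near body temperature so that $RT/F \cdot \ln 10 \approx 61.5$ millivolts:

$$
E_{\text{ion}} = \frac{RT}{zF}\,\ln\frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}}
\;\approx\; \frac{61.5\ \text{mV}}{z}\,\log_{10}\frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}}
$$

$$
E_{\text{Cl}^-} \approx -61.5\,\log_{10} 14 \approx -70\ \text{mV},\qquad
E_{\text{K}^+} \approx 61.5\,\log_{10}\tfrac{1}{30} \approx -91\ \text{mV},\qquad
E_{\text{Na}^+} \approx 61.5\,\log_{10} 10 \approx +62\ \text{mV}
$$

These agree with the roughly 70-, 90-, and 60-millivolt figures cited in the text, and the 130-millivolt gradient against which sodium must be pumped is simply the 70 millivolts of internal negative charge added to the roughly 60-millivolt sodium equilibrium potential.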

In their classic studies of nerve-impulse transmission in the giant axon of the squid, A. L. Hodgkin, A. F. Huxley and Bernhard Katz of Britain demonstrated that the propagation of the impulse coincides with abrupt changes in the permeability of the axon membrane. When a nerve impulse has been triggered in some way, what can be described as a gate opens and lets sodium ions pour into the axon during the advance of the impulse, making the interior of the axon locally positive. The process is self-reinforcing in that the flow of some sodium ions through the membrane opens the gate further and makes it easier for others to follow. The sharp reversal of the internal polarity of the membrane constitutes the nerve impulse, which moves like a wave until it has traveled the length of the axon. In the wake of the impulse the sodium gate closes and a potassium gate opens, thereby restoring the normal polarity of the membrane within a millisecond or less.

With this understanding of the nerve impulse in hand, one is ready to follow the electrical events at the excitatory synapse. One might guess that if the nerve impulse results from an abrupt inflow of sodium ions and a rapid change in the electrical polarity of the axon's interior, something similar must happen at the body and dendrites of the nerve cell in order to generate the impulse in the first place. Indeed, the function of the excitatory synaptic terminals on the cell body and its dendrites is to depolarize the interior of the cell membrane essentially by permitting an inflow of sodium ions. When the depolarization reaches a threshold value, a nerve impulse is triggered.

As a simple instance of this phenomenon we have recorded the depolarization that occurs in a single motoneuron activated directly by the large nerve fibers that enter the spinal cord from special stretch-receptors known as annulospiral endings. These receptors in turn are located in the same muscle that is activated by the motoneuron under study. Thus the whole system forms a typical reflex arc, such as the arc responsible for the patellar reflex, or ‘knee jerk.’

To conduct the experiment we anesthetize an animal (most often a cat) and free by dissection a muscle nerve that contains these large nerve fibers. By applying a mild electric shock to the exposed nerve one can produce a single impulse in each of the fibers; since the impulses travel to the spinal cord almost synchronously they are referred to collectively as a volley. The number of impulses contained in the volley can be reduced by reducing the stimulation applied to the nerve. The volley strength is measured at a point just outside the spinal cord and is displayed on an oscilloscope. About half a millisecond after detection of a volley there is a wavelike change in the voltage inside the motoneuron that has received the volley. The change is detected by a microelectrode inserted in the motoneuron and is displayed on another oscilloscope.

What we find is that the negative voltage inside the cell becomes progressively less negative as more of the fibers impinging on the cell are stimulated to fire. This observed depolarization is in fact a simple summation of the depolarizations produced by each individual synapse. When the depolarization of the interior of the motoneuron reaches a critical point, a ‘spike’ suddenly appears on the second oscilloscope, showing that a nerve impulse has been generated. During the spike the voltage inside the cell changes from about 70 millivolts negative to as much as 30 millivolts positive. The spike regularly appears when the depolarization, or reduction of membrane potential, reaches a critical level, which is usually between 10 and 18 millivolts. The only effect of a further strengthening of the synaptic stimulus is to shorten the time needed for the motoneuron to reach the firing threshold. The depolarizing potentials produced in the cell membrane by excitatory synapses are called excitatory postsynaptic potentials, or EPSP's.

Through one barrel of a double-barreled microelectrode one can apply a background current to change the resting potential of the interior of the cell membrane, either increasing it or decreasing it. When the potential is made more negative, the EPSP rises more steeply to an earlier peak. When the potential is made less negative, the EPSP rises more slowly to a lower peak. Finally, when the charge inside the cell is reversed so as to be positive with respect to the exterior, the excitatory synapses give rise to an EPSP that is actually the reverse of the normal one.

These observations support the hypothesis that excitatory synapses produce what amounts virtually to a short circuit in the synaptic membrane potential. When this occurs, the membrane no longer acts as a barrier to the passage of ions but lets them flow through in response to the differing electric potential on the two sides of the membrane. In other words, the ions are momentarily allowed to travel freely down their electrochemical gradients, which means that sodium ions flow into the cell and, to a lesser degree, potassium ions flow out. It is this net flow of positive ions that creates the excitatory postsynaptic potential. The flow of negative ions, such as the chloride ion, is apparently not involved. By artificially altering the potential inside the cell one can establish that there is no flow of ions, and therefore no EPSP, when the voltage drop across the membrane is zero.

How is the synaptic membrane converted from a strong ionic barrier into an ion-permeable state? It is currently accepted that the agency of conversion is the chemical transmitter substance contained in the vesicles inside the synaptic knob. When a nerve impulse reaches the synaptic knob, some of the vesicles are caused to eject the transmitter substance into the synaptic cleft. The molecules of the substance would take only a few microseconds to diffuse across the cleft and become attached to specific receptor sites on the surface membrane of the adjacent nerve cell.

Presumably the receptor sites are associated with fine channels in the membrane that are opened in some way by the attachment of the transmitter-substance molecules to the receptor sites. With the channels thus opened, sodium and potassium ions flow through the membrane thousands of times more readily than they normally do, thereby producing the intense ionic flux that depolarizes the cell membrane and produces the EPSP. In many synapses the current flows strongly for only about a millisecond before the transmitter substance is eliminated from the synaptic cleft, either by diffusion into the surrounding regions or as a result of being destroyed by enzymes. The latter process is known to occur when the transmitter substance is acetylcholine, which is destroyed by the enzyme acetylcholinesterase.

The substantiation of this general picture of synaptic transmission requires the solution of many fundamental problems. Since we do not know the specific transmitter substance for the vast majority of synapses in the nervous system we do not know if there are many different substances or only a few. The only one identified with reasonable certainty in the mammalian central nervous system is acetylcholine. We know practically nothing about the mechanism by which a presynaptic nerve impulse causes the transmitter substance to be injected into the synaptic cleft. Nor do we know how the synaptic vesicles not immediately adjacent to the synaptic cleft are moved up to the firing line to replace the emptied vesicles. It is conjectured that the vesicles contain the enzyme systems needed to recharge themselves. The entire process must be swift and efficient: the total amount of transmitter substance in synaptic terminals is enough for only a few minutes of synaptic activity at normal operating rates. There are also knotty problems to be solved on the other side of the synaptic cleft. What, for example, is the nature of the receptor sites? How are the ionic channels in the membrane opened up?

Let us turn now to the second type of synapse that has been identified in the nervous system. These are the synapses that can inhibit the firing of a nerve cell even though it may be receiving a volley of excitatory impulses. When inhibitory synapses are examined in the electron microscope, they look very much like excitatory synapses. (There are probably some subtle differences, but they need not concern us here.) Microelectrode recordings of the activity of single motoneurons and other nerve cells have now shown that the inhibitory postsynaptic potential (IPSP) is virtually a mirror image of the EPSP. Moreover, individual inhibitory synapses, like excitatory synapses, have a cumulative effect. The chief difference is simply that the IPSP makes the cell's internal voltage more negative than it is normally, which is in a direction opposite to that needed for generating a spike discharge.

By driving the internal voltage of a nerve cell in the negative direction inhibitory synapses oppose the action of excitatory synapses, which of course drive it in the positive direction. Hence if the potential inside a resting cell is 70 millivolts negative, a strong volley of inhibitory impulses can drive the potential to 75 or 80 millivolts negative. One can easily see that if the potential is made more negative in this way the excitatory synapses find it more difficult to raise the internal voltage to the threshold point for the generation of a spike. Thus the nerve cell responds to the algebraic sum of the internal voltage changes produced by excitatory and inhibitory synapses.

If, as in the experiment described earlier, the internal membrane potential is altered by the flow of an electric current through one barrel of a double-barreled microelectrode, one can observe the effect of such changes on the inhibitory postsynaptic potential. When the internal potential is made less negative, the inhibitory postsynaptic potential is deepened. Conversely, when the potential is made more negative, the IPSP diminishes; it finally reverses when the internal potential is driven below minus 80 millivolts.

One can therefore conclude that inhibitory synapses share with excitatory synapses the ability to change the ionic permeability of the synaptic membrane. The difference is that inhibitory synapses enable ions to flow freely down an electrochemical gradient that has an equilibrium point at minus 80 millivolts rather than at zero, as is the case for excitatory synapses. This effect could be achieved by the outward flow of positively charged ions such as potassium or the inward flow of negatively charged ions such as chloride, or by a combination of negative and positive ionic flows such that the interior reaches equilibrium at minus 80 millivolts.
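The idea that each kind of synapse drives the membrane toward its own equilibrium (or reversal) potential can be sketched as a toy conductance model. The conductance values and the simple weighted-average formula below are illustrative assumptions; the point is only that excitatory channels (equilibrium near zero) and inhibitory channels (equilibrium near -80 millivolts) pull the membrane potential in opposite directions from rest.

```python
# Sketch of the reversal-potential idea: each open synaptic conductance pulls
# the membrane toward its own equilibrium potential, weighted by how much
# conductance is open. All values are arbitrary illustrations.

E_LEAK, E_EXC, E_INH = -70.0, 0.0, -80.0   # equilibrium potentials in millivolts
G_LEAK = 1.0                               # resting (leak) conductance, arbitrary units

def steady_potential(g_exc, g_inh):
    """Steady membrane potential as the conductance-weighted average of the
    leak, excitatory, and inhibitory equilibrium potentials."""
    total = G_LEAK + g_exc + g_inh
    return (G_LEAK * E_LEAK + g_exc * E_EXC + g_inh * E_INH) / total

print(steady_potential(0.0, 0.0))   # -70 mV: resting potential
print(steady_potential(0.5, 0.0))   # depolarized toward 0 mV (EPSP direction)
print(steady_potential(0.0, 1.0))   # pushed toward -80 mV (IPSP direction)
print(steady_potential(0.5, 1.0))   # excitation and inhibition sum algebraically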

In an effort to discover the permeability changes associated with the inhibitory potential, we have altered the concentration of ions normally found in motoneurons and have introduced a variety of other ions that are not normally present. This can be done by impaling nerve cells with micropipettes that are filled with a salt solution containing the ion to be injected. The actual injection is achieved by passing a brief current through the micropipette.

If the concentration of chloride ions within the cell is in this way increased as much as three times, the inhibitory postsynaptic potential reverses and acts as a depolarizing current; that is, it resembles an excitatory potential. On the other hand, if the cell is heavily injected with sulfate ions, which are also negatively charged, there is no such reversal. This simple test shows that under the influence of the inhibitory transmitter substance, which is still unidentified, the subsynaptic membrane becomes permeable momentarily to chloride ions but not to sulfate ions. During the generation of the IPSP the outflow of chloride ions is so rapid that it more than outweighs the flow of other ions that generate the normal inhibitory potential.

We have examined the effect of injecting motoneurons with more than 30 kinds of negatively charged ion. With one exception the hydrated ions (ions bound to water) to which the cell membrane is permeable under the influence of the inhibitory transmitter substance are smaller than the hydrated ions to which the membrane is impermeable. The exception is the formate ion (HCO2-), which may have an ellipsoidal shape and so be able to pass through membrane pores that block smaller spherical ions.

Apart from the formate ion all the ions to which the membrane is permeable have a diameter not greater than 1.14 times the diameter of the potassium ion; that is, they are less than 2.9 angstrom units in diameter. Comparable investigations in other laboratories have found the same permeability effects, including the exceptional behavior of the formate ion, in fishes, toads and snails. It may well be that the ionic mechanism responsible for synaptic inhibition is the same throughout the animal kingdom.

The significance of these and other studies is that they strongly indicate that the inhibitory transmitter substance opens the membrane to the flow of potassium ions but not to sodium ions. It is known that the sodium ion is somewhat larger than any of the negatively charged ions, including the formate ion, that are able to pass through the membrane during synaptic inhibition. It is not possible, however, to test the effectiveness of potassium ions by injecting excess amounts into the cell because the excess is immediately diluted by an osmotic flow of water into the cell.

The concentration of potassium ions inside the nerve cell is about 30 times greater than the concentration outside, and to maintain this large difference in concentration without the help of a metabolic pump the inside of the membrane would have to be charged 90 millivolts negative with respect to the exterior. This implies that if the membrane were suddenly made porous to potassium ions, the resulting outflow of ions would make the inside potential of the membrane even more negative than it is in the resting state, and that is just what happens during synaptic inhibition. The membrane must not simultaneously become porous to sodium ions, because they exist in much higher concentration outside the cell than inside and their rapid inflow would more than compensate for the potassium outflow. In fact, the fundamental difference between synaptic excitation and synaptic inhibition is that the membrane freely passes sodium ions in response to the former and largely excludes the passage of sodium ions in response to the latter.

This fine discrimination between ions that are not very different in size must be explained by any hypothesis of synaptic action. It is most unlikely that the channels through the membrane are created afresh and accurately maintained for a thousandth of a second every time a burst of transmitter substance is released into the synaptic cleft. It is more likely that channels of at least two different sizes are built directly into the membrane structure. In some way the excitatory transmitter substance would selectively unplug the larger channels and permit the free inflow of sodium ions. Potassium ions would simultaneously flow out and thus would tend to counteract the large potential change that would be produced by the massive sodium inflow. The inhibitory transmitter substance would selectively unplug the smaller channels that are large enough to pass potassium and chloride ions but not sodium ions.

To explain certain types of inhibition other features must be added to this hypothesis of synaptic transmission. In the simple hypothesis chloride and potassium ions can flow freely through pores of all inhibitory synapses. It has been shown, however, that the inhibition of the contraction of heart muscle by the vagus nerve is due almost exclusively to potassium-ion flow. On the other hand, in the muscles of crustaceans and in nerve cells in the snail's brain synaptic inhibition is due largely to the flow of chloride ions. This selective permeability could be explained if there were fixed charges along the walls of the channels. If such charges were negative, they would repel negatively charged ions and prevent their passage; if they were positive, they would similarly prevent the passage of positively charged ions. One can now suggest that the channels opened by the excitatory transmitter are negatively charged and so do not permit the passage of the negatively charged chloride ion, even though it is small enough to move through the channel freely.

One might wonder if a given nerve cell can have excitatory synaptic action at some of its axon terminals and inhibitory action at others. The answer is no. Two different kinds of nerve cell are needed, one for each type of transmission and synaptic transmitter substance. This can readily be demonstrated by the effect of strychnine and tetanus toxin in the spinal cord; they specifically prevent inhibitory synaptic action and leave excitatory action unaltered. As a result the synaptic excitation of nerve cells is uncontrolled and convulsions result. The special types of cell responsible for inhibitory synaptic action are now being recognized in many parts of the central nervous system.

This account of communication between nerve cells is necessarily oversimplified, yet it shows that some significant advances are being made at the level of individual components of the nervous system. By selecting the most favorable situations we have been able to throw light on some details of nerve-cell behavior. We can be encouraged by these limited successes. But the task of understanding in a comprehensive way how the human brain operates staggers the imagination.

A neural network, in computer science, is a highly interconnected network of information-processing elements that mimics the connectivity and functioning of the human brain. Neural networks address problems that are often difficult for traditional computers to solve, such as speech and pattern recognition. They also provide some insight into the way the human brain works. One of the most significant strengths of neural networks is their ability to learn from a limited set of examples.

Neural networks were initially studied by computer and cognitive scientists in the late 1950s and early 1960s in an attempt to model sensory perception in biological organisms. Neural networks have been applied to many problems since they were first introduced, including pattern recognition, handwritten character recognition, speech recognition, financial and economic modeling, and next-generation computing models.

Neural networks fall into two categories: artificial neural networks and biological neural networks. Artificial neural networks are modeled on the structure and functioning of biological neural networks. The most familiar biological neural network is the human brain. The human brain is composed of approximately 100 billion nerve cells called neurons that are massively interconnected. Typical neurons in the human brain are connected to on the order of 10,000 other neurons, with some types of neurons having more than 200,000 connections. The extensive number of neurons and their high degree of interconnectedness are part of the reason that the brains of living creatures are capable of making a vast number of calculations in a short amount of time.

Biological neurons have a fairly simple large-scale structure, although their operation and small-scale structure are immensely complex. Neurons have three main parts: a central cell body, called the soma, and two different types of branched, treelike structures that extend from the soma, called dendrites and axons. Information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called synapses. The information flows from the dendrites to the soma, where it is processed. The output signal, a train of impulses, is then sent down the axon to the synapses of other neurons.

Artificial neurons, like their biological counterparts, have simple structures and are designed to mimic the function of biological neurons. The main body of an artificial neuron is called a node or unit. Artificial neurons may be physically connected to one another by wires that mimic the connections between biological neurons, if, for instance, the neurons are simple integrated circuits. However, neural networks are usually simulated on traditional computers, in which case the connections between processing nodes are not physical but are instead virtual.

Artificial neurons may be either discrete or continuous. Discrete neurons send an output signal of 1 if the sum of received signals is above a certain critical value called a threshold value; otherwise they send an output signal of 0. Continuous neurons are not restricted to sending output values of only 1s and 0s; instead they send an output value between 0 and 1 depending on the total amount of input that they receive - the stronger the received signal, the stronger the signal sent out from the node, and vice versa. Continuous neurons are the type most commonly used in actual artificial neural networks.
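
As a minimal illustration of the difference between the two kinds of unit, the sketch below (hypothetical code, in Python, a language the text does not specify) applies a hard threshold and a continuous squashing function to the same summed input; the function names and the threshold value are illustrative choices, and the logistic (sigmoid) function stands in for the unspecified continuous rule.

import math

def discrete_neuron(total_input, threshold=0.0):
    # Discrete unit: output 1 if the summed input exceeds the threshold, else 0.
    return 1 if total_input > threshold else 0

def continuous_neuron(total_input):
    # Continuous unit: output a value between 0 and 1 that grows with the input.
    return 1.0 / (1.0 + math.exp(-total_input))  # logistic (sigmoid) function

for s in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(s, discrete_neuron(s), round(continuous_neuron(s), 3))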

The architecture of a neural network is the specific arrangement and connections of the neurons that make up the network. One of the most common neural network architectures has three layers. The first layer is called the input layer and is the only layer exposed to external signals. The input layer transmits signals to the neurons in the next layer, which is called a hidden layer. The hidden layer extracts relevant features or patterns from the received signals. Those features or patterns that are considered important are then directed to the output layer, the final layer of the network. Sophisticated neural networks may have several hidden layers, feedback loops, and time-delay elements, which are designed to make the network as efficient as possible in discriminating relevant features or patterns from the input layer.
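
A compact sketch of such a three-layer arrangement, again assuming Python with NumPy and arbitrary layer sizes (none of which are given in the text), might look like the following; the weights here are random placeholders rather than trained values.

import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 4, 3, 2          # arbitrary layer sizes for illustration
W1 = rng.normal(size=(n_hidden, n_input))      # input -> hidden connection weights
W2 = rng.normal(size=(n_output, n_hidden))     # hidden -> output connection weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    hidden = sigmoid(W1 @ x)        # hidden layer extracts features from the input signals
    output = sigmoid(W2 @ hidden)   # output layer reports the network's response
    return output

print(forward(np.array([0.0, 1.0, 0.5, -0.5])))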

Neural networks differ greatly from traditional computers (for example, personal computers, workstations, and mainframes) in both form and function. While neural networks use a large number of simple processors to do their calculations, traditional computers generally use one or a few extremely complex processing units. Neural networks also do not have a centrally located memory, nor are they programmed with a sequence of instructions, as are all traditional computers.

The information processing of a neural network is distributed throughout the network in the form of its processors and connections, while the memory is distributed in the form of the weights given to the various connections. The distribution of both processing capability and memory means that damage to part of the network does not necessarily result in processing dysfunction or information loss. This ability of neural networks to withstand limited damage and continue to function well is one of their greatest strengths.

Neural networks also differ greatly from traditional computers in the way they are programmed. Rather than using programs that are written as a series of instructions, as do all traditional computers, neural networks are ‘taught’ with a limited set of training examples. The network is then able to ‘learn’ from the initial examples to respond to information sets that it has never encountered before. The resulting values of the connection weights can be thought of as a ‘program’.

Neural networks are usually simulated on traditional computers. The advantage of this approach is that computers can easily be reprogrammed to change the architecture or learning rule of the simulated neural network. Since the computation in a neural network is massively parallel, the processing speed of a simulated neural network can be increased by using massively parallel computers - computers that link together hundreds or thousands of CPUs in parallel to achieve very high processing speeds.

In all biological neural networks the connections between particular dendrites and axons may be reinforced or discouraged. For example, connections may become reinforced as more signals are sent down them, and may be discouraged when signals are infrequently sent down them. The reinforcement of certain neural pathways, or dendrite-axon connections, results in a higher likelihood that a signal will be transmitted along that path, further reinforcing the pathway. Paths between neurons that are rarely used slowly atrophy, or decay, making it less likely that signals will be transmitted along them.

The role of connection strengths between neurons in the brain is crucial; scientists believe they determine, to a great extent, the way in which the brain processes the information it takes in through the senses. Neuroscientists studying the structure and function of the brain believe that various patterns of neurons firing can be associated with specific memories. In this theory, the strength of the connections between the relevant neurons determines the strength of the memory. Important information that needs to be remembered may cause the brain to constantly reinforce the pathways between the neurons that form the memory, while relatively unimportant information will not receive the same degree of reinforcement.

To mimic the way in which biological neurons reinforce certain axon-dendrite pathways, the connections between artificial neurons in a neural network are given adjustable connection weights, or measures of importance. When signals are received and processed by a node, they are multiplied by a weight, added up, and then transformed by a nonlinear function. The effect of the nonlinear function is to cause the sum of the input signals to approach some value, usually +1 or 0. If the signals entering the node add up to a positive number, the node sends an output signal that approaches +1 out along all of its connections, while if the signals add up to a negative value, the node sends a signal that approaches 0. This is similar to a simplified model of how a biological neuron functions - the larger the input signal, the larger the output signal.

Computer scientists teach neural networks by presenting them with desired input-output training sets. The input-output training sets are related patterns of data. For instance, a sample training set might consist of ten different photographs for each of ten different faces. The photographs would be digitized and entered into the input layer of the network, and the desired output would be for the network to signal a different neuron in the output layer for each face. Beginning with equal or random connection weights between the neurons, each photograph is presented to the input layer, an output signal is computed, and that output is compared to the target output. Small adjustments are then made to the connection weights to reduce the difference between the actual output and the target output. Because the network will usually choose the incorrect output neuron the first few times an input is entered, the input-output set is presented again and further adjustments are made to the connection weights. After repeating the weight-adjustment process many times for all input-output patterns in the training set, the network learns to respond in the desired manner.
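
One plausible way to encode such a face-recognition training set, sketched here under the assumption of ten output neurons and one-hot target vectors (details the text does not specify), pairs each digitized photograph with the output pattern the network should produce; the photographs below are random stand-ins.

import numpy as np

n_faces, photos_per_face, n_pixels = 10, 10, 64 * 64   # assumed sizes, for illustration
rng = np.random.default_rng(1)

training_set = []
for face in range(n_faces):
    target = np.zeros(n_faces)
    target[face] = 1.0                      # signal exactly one output neuron per face
    for _ in range(photos_per_face):
        photo = rng.random(n_pixels)        # stand-in for a digitized photograph
        training_set.append((photo, target))

print(len(training_set), "input-output pairs")   # 100 pairs: 10 photos x 10 faces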

A neural network is said to have learned when it can correctly perform the tasks for which it has been trained. Neural networks are able to extract the important features and patterns of a class of training examples and generalize from these to correctly process new input data that they have not encountered before. For a neural network trained to recognize a series of photographs, generalization would be demonstrated if a new photograph presented to the network resulted in the correct output neuron being signaled.

A number of different neural network learning rules, or algorithms, exist and use various techniques to process information. Common arrangements use some sort of system to adjust the connection weights between the neurons automatically. The most widely used scheme for adjusting the connection weights is called error back-propagation, developed independently by American computer scientists Paul Werbos (in 1974), David Parker (in 1984/1985), and David Rumelhart, Ronald Williams, and others (in 1985). The back-propagation learning scheme compares a neural network’s calculated output to a target output and calculates an error adjustment for each of the nodes in the network. The neural network adjusts the connection weights according to the error values assigned to each node, beginning with the connections between the last hidden layer and the output layer. After the network has made adjustments to this set of connections, it calculates error values for the preceding layer and makes adjustments there. The back-propagation algorithm continues in this way, adjusting all of the connection weights between the hidden layers until it reaches the input layer. At this point it is ready to calculate another output.
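
A compact sketch of this scheme, assuming Python with NumPy, a single hidden layer, the logistic activation, and a squared-error measure (choices not dictated by the text), is given below; it trains a small network on the XOR pattern purely as a stand-in problem.

import numpy as np

rng = np.random.default_rng(0)

# XOR used here purely as a tiny stand-in training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)               # target outputs

n_hidden = 4
W1 = rng.normal(scale=1.0, size=(2, n_hidden))   # input -> hidden connection weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=1.0, size=(n_hidden, 1))   # hidden -> output connection weights
b2 = np.zeros(1)
lr = 0.5                                         # learning rate (size of each adjustment)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(10000):
    # forward pass: compute the network's output for every training pattern
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # backward pass: error term for the output layer, then for the hidden layer
    delta_out = (y - T) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)

    # adjust connection weights, starting with the hidden-to-output connections
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(y, 2))   # after training the outputs should approach 0, 1, 1, 0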

Neural networks have been applied to many tasks that are easy for humans to accomplish, but difficult for traditional computers. Because neural networks mimic the brain, they have shown much promise in so-called sensory processing tasks such as speech recognition, pattern recognition, and the transcription of hand-written text. In some settings, neural networks can perform as well as humans. Neural-network-based backgammon software, for example, rivals the best human players.

While traditional computers still outperform neural networks in most situations, neural networks are superior in recognizing patterns in extremely large data sets. Furthermore, because neural networks have the ability to learn from a set of examples and generalize this knowledge to new situations, they are excellent for work requiring adaptive control systems. For this reason, the United States National Aeronautics and Space Administration (NASA) has extensively studied neural networks to determine whether they might serve to control future robots sent to explore planetary bodies in our solar system. In this application, robots could be sent to other planets, such as Mars, to carry out significant and detailed exploration autonomously.

An important advantage that neural networks have over traditional computer systems is that they can sustain damage and still function properly. This design characteristic of neural networks makes them very attractive candidates for future aircraft control systems, especially in high-performance military jets. Another potential application of neural networks, for both civilian and military purposes, is in pattern recognition software for radar, sonar, and other remote-sensing devices.

Learning is the acquisition of knowledge or the development of the ability to perform new behaviors. It is common to think of learning as something that takes place in school, but much of human learning occurs outside the classroom, and people continue to learn throughout their lives.

Even before they enter school, young children learn to walk, to talk, and to use their hands to manipulate toys, food, and other objects. They use all of their senses to learn about the sights, sounds, tastes, and smells in their environments. They learn how to interact with their parents, siblings, friends, and other people important to their world. When they enter school, children learn basic academic subjects such as reading, writing, and mathematics. They also continue to learn a great deal outside the classroom. They learn which behaviors are likely to be rewarded and which are likely to be punished. They learn social skills for interacting with other children. After they finish school, people must learn to adapt to the many major changes that affect their lives, such as getting married, raising children, and finding and keeping a job.

Because learning continues throughout our lives and affects almost everything we do, the study of learning is important in many different fields. Teachers need to understand the best ways to educate children. Psychologists, social workers, criminologists, and other human-service workers need to understand how certain experiences change people’s behaviors. Employers, politicians, and advertisers make use of the principles of learning to influence the behavior of workers, voters, and consumers.

Learning is closely related to memory, which is the storage of information in the brain. Psychologists who study memory are interested in how the brain stores knowledge, where this storage takes place, and how the brain later retrieves knowledge when we need it. In contrast, psychologists who study learning are more interested in behavior and how behavior changes as a result of a person’s experiences.

There are many forms of learning, ranging from simple to complex. Simple forms of learning involve a single stimulus. A stimulus is anything perceptible to the senses, such as a sight, sound, smell, touch, or taste. In a form of learning known as classical conditioning, people learn to associate two stimuli that occur in sequence, such as lightning followed by thunder. In operant conditioning, people learn by forming an association between a behavior and its consequences (reward or punishment). People and animals can also learn by observation - that is, by watching others perform behaviors. More complex forms of learning include learning languages, concepts, and motor skills.

This article discusses general principles of learning.

Habituation, one of the simplest types of learning, is the tendency to become familiar with a stimulus after repeated exposure to it. A common example of habituation occurs in the orienting response, in which a person’s attention is captured by a loud or sudden stimulus. For example, a person who moves to a house on a busy street may initially be distracted (an orienting response) every time a loud vehicle drives by. After living in the house for some time, however, the person will no longer be distracted by the street noise - the person becomes habituated to it and the orienting response disappears.

Despite its simplicity, habituation is a very useful type of learning. Because our environments are full of sights and sounds, we would waste a tremendous amount of time and energy if we paid attention to every stimulus each time we encountered it. Habituation allows us to ignore repetitive, unimportant stimuli. Habituation occurs in nearly all organisms, from human beings to animals with very simple nervous systems. Even some one-celled organisms will habituate to a light, sound, or chemical stimulus that is presented repeatedly.

Sensitization, another simple form of learning, is the increase that occurs in an organism’s responsiveness to stimuli following an especially intense or irritating stimulus. For example, a sea snail that receives a strong electric shock will afterward withdraw its gill more strongly than usual in response to a simple touch. Depending on the intensity and duration of the original stimulus, the period of increased responsiveness can last from several seconds to several days.

Another form of learning is classical conditioning, in which a reflexive or automatic response transfers from one stimulus to another. For instance, a person who has had painful experiences at the dentist’s office may become fearful at just the sight of the dentist’s office building. Fear, a natural response to a painful stimulus, has transferred to a different stimulus, the sight of a building. Most psychologists believe that classical conditioning occurs when a person forms a mental association between two stimuli, so that encountering one stimulus makes the person think of the other. People tend to form these mental associations between events or stimuli that occur closely together in space or time.

Classical conditioning was discovered by accident in the early 1900s by Russian physiologist Ivan Pavlov. Pavlov was studying how saliva aids the digestive process. He would give a dog some food and measure the amount of saliva the dog produced while it ate the meal. After the dog had gone through this procedure a few times, however, it would begin to salivate before receiving any food. Pavlov reasoned that some new stimulus, such as the experimenter in his white coat, had become associated with the food and produced the response of salivation in the dog. Pavlov spent the rest of his life studying this basic type of associative learning, which is now called classical conditioning or Pavlovian conditioning.

The conditioning process usually follows the same general procedure. Suppose a psychologist wants to condition a dog to salivate at the sound of a bell. Before conditioning, an unconditioned stimulus (food in the mouth) automatically produces an unconditioned response (salivation) in the dog. The term unconditioned indicates that there is an unlearned, or inborn, connection between the stimulus and the response. During conditioning, the experimenter rings a bell and then gives food to the dog. The bell is called the neutral stimulus because it does not initially produce any salivation response in the dog. As the experimenter repeats the bell-food association over and over again, however, the bell alone eventually causes the dog to salivate. The dog has learned to associate the bell with the food. The bell has become a conditioned stimulus, and the dog’s salivation to the sound of the bell is called a conditioned response.

Following his initial discovery, Pavlov spent more than three decades studying the processes underlying classical conditioning. He and his associates identified four main processes: acquisition, extinction, generalization, and discrimination.

The acquisition phase is the initial learning of the conditioned response - for example, the dog learning to salivate at the sound of the bell. Several factors can affect the speed of conditioning during the acquisition phase. The most important factors are the order and timing of the stimuli. Conditioning occurs most quickly when the conditioned stimulus (the bell) precedes the unconditioned stimulus (the food) by about half a second. Conditioning takes longer and the response is weaker when there is a long delay between the presentation of the conditioned stimulus and the unconditioned stimulus. If the conditioned stimulus follows the unconditioned stimulus - for example, if the dog receives the food before the bell is rung - conditioning seldom occurs.

Once learned, a conditioned response is not necessarily permanent. The term extinction is used to describe the elimination of the conditioned response by repeatedly presenting the conditioned stimulus without the unconditioned stimulus. If a dog has learned to salivate at the sound of a bell, an experimenter can gradually extinguish the dog’s response by repeatedly ringing the bell without presenting food afterward. Extinction does not mean, however, that the dog has simply unlearned or forgotten the association between the bell and the food. After extinction, if the experimenter lets a few hours pass and then rings the bell again, the dog will usually salivate at the sound of the bell once again. The reappearance of an extinguished response after some time has passed is called spontaneous recovery.

After an animal has learned a conditioned response to one stimulus, it may also respond to similar stimuli without further training. If a child is bitten by a large black dog, the child may fear not only that dog, but other large dogs. This phenomenon is called generalization. Less similar stimuli will usually produce less generalization. For example, the child may show little fear of smaller dogs.

The opposite of generalization is discrimination, in which an individual learns to produce a conditioned response to one stimulus but not to another stimulus that is similar. For example, a child may show a fear response to freely roaming dogs, but may show no fear when a dog is on a leash or confined to a pen.

After studying classical conditioning in dogs and other animals, psychologists became interested in how this type of learning might apply to human behavior. In an infamous 1920 experiment, American psychologist John B. Watson and his research assistant Rosalie Rayner conditioned a baby named Albert to fear a small white rat by pairing the sight of the rat with a loud noise. Although their experiment was ethically questionable, it showed for the first time that humans can learn to fear seemingly unimportant stimuli when the stimuli are associated with unpleasant experiences. The experiment also suggested that classical conditioning accounts for some cases of phobias, which are irrational or excessive fears of specific objects or situations. Psychologists now know that classical conditioning explains many emotional responses - such as happiness, excitement, anger, and anxiety - that people have to specific stimuli. For example, a child who experiences excitement on a roller coaster may learn to feel excited just at the sight of a roller coaster. For an adult who finds a letter from a close friend in the mailbox, the mere sight of the return address on the envelope may elicit feelings of joy and warmth.

Psychologists use classical conditioning procedures to treat phobias and other unwanted behaviors, such as alcoholism and addictions. To treat phobias of specific objects, the therapist gradually and repeatedly presents the feared object to the patient while the patient relaxes. Through extinction, the patient loses his or her fear of the object. In one treatment for alcoholism, patients drink an alcoholic beverage and then ingest a drug that produces nausea. Eventually they feel nauseous at the sight or smell of alcohol and stop drinking it. The effectiveness of these therapies varies depending on the individual and on the problem behavior.

Modern theories of classical conditioning depart from Pavlov’s theory in several ways. Whereas Pavlov’s theory stated that the conditioned and unconditioned stimuli should elicit the same type of response, modern theories acknowledge that the conditioned and unconditioned responses frequently differ. In some cases, especially when the unconditioned stimulus is a drug, the conditioned stimulus elicits the opposite response. Modern research has also shown that conditioning does not always require a close pairing of the two stimuli. In taste-aversion learning, people can develop disgust for a specific food if they become sick after eating it, even if the illness begins several hours after eating.

Psychologists today also recognize that classical conditioning does not automatically occur whenever two stimuli are repeatedly paired. For instance, suppose that an experimenter conditions a dog to salivate to a light by repeatedly pairing the light with food. Next, the experimenter repeatedly pairs both the light and a tone with food. When the experimenter presents the tone by itself, the dog will show little or no conditioned response (salivation), because the tone provides no new information. The light already allows the dog to predict that food will be coming. This phenomenon, discovered by American psychologist Leon Kamin in 1968, is called blocking because prior conditioning blocks new conditioning.

One of the most widespread and important types of learning is operant conditioning, which involves increasing a behavior by following it with a reward, or decreasing a behavior by following it with punishment. For example, if a mother starts giving a boy his favorite snack every day that he cleans up his room, before long the boy may spend some time each day cleaning his room in anticipation of the snack. In this example, the boy’s room-cleaning behavior increases because it is followed by a reward or reinforcer.

Unlike classical conditioning, in which the conditioned and unconditioned stimuli are presented regardless of what the learner does, operant conditioning requires action on the part of the learner. The boy in the above example will not get his snack unless he first cleans up his room. The term operant conditioning refers to the fact that the learner must operate, or perform a certain behavior, before receiving a reward or punishment.

Some of the earliest scientific research on operant conditioning was conducted by American psychologist Edward L. Thorndike at the end of the 19th century. Thorndike’s research subjects included cats, dogs, and chickens. To see how animals learn new behaviors, Thorndike used a small chamber that he called a puzzle box. He would place an animal in the puzzle box, and if it performed the correct response (such as pulling a rope, pressing a lever, or stepping on a platform), the door would swing open and the animal would be rewarded with some food located just outside the cage. The first time an animal entered the puzzle box, it usually took a long time to make the response required to open the door. Eventually, however, it would make the appropriate response by accident and receive its reward: escape and food. As Thorndike placed the same animal in the puzzle box again and again, it would make the correct response more and more quickly. Soon it would take the animal just a few seconds to earn its reward.

Based on these experiments, Thorndike developed a principle he called the law of effect. This law states that behaviors that are followed by pleasant consequences will be strengthened, and will be more likely to occur in the future. Conversely, behaviors that are followed by unpleasant consequences will be weakened, and will be less likely to be repeated in the future. Thorndike’s law of effect is another way of describing what modern psychologists now call operant conditioning.

American psychologist B. F. Skinner became one of the most famous psychologists in history for his pioneering research on operant conditioning. In fact, he coined the term operant conditioning. Beginning in the 1930s, Skinner spent several decades studying the behavior of animals - usually rats or pigeons - in chambers that became known as Skinner boxes. Like Thorndike’s puzzle box, the Skinner box was a barren chamber in which an animal could earn food by making simple responses, such as pressing a lever or a circular response key. A device attached to the box recorded the animal’s responses. The Skinner box differed from the puzzle box in three main ways: (1) upon making the desired response, the animal received food but did not escape from the chamber; (2) the box delivered only a small amount of food for each response, so that many reinforcers could be delivered in a single test session; and (3) the operant response required very little effort, so an animal could make hundreds or thousands of responses per hour. Because of these changes, Skinner could collect much more data, and he could observe how changing the pattern of food delivery affected the speed and pattern of an animal’s behavior.

Skinner became famous not just for his research with animals, but also for his controversial claim that the principles of learning he discovered using the Skinner box also applied to the behavior of people in everyday life. Skinner acknowledged that many factors influence human behavior, including heredity, basic types of learning such as classical conditioning, and complex learned behaviors such as language. However, he maintained that rewards and punishments control the great majority of human behaviors, and that the principles of operant conditioning can explain these behaviors.

In a career spanning more than 60 years, Skinner identified a number of basic principles of operant conditioning that explain how people learn new behaviors or change existing behaviors. The main principles are reinforcement, punishment, shaping, extinction, discrimination, and generalization.

In operant conditioning, reinforcement refers to any process that strengthens a particular behavior - that is, increases the chances that the behavior will occur again. There are two general categories of reinforcement, positive and negative. The experiments of Thorndike and Skinner illustrate positive reinforcement, a method of strengthening behavior by following it with a pleasant stimulus. Positive reinforcement is a powerful method for controlling the behavior of both animals and people. For people, positive reinforcers include basic items such as food, drink, sex, and physical comfort. Other positive reinforcers include material possessions, money, friendship, love, praise, attention, and success in one’s career.

Depending on the circumstances, positive reinforcement can strengthen either desirable or undesirable behaviors. Children may work hard at home or at school because of the praise they receive from parents and teachers for good performance. However, they may also disrupt a class, try dangerous stunts, or start smoking because these behaviors lead to attention and approval from their peers. One of the most common reinforcers of human behavior is money. Most adults spend many hours each week working at their jobs because of the paychecks they receive in return. For certain individuals, money can also reinforce undesirable behaviors, such as burglary, selling illegal drugs, and cheating on one’s taxes.

Negative reinforcement is a method of strengthening a behavior by following it with the removal or omission of an unpleasant stimulus. There are two types of negative reinforcement: escape and avoidance. In escape, performing a particular behavior leads to the removal of an unpleasant stimulus. For example, if a person with a headache tries a new pain reliever and the headache quickly disappears, this person will probably use the medication again the next time a headache occurs. In avoidance, people perform a behavior to avoid unpleasant consequences. For example, drivers may take side streets to avoid congested intersections, citizens may pay their taxes to avoid fines and penalties, and students may do their homework to avoid detention.

A reinforcement schedule is a rule that specifies the timing and frequency of reinforcers. In his early experiments on operant conditioning, Skinner rewarded animals with food every time they made the desired response - a schedule known as continuous reinforcement. Skinner soon tried rewarding only some instances of the desired response and not others - a schedule known as partial reinforcement. To his surprise, he found that animals showed entirely different behavior patterns.

Skinner and other psychologists found that partial reinforcement schedules are often more effective at strengthening behavior than continuous reinforcement schedules, for two reasons. First, they usually produce more responding, at a faster rate. Second, a behavior learned through a partial reinforcement schedule has greater resistance to extinction - if the rewards for the behavior are discontinued, the behavior will persist for a longer period of time before stopping. One reason extinction is slower after partial reinforcement is that the learner has become accustomed to making responses without receiving a reinforcer each time. There are four main types of partial reinforcement schedules: fixed-ratio, variable-ratio, fixed-interval, and variable-interval. Each produces a distinctly different pattern of behavior.

On a fixed-ratio schedule, individuals receive a reinforcer each time they make a fixed number of responses. For example, a factory worker may earn a certain amount of money for every 100 items assembled. This type of schedule usually produces a stop-and-go pattern of responding: The individual works steadily until receiving one reinforcer, then takes a break, then works steadily until receiving another reinforcer, and so on.

On a variable-ratio schedule, individuals must also make a number of responses before receiving a reinforcer, but the number is variable and unpredictable. Slot machines, roulette wheels, and other forms of gambling are examples of variable-ratio schedules. Behaviors reinforced on these schedules tend to occur at a rapid, steady rate, with few pauses. Thus, many people will drop coins into a slot machine over and over again on the chance of winning the jackpot, which serves as the reinforcer.

On a fixed-interval schedule, individuals receive reinforcement for their response only after a fixed amount of time elapses. For example, in a laboratory experiment with a fixed-interval one-minute schedule, at least one minute must elapse between the deliveries of the reinforcer. Any responses that occur before one minute has passed have no effect. On these schedules, animals usually do not respond at the beginning of the interval, but they respond faster and faster as the time for reinforcement approaches. Fixed-interval schedules rarely occur outside the laboratory, but one close approximation is the clock-watching behavior of students during a class. Students watch the clock only occasionally at the start of a class period, but they watch more and more as the end of the period gets nearer.

Variable-interval schedules also require the passage of time before providing reinforcement, but the amount of time is variable and unpredictable. Behavior on these schedules tends to be steady, but slower than on ratio schedules. For example, a person trying to call someone whose phone line is busy may redial every few minutes until the call gets through.

Whereas reinforcement strengthens behavior, punishment weakens it, reducing the chances that the behavior will occur again. As with reinforcement, there are two kinds of punishment, positive and negative. Positive punishment involves reducing a behavior by delivering an unpleasant stimulus if the behavior occurs. Parents use positive punishment when they spank, scold, or shout at children for bad behavior. Societies use positive punishment when they fine or imprison people who break the law. Negative punishment, also called omission, involves reducing a behavior by removing a pleasant stimulus if the behavior occurs. Parents’ tactics of grounding teenagers or taking away various privileges because of bad behavior are examples of negative punishment.

Considerable controversy exists about whether punishment is an effective way of reducing or eliminating unwanted behaviors. Careful laboratory experiments have shown that, when used properly, punishment can be a powerful and effective method for reducing behavior. Nevertheless, it has several disadvantages. When people are severely punished, they may become angry or aggressive, or they may have other negative emotional reactions. They may try to hide the evidence of their misbehavior or escape from the situation, as when a punished child runs away from home. In addition, punishment may eliminate desirable behaviors along with undesirable ones. For example, a child who is scolded for making an error in the classroom may not raise his or her hand again. For these and other reasons, many psychologists recommend that punishment be used to control behavior only when there is no realistic alternative.

Shaping is a reinforcement technique that is used to teach animals or people behaviors that they have never performed before. In this method, the teacher begins by reinforcing a response the learner can perform easily, and then gradually requires more and more difficult responses. For example, to teach a rat to press a lever that is over its head, the trainer can first reward any upward head movement, then an upward movement of at least one inch, then two inches, and so on, until the rat reaches the lever. Psychologists have used shaping to teach children with severe mental retardation to speak by first rewarding any sounds they make, and then gradually requiring sounds that more and more closely resemble the words of the teacher. Animal trainers at circuses and theme parks use shaping to teach elephants to stand on one leg, tigers to balance on a ball, dogs to do backward flips, and killer whales and dolphins to jump through hoops.

As in classical conditioning, responses learned in operant conditioning are not always permanent. In operant conditioning, extinction is the elimination of a learned behavior by discontinuing the reinforcer of that behavior. If a rat has learned to press a lever because it receives food for doing so, its lever-pressing will decrease and eventually disappear if food is no longer delivered. With people, withholding the reinforcer may eliminate some unwanted behaviors. For instance, parents often reinforce temper tantrums in young children by giving them attention. If parents simply ignore the child’s tantrums rather than reward them with attention, the number of tantrums should gradually decrease.

Generalization and discrimination occur in operant conditioning in much the same way that they do in classical conditioning. In generalization, people perform a behavior learned in one situation in other, similar situations. For example, a man who is rewarded with laughter when he tells certain jokes at a bar may tell the same jokes at restaurants, parties, or wedding receptions. Discrimination is learning that a behavior will be reinforced in one situation but not in another. The man may learn that telling his jokes in church or at a serious business meeting will not make people laugh. Discriminative stimuli signal that a behavior is likely to be reinforced. The man may learn to tell jokes only when he is at a loud, festive occasion (the discriminative stimulus). Learning when a behavior will and will not be reinforced is an important part of operant conditioning.

Operant conditioning techniques have practical applications in many areas of human life. Parents who understand the basic principles of operant conditioning can reinforce their children’s appropriate behaviors and punish inappropriate ones, and they can use generalization and discrimination techniques to teach which behaviors are appropriate in particular situations. In the classroom, many teachers reinforce good academic performance with small rewards or privileges. Companies have used lotteries to improve attendance, productivity, and job safety among their employees.

Psychologists known as behavior therapists use the learning principles of operant conditioning to treat children or adults with behavior problems or psychological disorders. Behavior therapists use shaping techniques to teach basic job skills to adults with mental retardation. Therapists use reinforcement techniques to teach self-care skills to people with severe mental illnesses, such as schizophrenia, and use punishment and extinction to reduce aggressive and antisocial behaviors by these individuals. Psychologists also use operant conditioning techniques to treat stuttering, sexual disorders, marital problems, drug addictions, impulsive spending, eating disorders, and many other behavioral problems.

Although classical and operant conditioning are important types of learning, people learn a large portion of what they know through observation. Learning by observation differs from classical and operant conditioning because it does not require direct personal experience with stimuli, reinforcers, or punishers. Learning by observation involves simply watching the behavior of another person, called a model, and later imitating the model’s behavior.

Both children and adults learn a great deal through observation and imitation. Young children learn language, social skills, habits, fears, and many other everyday behaviors by observing their parents and older children. Many people learn academic, athletic, and musical skills by observing and then imitating a teacher. According to Canadian-American psychologist Albert Bandura, a pioneer in the study of observational learning, this type of learning plays an important role in a child’s personality development. Bandura found evidence that children learn traits such as industriousness, honesty, self-control, aggressiveness, and impulsiveness in part by imitating parents, other family members, and friends.

Psychologists once thought that only human beings could learn by observation. They now know that many kinds of animals - including birds, cats, dogs, rodents, and primates - can learn by observing other members of their species. Young animals can learn food preferences, fears, and survival skills by observing their parents. Adult animals can learn new behaviors or solutions to simple problems by observing other animals.

In the early 1960s Bandura and other researchers conducted a classic set of experiments that demonstrated the power of observational learning. In one experiment, a preschool child worked on a drawing while a television set showed an adult behaving aggressively toward a large inflated Bobo doll (a clown doll that bounces back up when knocked down). The adult pummeled the doll with a mallet, kicked it, flung it in the air, sat on it, and beat it in the face, while yelling such remarks as ‘Sock him in the nose ... kick him ... Pow!’ The child was then left in another room filled with interesting toys, including a Bobo doll. The experimenters observed the child through one-way glass. Compared with children who witnessed a nonviolent adult model and those not exposed to any model, children who witnessed the aggressive display were much more likely to show aggressive behaviors toward the Bobo doll, and they often imitated the model's exact behaviors and hostile words.

In a variant of the original experiment, Bandura and colleagues examined the effect of observed consequences on learning. They showed four-year-old children one of three films of an adult acting violently toward a Bobo doll. In one version of the film, the adult was praised for his or her aggressive behavior and given soda and candies. In another version, the adult was scolded, spanked, and warned not to behave that way again. In a third version, the adult was neither rewarded nor punished. After viewing the film, each child was left alone in a room that contained a Bobo doll and other toys. Many children imitated the adult’s violent behaviors, but children who saw the adult punished imitated the behaviors less often than children who saw the other films. However, when the researchers promised the children a reward if they could copy the adult’s behavior, all three groups of children showed large and equal amounts of violent behavior toward the Bobo doll.

Bandura concluded that even those children who did not see the adult model receive a reward had learned through observation, but these children (especially those who saw the model being punished) would not display what they had learned until they expected a reward for doing so. The term latent learning describes cases in which an individual learns a new behavior but does not perform this behavior until there is the possibility of obtaining a reward.

According to Bandura’s influential theory of imitation, also called social learning theory, four factors are necessary for a person to learn through observation and then imitate a behavior: attention, retention, reproduction, and motivation. First, the learner must pay attention to the crucial details of the model’s behavior. A young girl watching her father bake a cake will not be able to imitate this behavior successfully unless she pays attention to many important details - ingredients, quantities, oven temperature, baking time, and so on. The second factor is retention - the learner must be able to retain all of this information in memory until it is time to use it. If the person forgets important details, he or she will not be able to successfully imitate the behavior. Third, the learner must have the physical skills and coordination needed for reproduction of the behavior. The young girl must have enough strength and dexterity to mix the ingredients, pour the batter, and so on, in order to bake a cake on her own. Finally, the learner must have the motivation to imitate the model. That is, learners are more likely to imitate a behavior if they expect it to lead to some type of reward or reinforcement. If learners expect that imitating the behavior will not lead to reward or might lead to punishment, they are less likely to imitate the behavior.

An alternative to Bandura’s theory is the theory of generalized imitation. This theory states that people will imitate the behaviors of others if the situation is similar to cases in which their imitation was reinforced in the past. For example, when a young child imitates the behavior of a parent or an older sibling, this imitation is often reinforced with smiles, praise, or other forms of approval. Similarly, when children imitate the behaviors of friends, sports stars, or celebrities, this imitation may be reinforced - by the approval of their peers, if not their parents. Through the process of generalization, the child will start to imitate these models in other situations. Whereas Bandura’s theory emphasizes the imitator’s thought processes and motivation, the theory of generalized imitation relies on two basic principles of operant conditioning - reinforcement and generalization.

Many factors determine whether or not a person will imitate a model. As already shown, children are more likely to imitate a model when the model’s behavior has been reinforced than when it has been punished. More important, however, are the expected consequences to the learner. A person will imitate a punished behavior if he or she thinks that imitation will produce some type of reinforcement.

The characteristics of the model also influence the likelihood of imitation. Studies have shown that children are more likely to imitate adults who are pleasant and attentive to them than those who are not. In addition, children more often imitate adults who have substantial influence over their lives, such as parents and teachers, and those who seem admired and successful, such as celebrities and athletes. Both children and adults are more likely to imitate models who are similar to them in sex, age, and background. For this reason, when behavior therapists use modeling to teach new behaviors or skills, they try to use models who are similar to the learners.

In modern society, television provides many powerful models for children and abundant opportunities for observational learning. Many parents are concerned about the behaviors their children can observe on TV. Many television programs include depictions of sex, violence, drug and alcohol use, and vulgar language—behaviors that most parents do not want their children to imitate. Studies have found that by early adolescence, the average American child has watched thousands of dramatized murders and countless other acts of violence on television.

For many years, psychologists have debated the question of whether watching violence on television has detrimental effects on children. A number of experiments, both inside and outside the laboratory, have found evidence that viewing television violence is related to increased aggression in children. Some psychologists have criticized this research, maintaining that the evidence is inconclusive. Most psychologists now believe, however, that watching violence on television can sometimes lead to increased aggressiveness in children.

The effects of television on children’s behaviors are not all negative. Educational programs such as ‘Sesame Street’ give children the opportunity to learn letters of the alphabet, words, numbers, and social skills. Such programs also show people who solve problems and resolve differences through cooperation and discussion rather than through aggression and hostility.

Although psychologists who study learning have focused the most attention on classical conditioning, operant conditioning, and observational learning, they have also studied other types of learning, including language learning, learning by listening and reading, concept formation, and the learning of motor skills. These types of learning still involve the principles of conditioning and observational learning, but they are worth considering separately because of their importance in everyday life.

Learning to speak and understand a language is one of the most complex types of learning, yet all normal children master this skill in the first few years of their lives. The familiar principles of shaping, reinforcement, generalization, discrimination, and observational learning all play a role in a child’s language learning. However, in the 1950s American linguist Noam Chomsky proposed that these basic principles of learning cannot explain how children learn to speak so well and so rapidly. Chomsky theorized that humans have a unique and inborn capacity to extract word meanings, sentence structure, and grammatical rules from the complex stream of sounds they hear. Although Chomsky’s theory is controversial, it has received some support from scientific evidence that specific parts of the human brain are essential for language. When these areas of the brain are damaged, a person loses the ability to speak or comprehend language.

Because people communicate through language, they can learn vast amounts of information by listening to others and by reading. Learning through the spoken or written word is similar to observational learning, because it allows people to learn not simply from their own experiences, but also from the experiences of others. For example, by listening to a parent or instructor, children can learn to avoid busy streets and to cross the street at crosswalks without first experiencing any positive or negative consequences. By listening to and observing others, children can learn skills such as tying a shoelace, swinging a baseball bat, or paddling a canoe. Listening to the teacher and reading are essential parts of most classroom learning.

Much of what we read and hear is quickly forgotten. Learning new information requires that we retain the information in memory and later be able to retrieve it. The process of forming long-term memories is complex, depending on the nature of the original information and on how much a person rehearses or reviews the information.

Concept formation occurs when people learn to classify different objects as members of a single category. For example, a child may know that a mouse, a dog, and a whale are all animals, despite their great differences in size and appearance. Concept formation is important because it helps us identify stimuli we have never encountered before. Thus, a child who sees an antelope for the first time will probably know that it is an animal. Even young children learn a large number of such concepts, including food, games, flowers, cars, and houses. Although language plays an important role in how people learn concepts, the ability to speak is not essential for concept formation. Experiments with birds and chimpanzees have shown that these animals can form concepts.

A motor skill is the ability to perform a coordinated set of physical movements. Examples of motor skills include handwriting, typing, playing a musical instrument, driving a car, and most sports skills. Learning a motor skill is usually a gradual process that requires practice and feedback. Learners need feedback from a teacher or coach to tell them which movements they are performing well and which need improvement. While learning a new motor skill, the learner should direct full attention to the task. Some motor skills, if learned well, can be performed automatically. For example, a skilled typist can type quickly and accurately without thinking about every keystroke.

Early in the 20th century, some psychologists believed that it might be possible to develop a single, general theory that could explain all instances of learning. For instance, the so-called one-factor theory proposed that reinforcement was the single factor that controlled whether learning would or would not occur. However, latent learning and similar phenomena contradicted this theory by showing that learning could occur without reinforcement.

In recent years, psychologists have abandoned attempts to develop a single, all-purpose theory of learning. Instead, they have developed smaller and more specialized theories. Some theories focus on classical conditioning, some on operant conditioning, some on observational learning, and some on other specific forms of learning. The major debates in learning theory concern which theories best describe these more specific areas of learning.

In studying learning, psychologists follow two main theoretical approaches: the behavioral approach and the cognitive approach. Recall that learning is acquiring knowledge or developing the ability to perform new behaviors. Behavioral psychologists focus on the change that takes place in an individual’s behavior. Cognitive psychologists prefer to study the change in an individual’s knowledge, emphasizing mental processes such as thinking, memory, and problem solving. Many psychologists combine elements of both approaches to explain learning.

The term behaviorism was first used by John B. Watson in the early 1910s. Later, B. F. Skinner expanded and popularized the behavioral approach. The essential characteristic of the behavioral approach to learning is that a person’s behavior is explained by events in the environment, not by thoughts, feelings, or other events that take place inside the person. Strict behaviorists believe that it is dangerous and unscientific to treat thoughts and feelings as the causes of a person’s behavior, because no one can see another person’s thoughts or feelings. Behaviorists maintain that human learning can be explained by examining the stimuli, reinforcers, and punishments that a person experiences. According to behaviorists, reinforcement and punishment, along with other basic principles such as generalization and discrimination, can explain even the most advanced types of human learning, such as learning to read or to solve complex problems.
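
As a rough illustration only, the behaviorist claim can be caricatured in a few lines of Python: a response's strength is adjusted by the consequences that follow it. The numerical update rule, the step size, and the responses named here are invented for the sketch and are not taken from any published behaviorist model.

    # Illustrative caricature of the behaviorist claim: a response's strength
    # is raised by reinforcement and lowered by punishment. The update rule
    # and values are assumptions of this sketch, not a published model.

    response_strength = {"press lever": 0.2, "groom": 0.2}

    def experience(response, consequence, step=0.1):
        # consequence: +1 for reinforcement, -1 for punishment, 0 for neither
        s = response_strength[response] + step * consequence
        response_strength[response] = min(max(s, 0.0), 1.0)   # keep in [0, 1]

    for _ in range(5):
        experience("press lever", +1)   # the lever press is reinforced repeatedly
    experience("groom", -1)             # grooming is punished once

    print({k: round(v, 2) for k, v in response_strength.items()})
    # lever pressing strengthened, grooming weakened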

Unlike behaviorists, cognitive psychologists believe that it is essential to study an individual’s thoughts and expectations in order to understand the learning process. In 1930 American psychologist Edward C. Tolman investigated cognitive processes in learning by studying how rats learn their way through a maze. He found evidence that rats formed a ‘cognitive map’ (a mental map) of the maze early in the experiment, but did not display their learning until they received reinforcement for completing the maze - a phenomenon he termed latent learning. Tolman’s experiment suggested that learning is more than just the strengthening of responses through reinforcement.

Modern cognitive psychologists believe that learning involves complex mental processes, including memory, attention, language, concept formation, and problem solving. They study how people process information and form mental representations of people, objects, and events.

During the first half of the 20th century, behaviorism was the dominant theoretical approach in the field of learning. Since the 1950s, however, cognitive psychology has steadily gained in popularity, and now more psychologists favor a cognitive approach than a strict behavioral approach. Cognitive psychologists and behaviorists will continue to debate the merits of their different positions, but in many ways these two approaches have different strengths that complement each other. With its emphasis on memory and complex thought processes, the cognitive approach appears well suited for investigating the most sophisticated types of human learning, such as reasoning, problem solving, and creativity. The behavioral approach, which emphasizes basic principles of conditioning, reinforcement, and punishment, can provide explanations of why people behave the way they do and how they choose between different possible courses of action.

A variety of factors determine an individual’s ability to learn and the speed of learning. Four important factors are the individual’s age, motivation, prior experience, and intelligence. In addition, certain developmental and learning disorders can impair a person’s ability to learn.

Animals and people of all ages are capable of the most common types of learning - habituation, classical conditioning, and operant conditioning. As children grow, they become capable of learning more and more sophisticated types of information. Swiss developmental psychologist Jean Piaget theorized that children go through four different stages of cognitive development. In the sensorimotor stage (from birth to about 2 years of age), infants use their senses to learn about their bodies and about objects in their immediate environments. In the preoperational stage (about 2 to 7 years of age), children can think about objects and events that are not present, but their thinking is primitive and self-centered, and they have difficulty seeing the world from another person’s point of view. In the concrete operational stage (about 7 to 11 years of age), children learn general rules about the physical world, such as the fact that the amount of water remains the same if it is poured between containers of different shapes. Finally, in the formal operational stage (ages 11 and up), children become capable of logical and abstract thinking.

Adults continue to learn new knowledge and skills throughout their lives. For example, most adults can successfully learn a foreign language, although children usually can achieve fluency more easily. If older adults remain healthy, their learning ability generally does not decline with age. Age-related illnesses that involve a deterioration of mental functioning, such as Alzheimer’s disease, can severely reduce a person’s ability to learn.

Learning is usually most efficient and rapid when the learner is motivated and attentive. Behavioral studies with both animals and people have shown that one effective way to maintain the learner’s motivation is to deliver strong and immediate reinforcers for correct responses. However, other research has indicated that very high levels of motivation are not ideal. Psychologists believe an intermediate level of motivation is best for many learning tasks. If a person’s level of motivation is too low, he or she may give up quickly. At the other extreme, a very high level of motivation may cause such stress and distraction that the learner cannot focus on the task.

How well a person learns a new task may depend heavily on the person’s previous experience with similar tasks. Just as a response can transfer from one stimulus to another through the process of generalization, people can learn new behaviors more quickly if the behaviors are similar to those they can already perform. This phenomenon is called positive transfer. Someone who has learned to drive one car, for example, will be able to drive other cars, even though the feel and handling of the cars will differ. In cases of negative transfer, however, a person’s prior experience can interfere with learning something new. For instance, after memorizing one shopping list, it may be more difficult to memorize a different shopping list.

Psychologists have long known that people differ individually in their level of intelligence, and thus in their ability to learn and understand. Scientists have engaged in heated debates about the definition and nature of intelligence. In the 1980s American psychologist Howard Gardner proposed that there are many different forms of intelligence, including linguistic, logical-mathematical, musical, and interpersonal intelligence. A person may easily learn skills in some categories but have difficulty learning in others.

A variety of disorders can interfere with a person’s ability to learn new skills and behaviors. Learning and developmental disorders usually first appear in childhood and often persist into adulthood. Children with attention-deficit hyperactivity disorder (ADHD) may not be able to sit still long enough to focus on specific tasks. Children with autism typically have difficulty speaking, understanding language, and interacting with people. People with mental retardation, characterized primarily by very low intelligence, may have trouble mastering basic living tasks and academic skills. Children with learning or developmental disorders often receive special education tailored to their individual needs and abilities.

Metaphysics represents the branch of philosophy concerned with the nature of ultimate reality. Metaphysics is customarily divided into ontology, which deals with the question of how many fundamentally distinct sorts of entities compose the universe, and metaphysics proper, which is concerned with describing the most general traits of reality. These general traits together define reality and would presumably characterize any universe whatever. Because these traits are not peculiar to this universe, but are common to all possible universes, metaphysics may be conducted at the highest level of abstraction. Ontology, by contrast, because it investigates the ultimate divisions within this universe, is more closely related to the physical world of human experience.

The term metaphysics is believed to have originated in Rome about 70 BC, with the Greek Peripatetic philosopher Andronicus of Rhodes (flourished 1st century BC) in his edition of the works of Aristotle. In the arrangement of Aristotle's works by Andronicus, the treatise originally called First Philosophy, or Theology, followed the treatise Physics. Hence, the First Philosophy came to be known as meta (ta) physica, or ‘following (the) Physics,’ later shortened to Metaphysics. The word took on the connotation, in popular usage, of matters transcending material reality. In the philosophic sense, however, particularly as opposed to the use of the word by occultists, metaphysics applies to all reality and is distinguished from other forms of inquiry by its generality.

The subjects treated in Aristotle's Metaphysics (substance, causality, the nature of being, and the existence of God) fixed the content of metaphysical speculation for centuries. Among the medieval Scholastic philosophers, metaphysics was known as the ‘transphysical science’ on the assumption that, by means of it, the scholar could make the transition philosophically from the physical world to a world beyond sense perception. The 13th-century Scholastic philosopher and theologian St. Thomas Aquinas declared that the cognition of God, through a causal study of finite sensible beings, was the aim of metaphysics. With the rise of scientific study in the 16th century the reconciliation of science and faith in God became an increasingly important problem.

Before the time of the German philosopher Immanuel Kant, metaphysics was characterized by a tendency to construct theories on the basis of a priori knowledge, that is, knowledge derived from reason alone, in contradistinction to a posteriori knowledge, which is gained by reference to the facts of experience. From a priori knowledge were deduced general propositions that were held to be true of all things. The method of inquiry based on a priori principles is known as rationalistic. Rationalistic metaphysics may be subdivided into monism, which holds that the universe is made up of a single fundamental substance; dualism, the belief in two such substances; and pluralism, which proposes the existence of many fundamental substances.

The monists, agreeing that only one basic substance exists, differ in their descriptions of its principal characteristics. Thus, in idealistic monism the substance is believed to be purely mental; in materialistic monism it is held to be purely physical, and in neutral monism it is considered neither exclusively mental nor solely physical. The idealistic position was held by the Irish philosopher George Berkeley, the materialistic by the English philosopher Thomas Hobbes, and the neutral by the Dutch philosopher Baruch Spinoza. The latter expounded a pantheistic view of reality in which the universe is identical with God and everything contains God's substance. The most famous exponent of dualism was the French philosopher René Descartes, who maintained that body and mind are radically different entities and that they are the only fundamental substances in the universe. Dualism, however, does not show how these basic entities are connected.

In the work of the German philosopher Gottfried Wilhelm Leibniz, the universe is held to consist of an infinite number of distinct substances, or monads. This view is pluralistic in the sense that it proposes the existence of many separate entities, and it is monistic in its assertion that each monad reflects within itself the entire universe.

Other philosophers have held that knowledge of reality is not derived from a priori principles, but is obtained only from experience. This type of metaphysics is called empiricism. Still another school of philosophy has maintained that, although an ultimate reality does exist, it is altogether inaccessible to human knowledge, which is necessarily subjective because it is confined to states of mind. Knowledge is therefore not a representation of external reality, but merely a reflection of human perceptions. This view is known as skepticism or agnosticism with respect to the soul and the reality of God.

Several major viewpoints were combined in the work of Kant, who developed a distinctive critical philosophy called transcendentalism. His philosophy is agnostic in that it denies the possibility of a strict knowledge of ultimate reality; it is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the a priori character of the structural principles of this empirical knowledge.

These principles are held to be necessary and universal in their application to experience, for in Kant's view the mind furnishes the archetypal forms and categories (space, time, causality, substance, and relation) to its sensations, and these categories are logically anterior to experience, although manifested only in experience. Their logical anteriority to experience makes these categories or structural principles transcendental; they transcend all experience, both actual and possible. Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only insofar as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate beyond experience strictly by knowledge to the realm of ultimate reality constitutes the critical feature of his philosophy, giving the key word to the titles of his three leading treatises, Critique of Pure Reason, Critique of Practical Reason, and Critique of Judgment. In the system propounded in these works, Kant sought also to reconcile science and religion in a world of two levels, comprising noumena, objects conceived by reason although not perceived by the senses, and phenomena, things as they appear to the senses and are accessible to material study. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge. With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.

Some of Kant's most distinguished followers, notably Johann Gottlieb Fichte, Friedrich Schelling, Georg Wilhelm Friedrich Hegel, and Friedrich Schleiermacher, negated Kant's criticism in their elaborations of his transcendental metaphysics by denying the Kantian conception of the thing-in-itself. They thus developed an absolute idealism in opposition to Kant's critical transcendentalism.

Since the formation of the hypothesis of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant's contention that he had fixed definitely the limits of philosophical speculation. Notable among these later metaphysical theories are radical empiricism, or pragmatism, a characteristically American form of metaphysics expounded by Charles Sanders Peirce, developed by William James, and adapted as instrumentalism by John Dewey; voluntarism, the foremost exponents of which are the German philosopher Arthur Schopenhauer and the American philosopher Josiah Royce; phenomenalism, as it is exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent evolution, or creative evolution, originated by the French philosopher Henri Bergson; and the philosophy of the organism, elaborated by the British mathematician and philosopher Alfred North Whitehead. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief; according to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. In the theory of voluntarism the will is postulated as the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analyzed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analyzed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined. The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and creativity.

In the 20th century the validity of metaphysical thinking has been disputed by the logical positivists and by the so-called dialectical materialism of the Marxists. The basic principle maintained by the logical positivists is the verifiability theory of meaning. According to this theory a sentence has factual meaning only if it meets the test of observation. Logical positivists argue that metaphysical expressions such as ‘Nothing exists except material particles’ and ‘Everything is part of one all-encompassing spirit’ cannot be tested empirically. Therefore, according to the verifiability theory of meaning, these expressions have no factual cognitive meaning, although they can have an emotive meaning relevant to human hopes and feelings.

The dialectical materialists assert that the mind is conditioned by and reflects material reality. Therefore, speculations that conceive of constructs of the mind as having any other than material reality are themselves unreal and can result only in delusion. To these assertions metaphysicians reply by denying the adequacy of the verifiability theory of meaning and of material perception as the standard of reality. Both logical positivism and dialectical materialism, they argue, conceal metaphysical assumptions, for example, that everything is observable or at least connected with something observable and that the mind has no distinctive life of its own. In the philosophical movement known as existentialism, thinkers have contended that the questions of the nature of being and of the individual's relationship to it are extremely important and meaningful in terms of human life. The investigation of these questions is therefore considered valid whether or not its results can be verified objectively.

Since the 1950s the problems of systematic analytical metaphysics have been studied in Britain by Stuart Newton Hampshire and Peter Frederick Strawson, the former concerned, in the manner of Spinoza, with the relationship between thought and action, and the latter, in the manner of Kant, with describing the major categories of experience as they are embedded in language. In the U.S. metaphysics has been pursued much in the spirit of positivism by Wilfrid Stalker Sellars and Willard Van Orman Quine. Sellars has sought to express metaphysical questions in linguistic terms, and Quine has attempted to determine whether the structure of language commits the philosopher to asserting the existence of any entities whatever and, if so, what kind. In these new formulations the issues of metaphysics and ontology remain vital.

Cognition is the act or process of knowing. Cognition includes attention, perception, memory, reasoning, judgment, imagining, thinking, and speech. Attempts to explain the way in which cognition works are as old as philosophy itself; the term, in fact, comes from the writings of Plato and Aristotle. With the advent of psychology as a discipline separate from philosophy, cognition has been investigated from several viewpoints.

An entire field - cognitive psychology - has arisen since the 1950s. It studies cognition mainly from the standpoint of information handling. Parallels are stressed between the functions of the human brain and computer concepts such as the coding, storing, retrieving, and buffering of information. The actual physiology of cognition is of little interest to cognitive psychologists, but their theoretical models of cognition have deepened understanding of memory, psycholinguistics, and the development of intelligence.
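
As a toy illustration of this computer analogy (not a model proposed by any particular cognitive psychologist), incoming items can be pictured as coded into a limited-capacity buffer, stored only if rehearsed, and retrieved later on demand; the buffer size and rehearsal rule below are assumptions made purely for the sketch.

    # Toy sketch of the information-processing analogy: items are coded,
    # held in a limited-capacity buffer, stored if rehearsed, and retrieved.
    # The capacity and rehearsal rule are illustrative assumptions.

    from collections import deque

    BUFFER_CAPACITY = 7                     # assumed short-term buffer size
    buffer = deque(maxlen=BUFFER_CAPACITY)
    long_term_store = {}

    def encode(item):
        buffer.append(item.lower())         # coding: normalize the input

    def rehearse(item):
        if item.lower() in buffer:
            long_term_store[item.lower()] = True    # storing

    def retrieve(item):
        return long_term_store.get(item.lower(), False)   # retrieving

    for word in ["Plato", "Kant", "Hegel", "Berkeley"]:
        encode(word)
    rehearse("Kant")

    print(retrieve("Kant"))    # True: rehearsed, so stored and retrievable
    print(retrieve("Hegel"))   # False: buffered but never rehearsed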

Social psychologists since the mid-1960s have written extensively on the topic of cognitive consistency - that is, the tendency of a person's beliefs and actions to be logically consistent with one another. When cognitive dissonance, or the lack of such consistency, arises, the person unconsciously seeks to restore consistency by changing his or her behavior, beliefs, or perceptions. The manner in which a particular individual classifies cognitions in order to impose order has been termed cognitive style.

Process philosophy is a speculative world view which asserts that basic reality is in a constant process of flux and change; indeed, reality is identified with pure process. Concepts such as creativity, freedom, novelty, emergence, and growth are fundamental explanatory categories for process philosophy. This metaphysical perspective is to be contrasted with a philosophy of substance, the view that a fixed and permanent reality underlies the changing or fluctuating world of ordinary experience. Whereas substance philosophy emphasizes static being, process philosophy emphasizes dynamic becoming.

Although process philosophy is as old as the 6th-century BC Greek philosopher Heraclitus, renewed interest in it was stimulated in the 19th century by the theory of evolution. Key figures in the development of modern process philosophy were the British philosophers Herbert Spencer, Samuel Alexander, and Alfred North Whitehead, the American philosophers Charles S. Peirce and William James, and the French philosophers Henri Bergson and Pierre Teilhard de Chardin. Whitehead's Process and Reality: An Essay in Cosmology (1929) is generally considered the most important systematic expression of process philosophy.

Contemporary theology has been strongly influenced by process philosophy. The American theologian Charles Hartshorne, for instance, rather than interpreting God as an unchanging absolute, emphasizes God's sensitive and caring relationship with the world. A personal God enters into relationships in such a way that he is affected by the relationships, and to be affected by relationships is to change. So God too is in the process of growth and development. Important contributions to process theology have also been made by such theologians as William Temple, Daniel Day Williams, Schubert Ogden, and John Cobb, Jr.

Realism, in philosophy, is a term used for two distinct doctrines of epistemology. In modern philosophy, it is applied to the doctrine that ordinary objects of sense perception, such as tables and chairs, have an existence independent of their being perceived. In this sense, it is contrary to the idealism of philosophers such as George Berkeley or Immanuel Kant. In its extreme form, sometimes called naive realism, the things perceived by the senses are believed to be exactly what they appear to be. In more sophisticated versions, sometimes referred to as critical realism, some explanation is given of the relationship between the object and the observer that accounts for the possibility of illusion, hallucination, and other perceptual errors.

In medieval philosophy, the term realism referred to a position that regarded Platonic Forms, or universals, as real. That position is now usually called Platonic realism. In Plato's philosophy, a common noun, such as bed, refers to the ideal nature of the object, which is conveyed by its definition, and this ideal nature has metaphysical existence independent of the particular objects of that type. Thus, circularity exists independent of particular circles; justice, independent of particular just individuals or just states; and ‘bedness,’ independent of particular beds. In the Middle Ages, this position was defended against nominalism, which denied the existence of such universals. Nominalists asserted that the many objects called by one name shared nothing but the name. Compromises between these two positions included moderate realism, which claimed that the universal existed in the many objects of the same type but not independent of them, and conceptualism, which held that the universal might exist independent of the many objects of that particular type, but only as an idea in the mind, not as a self-subsisting metaphysical entity.

Idealism, in philosophy, is a theory of reality and of knowledge that attributes to consciousness, or the immaterial mind, a primary role in the constitution of the world. More narrowly, within metaphysics, idealism is the view that all physical objects are mind-dependent and can have no existence apart from a mind that is conscious of them. This view is contrasted with materialism, which maintains that consciousness itself is reducible to purely physical elements and processes - thus, according to the materialistic view, the world is entirely mind-independent, composed only of physical objects and physical interactions. In epistemology, idealism is opposed to realism, the view that mind-independent physical objects exist that can be known through the senses. Metaphysical realism has traditionally led to epistemological skepticism, the doctrine that knowledge of reality is impossible, and has thereby provided an important motivation for theories of idealism, which contend that reality is mind-dependent and that true knowledge of reality is gained by relying upon a spiritual or conscious source.

In the 5th and 4th centuries BC, Plato postulated the existence of a realm of Ideas that the varied objects of common experience imperfectly reflect. He maintained that these ideal Forms are not only more clearly intelligible but also more real than the transient and essentially illusory objects themselves.

Eighteenth-century Irish philosopher George Berkeley speculated that all aspects of everything of which one is conscious are actually reducible to the ideas present in the mind. The observer does not conjure external objects into existence, however; the true ideas of them are caused in the human mind directly by God. Eighteenth-century German philosopher Immanuel Kant greatly refined idealism through his critical inquiry into what he believed to be the limits of possible knowledge. Kant held that all that can be known of things is the way in which they appear in experience; there is no way of knowing what they are substantially in themselves. He also held, however, that the fundamental principles of all science are essentially grounded in the constitution of the mind rather than being derived from the external world.

Nineteenth-century German philosopher Georg Wilhelm Friedrich Hegel disagreed with Kant's theory concerning the inescapable human ignorance of what things are in themselves, instead arguing for the ultimate intelligibility of all existence. Hegel also maintained that the highest achievements of the human spirit (culture, science, religion, and the state) are not the result of naturally determined processes in the mind, but are conceived and sustained by the dialectical activity of free, reflective intellect. Further strains of idealistic thought can be found in the works of 19th-century Germans Johann Gottlieb Fichte and F. W. J. Schelling, 19th-century Englishman F. H. Bradley, 19th-century Americans Charles Sanders Peirce and Josiah Royce, and 20th-century Italian Benedetto Croce.

With respect to states of consciousness, no simple, agreed-upon definition of consciousness exists. Attempted definitions tend to be tautological (for example, consciousness defined as awareness) or merely descriptive (for example, consciousness described as sensations, thoughts, or feelings). Despite this problem of definition, the subject of consciousness has had a remarkable history. At one time the primary subject matter of psychology, consciousness as an area of study suffered an almost total demise, later reemerging to become a topic of current interest.

Most of the philosophical discussions of consciousness arose from the mind-body issues posed by the French philosopher and mathematician René Descartes in the 17th century. Descartes asked: Is the mind, or consciousness, independent of matter? Is consciousness extended (physical) or unextended (nonphysical)? Is consciousness determinative, or is it determined? English philosophers such as John Locke equated consciousness with physical sensations and the information they provide, whereas European philosophers such as Gottfried Wilhelm Leibniz and Immanuel Kant gave a more central and active role to consciousness.

The philosopher who most directly influenced subsequent exploration of the subject of consciousness was the 19th-century German educator Johann Friedrich Herbart, who wrote that ideas had quality and intensity and that they may inhibit or facilitate one another. Thus, ideas may pass from ‘states of reality’ (consciousness) to ‘states of tendency’ (unconsciousness), with the dividing line between the two states being described as the threshold of consciousness. This formulation of Herbart clearly presages the development, by the German psychologist and physiologist Gustav Theodor Fechner, of the psychophysical measurement of sensation thresholds, and the later development by Sigmund Freud of the concept of the unconscious.

The experimental analysis of consciousness dates from 1879, when the German psychologist Wilhelm Max Wundt started his research laboratory. For Wundt, the task of psychology was the study of the structure of consciousness, which extended well beyond sensations and included feelings, images, memory, attention, duration, and movement. Because early interest focused on the content and dynamics of consciousness, it is not surprising that the central methodology of such studies was introspection; that is, subjects reported on the mental contents of their own consciousness. This introspective approach was developed most fully by the American psychologist Edward Bradford Titchener at Cornell University. Setting his task as that of describing the structure of the mind, Titchener attempted to detail, from introspective self-reports, the dimensions of the elements of consciousness. For example, taste was ‘dimensionalized’ into four basic categories: sweet, sour, salt, and bitter. This approach was known as structuralism.

By the 1920s, however, a remarkable revolution had occurred in psychology that was to essentially remove considerations of consciousness from psychological research for some 50 years: Behaviorism captured the field of psychology. The main initiator of this movement was the American psychologist John Broadus Watson. In a 1913 article, Watson stated, ‘I believe that we can write a psychology and never use the terms consciousness, mental states, mind . . . imagery and the like.’ Psychologists then turned almost exclusively to behavior, as described in terms of stimulus and response, and consciousness was totally bypassed as a subject. A survey of eight leading introductory psychology texts published between 1930 and the 1950s found no mention of the topic of consciousness in five texts, and in two it was treated as a historical curiosity.

Beginning in the late 1950s, however, interest in the subject of consciousness returned, specifically in those subjects and techniques relating to altered states of consciousness: sleep and dreams, meditation, biofeedback, hypnosis, and drug-induced states. Much of the surge in sleep and dream research was directly fueled by a discovery relevant to the nature of consciousness. A physiological indicator of the dream state was found: At roughly 90-minute intervals, the eyes of sleepers were observed to move rapidly, and at the same time the sleepers' brain waves would show a pattern resembling the waking state. When people were awakened during these periods of rapid eye movement, they almost always reported dreams, whereas if awakened at other times they did not. This and other research clearly indicated that sleep, once considered a passive state, was instead an active state of consciousness.

During the 1960s, an increased search for ‘higher levels’ of consciousness through meditation resulted in a growing interest in the practices of Zen Buddhism and Yoga from Eastern cultures. A full flowering of this movement in the United States was seen in the development of training programs, such as Transcendental Meditation, that were self-directed procedures of physical relaxation and focused attention. Biofeedback techniques also were developed to bring body systems involving factors such as blood pressure or temperature under voluntary control by providing feedback from the body, so that subjects could learn to control their responses. For example, researchers found that persons could control their brain-wave patterns to some extent, particularly the so-called alpha rhythms generally associated with a relaxed, meditative state. This finding was especially relevant to those interested in consciousness and meditation, and a number of ‘alpha training’ programs emerged.

Another subject that led to increased interest in altered states of consciousness was hypnosis, which involves a transfer of conscious control from the subject to another person. Hypnotism has had a long and intricate history in medicine and folklore and has been intensively studied by psychologists. Much has become known about the hypnotic state, relative to individual suggestibility and personality traits; the subject has now largely been demythologized, and the limitations of the hypnotic state are fairly well known. Despite the increasing use of hypnosis, however, much remains to be learned about this unusual state of focused attention.

Finally, many people in the 1960s experimented with the psychoactive drugs known as hallucinogens, which produce disorders of consciousness. The most prominent of these drugs are lysergic acid diethylamide (LSD), mescaline, and psilocybin; the latter two have long been associated with religious ceremonies in various cultures. LSD, because of its radical thought-modifying properties, was initially explored for its so-called mind-expanding potential and for its psychotomimetic effects (imitating psychoses). Little positive use, however, has been found for these drugs, and their use is highly restricted.

As the concept of a direct, simple linkage between environment and behavior became unsatisfactory in recent decades, the interest in altered states of consciousness may be taken as a visible sign of renewed interest in the topic of consciousness. That persons are active and intervening participants in their behavior has become increasingly clear. Environments, rewards, and punishments are not simply defined by their physical character. Memories are organized, not simply stored. An entirely new area called cognitive psychology has emerged that centers on these concerns. In the study of children, increased attention is being paid to how they understand, or perceive, the world at different ages. In the field of animal behavior, researchers increasingly emphasize the inherent characteristics resulting from the way a species has been shaped to respond adaptively to the environment. Humanistic psychologists, with a concern for self-actualization and growth, have emerged after a long period of silence. Throughout the development of clinical and industrial psychology, the conscious states of persons in terms of their current feelings and thoughts were of obvious importance. The role of consciousness, however, was often de-emphasized in favor of unconscious needs and motivations. Trends can be seen, however, toward a new emphasis on the nature of states of consciousness.

Semantics (Greek semantikos, ‘significant’) is the study of the meaning of linguistic signs - that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as ‘What is the meaning of (the word) X?’ They do this by studying what signs are, as well as how signs possess significance - that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs - what they stand for - with the process of assigning those meanings.

Grammar is the branch of linguistics dealing with the form and structure of words (morphology) and their interrelation in sentences (syntax). The study of grammar reveals how language works.

Most people first encounter grammar in connection with the study of their own or of a second language in school. This kind of grammar is called normative, or prescriptive, because it defines the role of the various parts of speech and purports to tell what is the norm, or rule, of ‘correct’ usage. Prescriptive grammars state how words and sentences are to be put together in a language so that the speaker will be perceived as having good grammar. When people are said to have good or bad grammar, the inference is that they obey or ignore the rules of accepted usage associated with the language they speak.

Language-specific prescriptive grammar is only one way to look at word and sentence formation in language. Other grammarians are primarily interested in the changes in word and sentence construction in a language over the years - for example, how Old English, Middle English, and Modern English differ from one another; this approach is known as historical grammar. Some grammarians seek to establish the differences or similarities in words and word order in various languages. Thus, specialists in comparative grammar study sound and meaning correspondences among languages to determine their relationship to one another. By looking at similar forms in related languages, grammarians can discover how different languages may have influenced one another. Still other grammarians investigate how words and word order are used in social contexts to get messages across; this is called functional grammar.

Some grammarians are more concerned, however, with determining how the meaningful arrangement of the basic word-building units (morphemes) and sentence-building units (constituents) can best be described. This approach is called descriptive grammar. Descriptive grammars contain actual speech forms recorded from native speakers of a particular language and represented by means of written symbols. Descriptive grammars indicate what languages - often those never before written down or otherwise recorded - are like structurally.

These approaches to grammar (prescriptive, historical, comparative, functional, and descriptive) focus on word building and word order; they are concerned only with those aspects of language that have structure. These types of grammar constitute a part of linguistics that is distinct from phonology (the linguistic study of sound) and semantics (the linguistic study of meaning or content). Grammar to the prescriptivist, historian, comparativist, functionalist, and descriptivist is then the organizational part of language - how speech is put together, how words and sentences are formed, and how messages are communicated.

Specialists called transformational-generative grammarians, such as the American linguistic scholar Noam Chomsky, approach grammar quite differently - as a theory of language. By language, these scholars mean the knowledge human beings have that allows them to acquire any language. Such a grammar is a kind of universal grammar, an analysis of the principles underlying all the various human grammars.

The study of grammar began with the ancient Greeks, who engaged in philosophical speculation about languages and described language structure. This grammatical tradition was passed on to the Romans, who translated the Greek names for the parts of speech and grammatical endings into Latin; many of these terms (nominative, accusative, dative) are still found in modern grammars. But the Greeks and Romans were unable to determine how languages are related. This problem spurred the development of comparative grammar, which became the dominant approach to linguistic science in the 19th century.

Early grammatical study appears to have gone hand in hand with efforts to understand archaic writings. Thus, grammar was originally tied to societies with long-standing written traditions. The earliest extant grammar is that of the Sanskrit language of India, compiled by the Indian grammarian Panini (flourished about 400 BC). This sophisticated analysis showed how words are formed and what parts of words carry meaning. Ultimately, the grammars of Panini and other Hindu scholars helped in the interpretation of Hindu religious literature written in Sanskrit. The Arabs are believed to have begun the grammatical study of their language before medieval times. In the 10th century the Jews completed a Hebrew lexicon; they also produced a study of the language of the Old Testament.

The Greek grammarian Dionysius Thrax wrote the Art of Grammar, upon which many later Greek, Latin, and other European grammars were based. With the spread of Christianity and the translation of the Scriptures into the languages of the new Christians, written literatures began to develop among previously nonliterate peoples. By the Middle Ages, European scholars generally knew, in addition to their own languages and Latin, the languages of their nearest neighbors. This access to several languages set scholars to thinking about how languages might be compared. The revival of classical learning in the Renaissance laid the foundation, however, for a misguided attempt by grammarians to fit all languages into the structure of Greek and Latin. More positively, medieval Christianity and Renaissance learning led to 16th- and 17th-century surveys of all the then-known languages in an attempt to determine which language might be the oldest. On the basis of the Bible, Hebrew was frequently so designated. Other languages - Dutch, for example - were also chosen because of accidental circumstances rather than linguistic facts. In the 18th century less haphazard comparisons began to be made, culminating in the assumption by the German philosopher Gottfried Wilhelm Leibniz that most languages of Europe, Asia, and Egypt came from the same original language - a language referred to as Indo-European.

In the 19th century scholars developed systematic analyses of parts of speech, mostly built on the earlier analyses of Sanskrit. The early Sanskrit grammar of Panini was a valuable guide in the compilation of grammars of the languages of Europe, Egypt, and Asia. This writing of grammars of related languages, using Panini's work as a guide, is known as Indo-European grammar, a method of comparing and relating the forms of speech in numerous languages.

The Renaissance approach to grammar, which based the description of all languages on the model of Greek and Latin, died slowly, however. Not until the early 20th century did grammarians begin to describe languages on their own terms. Noteworthy in this regard are the Handbook of American Indian Languages (1911), the work of the German American anthropologist Franz Boas and his colleagues; and the studies by the Danish linguist Otto Jespersen, A Modern English Grammar (pub. in four parts, 1909-31), and The Philosophy of Grammar (1924). Boas's work formed the basis of various types of American descriptive grammar study. Jespersen's work was the precursor of such current approaches to linguistic theory as transformational generative grammar.

Boas challenged the application of conventional methods of language study to those non-Indo-European languages with no written records, such as the ones spoken by Native North Americans. He saw grammar as a description of how human speech in a language is organized. A descriptive grammar should describe the relationships of speech elements in words and sentences. Given impetus by the fresh perspective of Boas, the approach to grammar known as descriptive linguistics became dominant in the U.S. during the first half of the 20th century.

Jespersen, like Boas, thought grammar should be studied by examining living speech rather than by analyzing written documents, but he wanted to ascertain what principles are common to the grammars of all languages, both at the present time (the so-called synchronic approach) and throughout history (the diachronic approach). Descriptive linguists developed precise and rigorous methods to describe the formal structural units in the spoken aspect of any language. The approach to grammar that developed with this view is known as structural. A structural grammar should describe what the Swiss linguist Ferdinand de Saussure referred to by the French word langue - denoting the system underlying a particular language - that is, what members of a speech community speak and hear that will pass as acceptable grammar to other speakers and hearers of that language. Actual speech forms (referred to by the structuralists by the French word parole) represent instances of langue but, in themselves, are not what a grammar should describe. The structuralist approach to grammar conceives of a particular language such as French, Swahili, Chinese, or Arabic as a system of elements at various levels - sound, word, sentence, meaning - that interrelate. A structuralist grammar therefore describes what relationships underlie all instances of speech in a particular language; a descriptive grammar describes the elements of transcribed (recorded, spoken) speech.

By the mid-20th century, Chomsky, who had studied structural linguistics, was seeking a way to analyze the syntax of English in a structural grammar. This effort led him to see grammar as a theory of language structure rather than a description of actual sentences. His idea of grammar is that it is a device for producing the structure, not of langue (that is, not of a particular language), but of competence - the ability to produce and understand sentences in any and all languages. His universalist theories are related to the ideas of those 18th- and early 19th-century grammarians who urged that grammar be considered a part of logic - the key to analyzing thought. Universal grammarians such as the British philosopher John Stuart Mill, writing as late as 1867, believed rules of grammar to be language forms that correspond to universal thought forms.

Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, plus an approach known as general semantics. Philosophers look at the behavior that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.

These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviorists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as ‘the victor at Jena’ and ‘the loser at Waterloo,’ both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.

In the late 19th century Michel Jules Alfred Bréal, a French philologist, proposed a ‘science of significations’ that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism.

One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analyzing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).

An object language has a speaker (for example, a French woman) using expressions (such as la plume rouge) to designate a meaning (in this case, to indicate a definite pen - plume - of the color red - rouge). The full description of an object language in symbols is called the semiotic of that language. A language's semiotic has the following aspects: (1) a semantic aspect, in which signs (words, expressions, sentences) are given specific designations; (2) a pragmatic aspect, in which the contextual relations between speakers and signs are indicated; and (3) a syntactic aspect, in which formal relations among the elements within signs (for example, among the sounds in a sentence) are indicated.

An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition - a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign ‘the moon is a sphere’ is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing to which it refers (its extension) - the moon - is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
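
A minimal sketch of this idea in Python, with an assumed ‘world’ standing in for observation, pairs each interpreted sign with a truth condition and evaluates it; the particular data structure is an assumption made for the illustration, not part of symbolic logic itself.

    # Minimal sketch of an interpreted language: each sign of the object
    # language is paired with a truth condition, and its truth value is
    # settled by checking an assumed 'world' that stands in for observation.

    world = {"moon": {"shape": "sphere"}}   # assumed facts, for illustration only

    interpreted_signs = {
        "the moon is a sphere": lambda w: w["moon"]["shape"] == "sphere",
        "the moon is a cube":   lambda w: w["moon"]["shape"] == "cube",
    }

    for sign, truth_condition in interpreted_signs.items():
        print(sign, "->", truth_condition(world))
    # the moon is a sphere -> True
    # the moon is a cube -> False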

The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs - by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favor of his ‘ordinary language’ philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.

From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in ‘the moon is a sphere’); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs - that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.

What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic - that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone - independent of a speaker and hearer.

Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analyzing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs) - often nominal arguments (noun phrases) - or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression ‘Bill gives Mary the book,’ ‘gives’ is an operator that relates the arguments ‘Bill,’ ‘Mary,’ and ‘the book.’
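
The operator-argument structure of such a proposition can be sketched, for illustration only, as a small data structure in Python; the field names below are assumptions of the sketch rather than a standard linguistic formalism.

    # Illustrative sketch of propositional analysis: a sign is treated as an
    # operator combining nominal arguments. Field names are assumptions.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class Proposition:
        operator: str                 # e.g., 'gives'
        arguments: Tuple[str, ...]    # e.g., actor, recipient, theme

    p = Proposition(operator="gives", arguments=("Bill", "Mary", "the book"))
    print(f"'{p.operator}' relates {', '.join(p.arguments)}")
    # 'gives' relates Bill, Mary, the book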

Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, making distinctions among entities, relations, or actions. For example, ‘kiss’ belongs to an expression class with other items such as ‘hit’ and ‘see,’ as well as to the conventional part of speech ‘verb,’ in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In ‘Mary kissed John,’ the syntactic role of ‘kiss’ is to relate two nominal arguments (‘Mary’ and ‘John’), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, ‘John’ has the same semantic role - to identify a person - in the following two sentences: ‘John is easy to please’ and ‘John is eager to please.’ The syntactic role of ‘John’ in the two sentences, however, is different: In the first, ‘John’ is the receiver of an action; in the second, ‘John’ is the actor.

Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs - usually single words as vocabulary items called lexemes - in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain ‘seat’ in English, the lexemes ‘chair,’ ‘sofa,’ ‘loveseat,’ and ‘bench’ can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning ‘something on which to sit.’
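
The feature matrix described above can be pictured with a small Python sketch. The two distinguishing features follow the example in the text (how many people are accommodated and whether a back support is included); the particular values assigned to each lexeme, and the contrast helper, are illustrative assumptions.

    # Componential analysis of the semantic domain 'seat' as a feature matrix.
    # Every lexeme in the domain shares the component 'something on which to sit';
    # the distinctive features below are what tell the lexemes apart.
    seat_domain = {
        "chair":    {"seats": 1, "has_back": True},
        "sofa":     {"seats": 3, "has_back": True},
        "loveseat": {"seats": 2, "has_back": True},
        "bench":    {"seats": 3, "has_back": False},
    }

    def contrast(lexeme_a: str, lexeme_b: str) -> dict:
        """Return the features on which two lexemes in the domain differ."""
        a, b = seat_domain[lexeme_a], seat_domain[lexeme_b]
        return {f: (a[f], b[f]) for f in a if a[f] != b[f]}

    print(contrast("chair", "bench"))   # {'seats': (1, 3), 'has_back': (True, False)}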

Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.

Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as ‘Colorless green ideas sleep furiously’), although grammatical expressions, are meaningless - semantically blocked. The rules must also account for how a sentence such as ‘They passed the port at midnight’ can have at least two interpretations.

Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence ‘Colorless green ideas sleep furiously’ has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as ‘They passed the port at midnight’), one decides which meaning applies.
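
One way to picture how such a sentence might be ruled out is through selectional restrictions, sketched minimally below in Python. The feature inventory, the tiny lexicon, and the semantically_blocked function are all invented for illustration; they are not a claim about how generative semanticists actually formalized their rules.

    # A toy sketch of blocking a grammatical but meaningless sentence through
    # clashes between a subject's semantic features and the features its
    # predicates require (selectional restrictions). Everything here is invented.
    LEXICON = {
        "ideas": {"abstract": True,  "animate": False, "colored": False},
        "dogs":  {"abstract": False, "animate": True,  "colored": True},
    }
    RESTRICTIONS = {
        "sleep":     {"animate": True},   # only animate things sleep
        "green":     {"colored": True},   # only colorable things are green
        "colorless": {"colored": True},
    }

    def semantically_blocked(subject: str, predicates: list) -> bool:
        """True if any predicate's restrictions clash with the subject's features."""
        features = LEXICON[subject]
        return any(
            features.get(feature) != value
            for pred in predicates
            for feature, value in RESTRICTIONS[pred].items()
        )

    print(semantically_blocked("ideas", ["colorless", "green", "sleep"]))  # True
    print(semantically_blocked("dogs", ["sleep"]))                         # False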

In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech - that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.

Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.

Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure); or whether it should be seen as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this field, it is possible - in a syntactically based theory - for surface structure and deep structure jointly to determine the semantic interpretation of an expression.

The focus of general semantics is how people evaluate words and how that evaluation influences their behavior. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigor, and the approach has declined in popularity.

Language is the principal means used by human beings to communicate with one another. Language is primarily spoken, although it can be transferred to other media, such as writing. If the spoken means of communication is unavailable, as may be the case among the deaf, visual means such as sign language can be used. A prominent characteristic of language is that the relation between a linguistic sign and its meaning is arbitrary: There is no reason other than convention among speakers of English that a dog should be called dog, and indeed other languages have different names (for example, Spanish perro, Russian sobaka, Japanese inu). Language can be used to discuss a wide range of topics, a characteristic that distinguishes it from animal communication. The dances of honey bees, for example, can be used only to communicate the location of food sources. While the language-learning abilities of apes have surprised many - and there continues to be controversy over the precise limits of these abilities - scientists and scholars generally agree that apes do not progress beyond the linguistic abilities of a two-year-old child.

Linguistics is the scientific study of language. Several of the subfields of linguistics that will be discussed here are concerned with the major components of language: Phonetics is concerned with the sounds of languages, phonology with the way sounds are used in individual languages, morphology with the structure of words, syntax with the structure of phrases and sentences, and semantics with the study of meaning. Another major subfield of linguistics, pragmatics, studies the interaction between language and the contexts in which it is used. Synchronic linguistics studies a language's form at a fixed time in history, past or present. Diachronic, or historical, linguistics, on the other hand, investigates the way a language changes over time. A number of linguistic fields study the relations between language and the subject matter of related academic disciplines, such as sociolinguistics (sociology and language) and psycholinguistics (psychology and language). In principle, applied linguistics is any application of linguistic methods or results to solve problems related to language, but in practice it tends to be restricted to second-language instruction.

Spoken human language is composed of sounds that do not in themselves have meaning, but that can be combined with other sounds to create entities that do have meaning. Thus p, e, and n do not in themselves have any meaning, but the combination pen does have a meaning. Language also is characterized by complex syntax whereby elements, usually words, are combined into more complex constructions, called phrases, and these constructions in turn play a major role in the structures of sentences.

Because most languages are primarily spoken, an important part of the overall understanding of language involves the study of the sounds of language. Most sounds in the world's languages - and all sounds in some languages, such as English - are produced by expelling air from the lungs and modifying the vocal tract between the larynx and the lips. For instance, the sound p requires complete closure of the lips, so that air coming from the lungs builds up pressure in the mouth, giving rise to the characteristic popping sound when the lip closure is released. For the sound s, air from the lungs passes continuously through the mouth, but the tongue is raised sufficiently close to the alveolar ridge (the section of the upper jaw containing the tooth sockets) to cause friction as it partially blocks the air that passes. Sounds also can be produced by means other than expelling air from the lungs, and some languages use these sounds in regular speech. The sound used by English speakers to express annoyance, often spelled tsk or tut, uses air trapped in the space between the front of the tongue, the back of the tongue, and the palate. Such sounds, called clicks, function as regular speech sounds in the Khoisan languages of southwestern Africa and in the Bantu languages of neighboring African peoples.

Phonetics is the field of language study concerned with the physical properties of sounds, and it has three subfields. Articulatory phonetics explores how the human vocal apparatus produces sounds. Acoustic phonetics studies the sound waves produced by the human vocal apparatus. Auditory phonetics examines how speech sounds are perceived by the human ear. Phonology, in contrast, is concerned not with the physical properties of sounds, but rather with how they function in a particular language. The following example illustrates the difference between phonetics and phonology. In the English language, when the sound ‘k’ (usually spelled ‘c’) occurs at the beginning of a word, as in the word cut, it is pronounced with aspiration (a puff of breath). However, when this sound occurs at the end of a word, as in tuck, there is no aspiration. Phonetically, the aspirated ‘k’ and unaspirated ‘k’ are different sounds, but in English these different sounds never distinguish one word from another, and English speakers are usually unaware of the phonetic difference until it is pointed out to them. Thus English makes no phonological distinction between the aspirated and unaspirated ‘k’. The Hindi language, on the other hand, uses this sound difference to distinguish words such as kal (time), which has an unaspirated k, and khal (skin), in which kh represents the aspirated ‘k’. Therefore, in Hindi the distinction between the aspirated and unaspirated ‘k’ is both phonetic and phonological.
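
The English pattern can be stated as a simple positional rule, sketched below in Python. The rule (aspirate a word-initial ‘k’ sound, leave a word-final ‘k’ unaspirated) is a simplification of the example in the text, and the function name is hypothetical.

    # English aspiration of 'k' is predictable from position (allophonic);
    # in Hindi the same phonetic difference distinguishes words (phonemic).
    def aspiration_of_k(word: str, k_index: int) -> str:
        """Classify the 'k' sound at position k_index in word: aspirated
        word-initially, unaspirated otherwise (a simplification)."""
        assert word[k_index].lower() in ("k", "c"), "expects the position of a k sound"
        return "aspirated" if k_index == 0 else "unaspirated"

    print(aspiration_of_k("cut", 0))    # aspirated   (spelled 'c')
    print(aspiration_of_k("tuck", 3))   # unaspirated

    # In Hindi the contrast is lexical, as in the minimal pair from the text.
    hindi_minimal_pair = {"kal": "time", "khal": "skin"}
    print(hindi_minimal_pair)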

While many people, influenced by writing, tend to think of words as the basic units of grammatical structure, linguists recognize a smaller unit, the morpheme. The word cats, for instance, consists of two elements, or morphemes: cat, the meaning of which can be roughly characterized as ‘feline animal,’ and -s, the meaning of which can be roughly characterized as ‘more than one.’ Antimicrobial, meaning ‘capable of destroying microorganisms,’ can be divided into the morphemes anti- (against), microbe (microorganism), and -ial, a suffix that makes the word an adjective. The study of these smallest grammatical units, and the ways in which they combine into words, is called morphology.
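
A toy segmentation routine can illustrate how the words cited above decompose into morphemes. The miniature morpheme lexicon and the greedy longest-match strategy below are assumptions made for the example; real morphological analysis is considerably more involved.

    # Toy morphological segmentation with a hand-made morpheme lexicon.
    MORPHEMES = {
        "anti":   "against",
        "microb": "microorganism",             # stem of 'microbe' before a suffix
        "ial":    "adjective-forming suffix",
        "cat":    "feline animal",
        "s":      "more than one",
    }

    def segment(word: str) -> list:
        """Greedily split a word into known morphemes (longest match first)."""
        pieces, rest = [], word.lower()
        while rest:
            match = next((m for m in sorted(MORPHEMES, key=len, reverse=True)
                          if rest.startswith(m)), None)
            if match is None:
                return []        # the toy lexicon cannot analyze this word
            pieces.append(match)
            rest = rest[len(match):]
        return pieces

    print(segment("cats"))            # ['cat', 's']
    print(segment("antimicrobial"))   # ['anti', 'microb', 'ial']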

Syntax is the study of how words combine to make sentences. The order of words in sentences varies from language to language. English-language syntax, for instance, generally follows a subject-verb-object order, as in the sentence ‘The dog (subject) bit (verb) the man (object).’ The sentence ‘The dog the man bit’ is not a correct construction in English, and the sentence ‘The man bit the dog’ has a very different meaning. In contrast, Japanese has a basic word order of subject-object-verb, as in ‘watakushi-wa hon-o kau,’ which literally translates to ‘I book buy.’ Hixkaryana, spoken by about 400 people on a tributary of the Amazon River in Brazil, has a basic word order of object-verb-subject. The sentence ‘toto yahosïye kamara,’ which literally translates to ‘Man grabbed jaguar,’ actually means that the jaguar grabbed the man, not that the man grabbed the jaguar.
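
The contrast in basic word order can be sketched as a small Python mapping, using the orders cited above for English, Japanese, and Hixkaryana. The linearize helper is purely illustrative and ignores case marking, particles, and other grammatical detail.

    # Basic word orders cited in the text.
    BASIC_ORDER = {
        "English":    ("S", "V", "O"),
        "Japanese":   ("S", "O", "V"),
        "Hixkaryana": ("O", "V", "S"),
    }

    def linearize(language: str, subject: str, verb: str, obj: str) -> str:
        """Arrange subject, verb, and object according to a language's basic order."""
        parts = {"S": subject, "V": verb, "O": obj}
        return " ".join(parts[slot] for slot in BASIC_ORDER[language])

    print(linearize("English", "the dog", "bit", "the man"))
    # the dog bit the man
    print(linearize("Japanese", "watakushi-wa", "kau", "hon-o"))
    # watakushi-wa hon-o kau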

A general characteristic of language is that words are not directly combined into sentences, but rather into intermediate units, called phrases, which then are combined into sentences. The sentence ‘The shepherd found the lost sheep’ contains at least three phrases: ‘the shepherd,’ ‘found,’ and ‘the lost sheep.’ This hierarchical structure that groups words into phrases, and phrases into sentences, serves an important role in establishing relations within sentences. For instance, the phrases ‘the shepherd’ and ‘the lost sheep’ behave as units, so that when the sentence is rearranged to be in the passive voice, these units stay intact: ‘The lost sheep was found by the shepherd.’
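
The hierarchical grouping of words into phrases can be pictured with nested tuples, as in the minimal sketch below. The particular labels (S, NP, VP, V) and the exact bracketing are conventional simplifications, not an analysis drawn from the text.

    # A rough constituent structure for 'The shepherd found the lost sheep',
    # written as nested tuples.
    sentence = (
        "S",
        ("NP", "the shepherd"),
        ("VP",
            ("V", "found"),
            ("NP", "the lost sheep"),
        ),
    )

    # The noun phrases behave as units, so the passive rearrangement keeps them intact.
    passive = "The lost sheep was found by the shepherd."
    print(sentence)
    print(passive)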

While the fields of language study mentioned above deal primarily with the form of linguistic elements, semantics is the field of study that deals with the meaning of these elements. A prominent part of semantics deals with the meaning of individual morphemes. Semantics also involves studying the meaning of the constructions that link morphemes to form phrases and sentences. For instance, the sentences ‘The dog bit the man’ and ‘The man bit the dog’ contain exactly the same morphemes, but they have different meanings. This is because the morphemes enter into different constructions in each sentence, reflected in the different word orders of the two sentences. Language acquisition, the process by which children and adults learn a language or languages, is a major field of linguistic study.

First-language acquisition is a complex process that linguists only partially understand. Young children have certain innate characteristics that predispose them to learn language. These characteristics include the structure of the vocal tract, which enables children to make the sounds used in language, and the ability to understand a number of general grammatical principles, such as the hierarchical nature of syntax. These characteristics, however, do not predispose children to learn only one particular language. Children acquire whatever language is spoken around them, even if their parents speak a different language. An interesting feature of early language acquisition is that children seem to rely more on semantics than on syntax when speaking. The point at which they shift to using syntax seems to be a crucial point at which human children surpass apes in linguistic ability.

Although second-language acquisition literally refers to learning a language after having acquired a first language, the term is frequently used to refer to the acquisition of a second language after a person has reached puberty. Whereas children experience little difficulty in acquiring more than one language, after puberty people generally must expend greater effort to learn a second language and they often achieve lower levels of competence in that language. People learn second languages more successfully when they become immersed in the cultures of the communities that speak those languages. People also learn second languages more successfully in cultures in which acquiring a second language is expected, as in most African countries, than they do in cultures in which second-language proficiency is considered unusual, as in most English-speaking countries.

Bilingualism is the ability to master the use of two languages, and multilingualism is the ability to master the use of more than two languages. Although bilingualism is relatively rare among native speakers of English, in many parts of the world it is the standard rather than the exception. For example, more than half the population of Papua New Guinea is functionally competent in both an indigenous language and Tok Pisin. People in many parts of the country have mastered two or more indigenous languages. Bilingualism and multilingualism often involve different degrees of competence in the languages involved. A person may control one language better than another, or a person might have mastered the different languages better for different purposes, using one language for speaking, for example, and another for writing. Languages constantly undergo changes, resulting in the development of different varieties of the languages.

A dialect is a variety of a language spoken by an identifiable subgroup of people. Traditionally, linguists have applied the term dialect to geographically distinct language varieties, but in current usage the term can include speech varieties characteristic of other socially definable groups. Determining whether two speech varieties are dialects of the same language, or whether they have changed enough to be considered distinct languages, has often proved a difficult and controversial decision. Linguists usually cite mutual intelligibility as the major criterion in making this decision. If two speech varieties are not mutually intelligible, then the speech varieties are different languages; if they are mutually intelligible but differ systematically from one another, then they are dialects of the same language. There are problems with this definition, however, because many levels of mutual intelligibility exist, and linguists must decide at what level speech varieties should no longer be considered mutually intelligible. This is difficult to establish in practice. Intelligibility has a large psychological component: If a speaker of one speech variety wants to understand a speaker of another speech variety, understanding is more likely than if this were not the case. In addition, chains of speech varieties exist in which adjacent speech varieties are mutually intelligible, but speech varieties farther apart in the chain are not. Furthermore, sociopolitical factors almost inevitably intervene in the process of distinguishing between dialects and languages. Such factors, for example, led to the traditional characterization of Chinese as a single language with a number of mutually unintelligible dialects.

Dialects develop primarily as a result of limited communication between different parts of a community that share one language. Under such circumstances, changes that take place in the language of one part of the community do not spread elsewhere. As a result, the speech varieties become more distinct from one another. If contact continues to be limited for a long enough period, sufficient changes will accumulate to make the speech varieties mutually unintelligible. When this occurs, and especially if it is accompanied by the sociopolitical separation of a group of speakers from the larger community, it usually leads to the recognition of separate languages. The different changes that took place in spoken Latin in different parts of the Roman Empire, for example, eventually gave rise to the distinct modern Romance languages, including French, Spanish, Portuguese, Italian, and Romanian.

In ordinary usage, the term dialect can also signify a variety of a language that is distinct from what is considered the standard form of that language. Linguists, however, consider the standard language to be simply one dialect of a language. For example, the dialect of French spoken in Paris became the standard language of France not because of any linguistic features of this dialect but because Paris was the political and cultural center of the country.

Sociolects are dialects determined by social factors rather than by geography. Sociolects often develop due to social divisions within a society, such as those of socioeconomic class and religion. In New York City, for example, the probability that someone will pronounce the letter ‘r’ when it occurs at the end of a syllable, as in the word fourth, varies with socioeconomic class. The pronunciation of a final r in general is associated with members of higher socioeconomic classes. The same is true in England of the pronunciation of ‘h’, as in hat. Members of certain social groups often adopt a particular pronunciation as a way of distinguishing themselves from other social groups. The inhabitants of Martha's Vineyard, Massachusetts, for example, have adopted particular vowel pronunciations to distinguish themselves from people vacationing on the island.

Slang, argot, and jargon are more specialized terms for certain social language varieties usually defined by their specialized vocabularies. Slang refers to informal vocabulary, especially short-lived coinages, that do not belong to a language's standard vocabulary. Argot refers to a nonstandard vocabulary used by secret groups, particularly criminal organizations, usually intended to render communications incomprehensible to outsiders. A jargon comprises the specialized vocabulary of a particular trade or profession, especially when it is incomprehensible to outsiders, as with legal jargon.

In addition to language varieties defined in terms of social groups, there are language varieties called registers that are defined by social situation. In a formal situation, for example, a person might say, ‘You are requested to leave,’ whereas in an informal situation the same person might say, ‘Get out!’ Register differences can affect pronunciation, grammar, and vocabulary.

A pidgin is an auxiliary language (a language used for communication by groups that have different native tongues) that develops when people speaking different languages are brought together and forced to develop a common means of communication without sufficient time to learn each other's native languages properly. Typically, a pidgin language derives most of its vocabulary from one of the languages. Its grammatical structure, however, will either be highly variable, reflecting the grammatical structures of each speaker's native language, or it may in time become stabilized in a manner very different from the grammar of the language that contributed most of its vocabulary. Historically, plantation societies in the Caribbean and the South Pacific have originated many pidgin languages. Tok Pisin is the major pidgin language of Papua New Guinea. Both its similarities to and its differences from English can be seen in the sentence ‘Pik bilong dispela man i kam pinis,’ meaning ‘This man's pig has come,’ or, more literally, ‘Pig belong this-fellow man he come finish.’

Since a pidgin is an auxiliary language, it has no native speakers. A creole language, on the other hand, arises in a contact situation similar to that which produces pidgin languages and perhaps goes through a stage in which it is a pidgin, but a creole becomes the native language of its community. As with pidgin languages, creoles usually take most of their vocabulary from a single language. Also as with pidgins, the grammatical structure of a creole language reflects the structures of the languages that were originally spoken in the community. A characteristic of creole languages is their simple morphology. In the Jamaican Creole sentence ‘A fain Jan fain di kluoz,’ meaning ‘John found the clothes,’ the vocabulary is of English origin, while the grammatical structure, which doubles the verb for emphasis, reflects West African language patterns. Because the vocabularies of Tok Pisin and Jamaican Creole are largely of English origin, they are called English-based.

[Tables omitted: Languages Spoken by More Than 10 Million People; Languages Spoken by 3 Million-10 Million People; Languages Spoken by 1 Million-3 Million People.]

Estimates of the number of languages spoken in the world today vary depending on where the dividing line between language and dialect is drawn. For instance, linguists disagree over whether Chinese should be considered a single language because of its speakers' shared cultural and literary tradition, or whether it should be considered several different languages because of the mutual unintelligibility of, for example, the Mandarin spoken in Beijing and the Cantonese spoken in Hong Kong. If mutual intelligibility is the basic criterion, current estimates indicate that there are about 6,000 languages spoken in the world today. However, many languages with a smaller number of speakers are in danger of being replaced by languages with large numbers of speakers. In fact, some scholars believe that perhaps 90 percent of the languages spoken in the 1990s will be extinct or doomed to extinction by the end of the 21st century. The 10 most widely spoken languages, with approximate numbers of native speakers, are as follows: Chinese, 1.2 billion; Arabic, 422 million; Hindi, 366 million; English, 341 million; Spanish, 322 to 358 million; Bengali, 207 million; Portuguese, 176 million; Russian, 167 million; Japanese, 125 million; German, 100 million. If second-language speakers are included in these figures, English is the second most widely spoken language, with 508 million speakers.

Linguists classify languages using two main classification systems: typological and genetic. A typological classification system organizes languages according to the similarities and differences in their structures. Languages that share the same structure belong to the same type, while languages with different structures belong to different types. For example, despite the great differences between the two languages in other respects, Mandarin Chinese and English belong to the same type, grouped by word-order typology. Both languages have a basic word order of subject-verb-object.

A genetic classification of languages divides them into families on the basis of their historical development: A group of languages that descend historically from the same common ancestor form a language family. For example, the Romance languages form a language family because they all descended from the Latin language. Latin, in turn, belongs to a larger language family, Indo-European, the ancestor language of which is called Proto-Indo-European. Some genetic groupings are universally accepted. However, because documents attesting to the form of most ancestor languages, including Proto-Indo-European, have not survived, much controversy surrounds the more wide-ranging genetic groupings. A conservative survey of the world's language families follows.

The Indo-European languages are the most widely spoken languages in Europe, and they also extend into western and southern Asia. The family consists of a number of subfamilies or branches (groups of languages that descended from a common ancestor, which in turn is a member of a larger group of languages that descended from a common ancestor). Most of the people in northwestern Europe speak Germanic languages, which include English, German, and Dutch as well as the Scandinavian languages, such as Danish, Norwegian, and Swedish. The Celtic languages, such as Welsh and Gaelic, once covered a large part of Europe but are now restricted to its western fringes. The Romance languages, all descended from Latin, are the only survivors of a somewhat more extensive family, Italic, which includes, in addition to Latin, a number of now extinct languages of Italy (see Italic Languages). Languages of the Baltic and Slavic (Slavonic) branches are closely related. Only two of the Baltic languages survive: Lithuanian and Latvian. The Slavic languages, which cover much of eastern and central Europe, include Russian, Ukrainian, Polish, Czech, Serbo-Croatian, and Bulgarian. In the Balkan Peninsula, two branches of Indo-European exist that each consist of a single language - namely the Greek language and the Albanian language. Farther east, in Caucasia, the Armenian language constitutes another single-language branch of Indo-European.

The other main surviving branch of the Indo-European family is Indo-Iranian. It has two subbranches, Iranian and Indo-Aryan (Indic). Iranian languages are spoken mainly in southwestern Asia and include Persian, Pashto (spoken in Afghanistan), and Kurdish. Indo-Aryan languages are spoken in the northern part of South Asia (Pakistan, northern India, Nepal, and Bangladesh) and also in most of Sri Lanka. This branch includes Hindi-Urdu, Bengali, Nepali, and Sinhalese (the language spoken by the majority of people in Sri Lanka). Historical documents attest to other, now extinct, branches of Indo-European, such as the Anatolian languages, which were once spoken in what is now Turkey and include the ancient Hittite language.

The Uralic languages constitute the other main language family of Europe. They are spoken mostly in the northeastern part of the continent, spilling over into northwestern Asia; one language, Hungarian, is spoken in central Europe. Most Uralic languages belong to the family's Finno-Ugric branch. This branch includes (in addition to Hungarian) Finnish, Estonian, and Saami. Europe also has one language isolate (a language not known to be related to any other language): Basque, which is spoken in the Pyrenees. At the boundary between southeastern Europe and Asia lie the Caucasus Mountains. Since ancient times the region has contained a large number of languages, including two groups of languages that have not been definitively related to any other language families. The South Caucasian, or Kartvelian, languages are spoken in Georgia and include the Georgian language. The North Caucasian languages fall into North-West Caucasian, North-Central Caucasian, and North-East Caucasian subgroups. The genetic relation of North-West Caucasian to the other subgroups is not universally agreed upon. The North-West Caucasian languages include Abkhaz, the North-Central Caucasian languages include Chechen, and the North-East Caucasian languages include the Avar language.

South Asia contains, in addition to the Indo-Aryan branch of Indo-European, two other large language families. The Dravidian family is dominant in southern India and includes Tamil and Telugu. The Munda languages represent the Austro-Asiatic language family in India and contain many languages, each with relatively small numbers of speakers. The Austro-Asiatic family also spreads into Southeast Asia, where it includes the Khmer (Cambodian) and Vietnamese languages. South Asia contains at least one language isolate, Burushaski, spoken in a remote part of northern Pakistan. See also Indian Languages.

A number of linguists believe that many of the languages of central, northern, and eastern Asia form a single Altaic language family, although others consider Turkic, Tungusic, and Mongolic to be separate, unrelated language families. The Turkic languages include Turkish and a number of languages of the former Union of Soviet Socialist Republics (USSR), such as Uzbek and Tatar. The Tungusic languages are spoken mainly by small population groups in Siberia and Northeast China. This family includes the nearly extinct Manchu language. The main language of the Mongolic family is Mongolian. Some linguists also assign Korean and Japanese to the Altaic family, although others regard these languages as isolates. In northern Asia there are a number of languages that appear either to form small, independent families or to be language isolates, such as the Chukotko-Kamchatkan language family of the Chukchi and Kamchatka peninsulas in the far east of Russia. These languages are often referred to collectively as Paleo-Siberian (Paleo-Asiatic), but this is a geographic, not a genetic, grouping.

The Sino-Tibetan language family covers not only most of China, but also much of the Himalayas and parts of Southeast Asia. The family's major languages are Chinese, Tibetan, and Burmese. The Tai languages constitute another important language family of Southeast Asia. They are spoken in Thailand, Laos, and southern China and include the Thai language. The Miao-Yao, or Hmong-Mien, languages are spoken in isolated areas of southern China and northern Southeast Asia. The Austronesian languages, formerly called Malayo-Polynesian, cover the Malay Peninsula and most islands to the southeast of Asia and are spoken as far west as Madagascar and throughout the Pacific islands as far east as Easter Island. The Austronesian languages include Malay (called Bahasa Malaysia in Malaysia, and Bahasa Indonesia in Indonesia), Javanese, Hawaiian, and Maori (the language of the aboriginal people of New Zealand).

Although the inhabitants of some of the coastal areas and offshore islands of New Guinea speak Austronesian languages, most of the main island's inhabitants, as well as some inhabitants of nearby islands, speak languages unrelated to Austronesian. Linguists collectively refer to these languages as Papuan languages, although this is a geographical term covering about 60 different language families. The languages of Aboriginal Australians constitute another unrelated group, and it is debatable whether all Australian languages form a single family.

The languages of Africa may belong to as few as four families: Afro-Asiatic, Nilo-Saharan, Niger-Congo, and Khoisan, although the genetic unity of Nilo-Saharan and Khoisan is still disputed. Afro-Asiatic languages occupy most of North Africa and also large parts of southwestern Asia. The family consists of several branches. The Semitic branch includes Arabic, Hebrew, and many languages of Ethiopia and Eritrea, including Amharic, the dominant language of Ethiopia. The Chadic branch, spoken mainly in northern Nigeria and adjacent areas, includes Hausa, one of the two most widely spoken languages of sub-Saharan Africa (the other being Swahili). Other subfamilies of Afro-Asiatic are Berber, Cushitic, and the single-language branch Egyptian, which contains the now-extinct language of the ancient Egyptians (see Coptic Language).

The Niger-Congo family covers most of sub-Saharan Africa and includes such widely spoken West African languages as Yoruba and Fulfulde, as well as the Bantu languages of eastern and southern Africa, which include Swahili and Zulu. The Nilo-Saharan languages are spoken mainly in eastern Africa, in an area between those covered by the Afro-Asiatic and the Niger-Congo languages. The best-known Nilo-Saharan language is Masai, spoken by the Masai people in Kenya and Tanzania. The Khoisan languages are spoken in the southwestern corner of Africa and include the Nama language (formerly called Hottentot).

Most linguists separate the indigenous languages of the Americas into a large number of families and isolates, while one linguist has proposed grouping these languages into just three superfamilies. Nearly all specialists reject this proposal. Well-established families include Inuit-Aleut (Eskimaleut). The family stretches from the eastern edge of Siberia to the Aleutian Islands, and across Alaska and northern Canada to Greenland, where one variety of the Inuit language, Greenlandic, is an official language. The Na-Dené languages, the main branch of which comprises the Athapaskan languages, occupy much of northwestern North America. The Athapaskan languages also include, however, a group of languages in the southwestern United States, one of which is Navajo. Languages of the Algonquian and Iroquoian families constitute the major indigenous languages of northeastern North America, while the Siouan family is one of the main families of central North America.

The Uto-Aztecan family extends from the southwestern United States into Central America and includes Nahuatl, the language of the Aztec civilization and its modern descendants. The Mayan languages are spoken mainly in southern Mexico and Guatemala. Major language families of South America include Carib and Arawak in the north, and Macro-Gê and Tupian in the east. Guaraní, recognized as a national language in Paraguay alongside the official language, Spanish, is an important member of the Tupian family. In the Andes Mountains region, the dominant indigenous languages are Quechua and Aymara; the genetic relation of these languages to each other and to other languages remains controversial.

Individual pidgin and creole languages pose a particular problem for genetic classification because the vocabulary and grammar of each comes from different sources. Consequently, many linguists do not try to classify them genetically. Pidgin and creole languages are found in many parts of the world, but there are particular concentrations in the Caribbean, West Africa, and the islands of the Indian Ocean and the South Pacific. English-based creoles such as Jamaican Creole and Guyanese Creole, and French-based creoles such as Haitian Creole, can be found in the Caribbean. English-based creoles are widespread in West Africa. About 10 percent of the population of Sierra Leone speaks Krio as a native language, and an additional 85 percent speaks it as a second language. The creoles of the Indian Ocean islands, such as Mauritius, are French-based. An English-based pidgin, Tok Pisin, is spoken by more than 2 million people in Papua New Guinea, making it the most widely spoken auxiliary language of that country. The inhabitants of Solomon Islands and Vanuatu speak similar varieties of Tok Pisin, called Pijin and Bislama, respectively.

International languages include both existing languages that have become international means of communication and languages artificially constructed to serve this purpose. The most famous and widespread artificial international language is Esperanto; however, the most widespread international languages are not artificial. In medieval Europe, Latin was the principal international language. Today, English is used in more countries as an official language or as the main means of international communication than any other language. French is the second most widely used language, largely due to the substantial number of African countries with French as their official language. Other languages have more restricted regional use, such as Spanish in Spain and Latin America, Arabic in the Middle East, and Russian in the republics of the former USSR.

Languages continually undergo changes, although speakers of a language are usually unaware of the changes as they are occurring. For instance, American English has an ongoing change whereby the pronunciation difference between the words cot and caught is being lost. The changes become more dramatic after longer periods of time. Modern English readers may require notes to understand fully the writings of English playwright William Shakespeare, who wrote during the late 16th and early 17th centuries. The English of 14th-century poet Geoffrey Chaucer differs so greatly from the modern language that many readers prefer a translation into modern English. Learning to read the writings of Alfred the Great, the 9th-century Saxon king, is comparable to acquiring a reading knowledge of German.

Historical change can affect all components of language. Sound change is the area of language change that has received the most study. One of the major sound changes in the history of the English language is the so-called Great Vowel Shift. This shift, which occurred during the 15th and 16th centuries, affected the pronunciation of all English long vowels (vowels that have a comparatively long sound duration). In Middle English, spoken from 1100 to 1500, the word house was pronounced with the vowel sound of the modern English word boot, while boot was pronounced with the vowel sound of the modern English boat. The change that affected the pronunciation of house also affected the vowels of mouse, louse, and mouth. This illustrates an important principle of sound change: It tends to be regular - that is, a particular sound change in a language tends to occur in the same way in all words.

The principle of the regularity of sound change has been particularly important to linguists when comparing different languages for genetic relatedness. Linguists compare root words from the different languages to see if they are similar enough to have once been the same word in a common ancestor language. By establishing that the sound differences between similar root words are the result of regular sound changes that occurred in the languages, linguists can support the conclusion that the different languages descended from the same original language. For example, by comparing the Latin word pater with its English translation, father, linguists might claim that the two languages are genetically related because of certain similarities between the two words. Linguists could then hypothesize that the Latin p had changed to f in English, and that the two words descended from the same original word. They could search for other examples to strengthen this hypothesis, such as the Latin word piscis and its English translation, fish, and the Latin pes and the English translation, foot. The sound change that relates f in the Germanic languages to p in most other branches of Indo-European is a famous sound change called Grimm's Law, named for German grammarian Jacob Grimm.
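
The comparative step described above can be illustrated with a few lines of Python that check the Latin p to English f correspondence across the cognate pairs cited in the text. The check is deliberately crude; establishing a genuine sound law requires many more correspondences and attention to the surrounding sounds.

    # Cognate pairs from the text and a minimal check of one regular
    # sound correspondence (Latin word-initial p : English word-initial f).
    cognates = [
        ("pater",  "father"),
        ("piscis", "fish"),
        ("pes",    "foot"),
    ]

    def correspondence_holds(latin: str, english: str) -> bool:
        """True if Latin word-initial p lines up with English word-initial f."""
        return latin.startswith("p") and english.startswith("f")

    print(all(correspondence_holds(lat, eng) for lat, eng in cognates))  # True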

The morphology of a language can also change. An ongoing morphological change in English is the loss of the distinction between the nominative, or subject, form who and the accusative, or object, form whom. English speakers use both the who and whom forms for the object of a sentence, saying both ‘Who did you see?’ and ‘Whom did you see?’ However, English speakers use only the form who for a sentence's subject, as in ‘Who saw you?’ Old English, the historical form of English spoken from about 700 to about 1100, had a much more complex morphology than modern English. The modern English word stone has only three additional forms: the genitive singular stone's, the plural stones, and the genitive plural stones'. All three of these additional forms have the same pronunciation. In Old English these forms were all different from one another: stan, stanes, stanas, and stana, respectively. In addition, there was a dative singular form stane and a dative plural form stanum, used, for instance, after certain prepositions, as in under stanum (under stones).
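
The Old English forms cited above can be arranged as a small paradigm table; the sketch below does so in Python. The case labels for the first four forms are inferred from their modern counterparts rather than stated in the text, and are supplied here as an assumption.

    # The Old English forms of 'stone' cited in the text, keyed by case and number.
    stan_paradigm = {
        ("nominative", "singular"): "stan",     # stone
        ("genitive",   "singular"): "stanes",   # stone's
        ("nominative", "plural"):   "stanas",   # stones
        ("genitive",   "plural"):   "stana",    # stones'
        ("dative",     "singular"): "stane",
        ("dative",     "plural"):   "stanum",   # as in 'under stanum' (under stones)
    }

    print(stan_paradigm[("dative", "plural")])   # stanum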

Change can also affect syntax. In modern English, the basic word order is subject-verb-object, as in the sentence ‘I know John.’ The only other possible word order is object-subject-verb, as in ‘John I know (but Mary I don't).’ Old English, by contrast, allowed all possible word order permutations, including subject-object-verb, as in Gif hie ænigne feld secan wolden, meaning ‘If they wished to seek any field,’ or literally ‘If they any field to seek wished.’ The loss of word-order freedom is one of the main syntactic changes that separates the modern English language from Old English.

The meanings of words can also change. In Middle English, the word nice usually had the meaning ‘foolish,’ and sometimes ‘shy,’ but never the modern meaning ‘pleasant.’ Change in the meanings of words is known as semantic change and can be viewed as part of the more general phenomenon of lexical change, or change in a language's vocabulary. Words not only can change their meaning but also can become obsolete. For example, modern readers require a note to explain Shakespeare's word hent (take hold of), which is no longer in use. In addition, new words can be created, such as feedback.

While much change takes place in a given language without outside interference, many changes can result from contact with other languages. Linguists use the terms borrowing and loan to refer to instances in which one language takes something from another language. The most obvious cases of borrowing are in vocabulary. English, for example, has borrowed a large part of its vocabulary from French and Latin. Most of these borrowed words are somewhat more scholarly, as in the word human (Latin humanus), because the commonly used words of any language are less likely to be lost or replaced. However, some of the words borrowed into English are common, such as the French word very, which replaced the native English word sore in such phrases as sore afraid, meaning ‘very frightened.’ The borrowing of such common words reflects the close contact that existed between the English and the French in the period after the Norman Conquest of England in 1066.

Borrowing can affect not only vocabulary but also, in principle, all components of a language's grammar. The English suffix -er, which is added to verbs to form nouns, as in the formation of baker from bake, is ultimately a borrowing from the Latin suffix -arius. The suffix has been incorporated to such an extent, however, that it is used with indigenous words, such as bake, as well as with Latin words. Syntax also can be borrowed. For example, Amharic, a Semitic language of Ethiopia, has abandoned the usual Semitic word-order pattern, verb-subject-object, and replaced it with the word order subject-object-verb, borrowed from neighboring non-Semitic languages. Although in principle any component of language can be borrowed, some components are much more susceptible to borrowing than others. Cultural vocabulary is the most susceptible to borrowing, while morphology is the least susceptible.

Linguistic reconstruction is the recovery of the stages of a language that existed prior to those found in written documents. Using a number of languages that are genetically related, linguists try to reconstruct at least certain aspects of the languages' common ancestor, called the protolanguage. Linguists theorize that those features that are the same among the protolanguage's descendant languages, or those features that differ but can be traced to a common origin, can be considered features of the ancestor language. Nineteenth-century linguistic science made significant progress in reconstructing the Proto-Indo-European language. While many details of this reconstruction remain controversial, in general linguists have gained a good conception of Proto-Indo-European's phonology, morphology, and vocabulary. However, due to the range of syntactic variation among Proto-Indo-European's descendant languages, linguists have found syntactic reconstruction more problematic.

Language, although primarily oral, can also be represented in other media, such as writing. Under certain circumstances, spoken language can be supplanted by other media, as in sign language among the deaf. Writing can be viewed in one sense as a more permanent physical record of the spoken language. However, written and spoken languages tend to diverge from one another, partly because of the difference in medium. In spoken language, the structure of a message cannot be too complex because of the risk that the listener will misunderstand the message. Since the communication is face-to-face, however, the speaker has the opportunity to receive feedback from the listener and to clarify what the listener does not understand. Sentence structures in written communication can be more complex because readers can return to an earlier part of the text to clarify their understanding. However, the writer usually does not have the opportunity to receive feedback from the reader and to rework the text, so texts must be written with greater clarity. An example of this difference between written and spoken language is found in languages that have only recently developed written variants. In the written variants there is a rapid increase in the use of words such as because and however in order to make explicit links between sentences - links that are normally left implicit in spoken language.

Sign languages, which differ from signed versions of spoken languages, are the native languages of most members of deaf communities. Linguists have only recently begun to appreciate the levels of complexity and expressiveness found in sign languages. In particular, as in oral languages, sign languages are generally arbitrary in their use of signs: In general, no reason exists, other than convention, for a certain sign to have a particular meaning. Sign languages also exhibit dual patterning, in which a small number of components combine to produce the total range of signs, similar to the way in which letters combine to make words in English. In addition, sign languages use complex syntax and can discuss the same wide range of topics possible in spoken languages.

Body language refers to the conveying of messages through body movements other than those movements that form a part of sign or spoken languages. Some gestures can have quite specific meanings, such as those for saying good-bye or for asking someone to approach. Other gestures more generally accompany speech, such as those used to emphasize a particular point. Although there are cross-cultural similarities in body language, substantial differences also exist both in the extent to which body language is used and in the interpretations given to particular instances of body language. For example, the head gestures for ‘yes’ and ‘no’ used in the Balkans seem inverted to other Europeans. Also, the physical distance kept between participants in a conversation varies from culture to culture: A distance considered normal in one culture can strike someone from another culture as aggressively close.

In certain circumstances, other media can be used to convey linguistic messages, particularly when normal media are unavailable. For example, Morse code directly encodes a written message, letter by letter, so that it can be transmitted by a medium that allows only two values - traditionally, short and long signals or dots and dashes. Drums can be used to convey messages over distances beyond the human voice's reach - a method known as drum talk. In some cases, such communication methods serve the function of keeping a message secret from the uninitiated. This is often the case with whistle speech, a form of communication in which whistling substitutes for regular speech, usually used for communication over distances.
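
Morse code's letter-by-letter encoding can be sketched in a few lines of Python. Only a handful of letters are included in the table below, and the to_morse helper is an illustrative name; the dot-and-dash values are the standard International Morse signals for those letters.

    # Encoding a short message into Morse code, letter by letter.
    MORSE = {
        "S": "...", "O": "---", "E": ".",  "T": "-",
        "A": ".-",  "N": "-.",  "I": "..", "H": "....",
    }

    def to_morse(message: str) -> str:
        """Encode letters found in the table; separate letters with spaces."""
        return " ".join(MORSE[ch] for ch in message.upper() if ch in MORSE)

    print(to_morse("SOS"))   # ... --- ...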

Phenomenology is a 20th-century philosophical movement dedicated to describing the structures of experience as they present themselves to consciousness, without recourse to theory, deduction, or assumptions from other disciplines such as the natural sciences.

The founder of phenomenology, German philosopher Edmund Husserl, introduced the term in his book Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie (1913; Ideas: A General Introduction to Pure Phenomenology, 1931). Early followers of Husserl such as German philosopher Max Scheler, influenced by his previous book, Logische Untersuchungen (two volumes, 1900 and 1901; Logical Investigations, 1970), claimed that the task of phenomenology is to study essences, such as the essence of emotions. Although Husserl himself never gave up his early interest in essences, he later held that only the essences of certain special conscious structures are the proper object of phenomenology. As formulated by Husserl after 1910, phenomenology is the study of the structures of consciousness that enable consciousness to refer to objects outside itself. This study requires reflection on the content of the mind to the exclusion of everything else. Husserl called this type of reflection the phenomenological reduction. Because the mind can be directed toward nonexistent as well as real objects, Husserl noted that phenomenological reflection does not presuppose that anything exists, but rather amounts to a ‘bracketing of existence’ - that is, setting aside the question of the real existence of the contemplated object.

What Husserl discovered when he contemplated the content of his mind were such acts as remembering, desiring, and perceiving, in addition to the abstract content of these acts, which Husserl called meanings. These meanings, he claimed, enabled an act to be directed toward an object under a certain aspect; and such directedness, called intentionality, he held to be the essence of consciousness. Transcendental phenomenology, according to Husserl, was the study of the basic components of the meanings that make intentionality possible. Later, in Méditations cartésiennes (1931; Cartesian Meditations, 1960), he introduced genetic phenomenology, which he defined as the study of how these meanings are built up in the course of experience.

All phenomenologists follow Husserl in attempting to use pure description. Thus, they all subscribe to Husserl's slogan ‘To the things themselves.’ They differ among themselves, however, as to whether the phenomenological reduction can be performed, and as to what is manifest to the philosopher giving a pure description of experience. German philosopher Martin Heidegger, Husserl's colleague and most brilliant critic, claimed that phenomenology should make manifest what is hidden in ordinary, everyday experience. He thus attempted in Sein und Zeit (1927; Being and Time, 1962) to describe what he called the structure of everydayness, or being-in-the-world, which he found to be an interconnected system of equipment, social roles, and purposes.

Because, for Heidegger, one is what one does in the world, a phenomenological reduction to one's own private experience is impossible; and because human action consists of a direct grasp of objects, it is not necessary to posit a special mental entity called a meaning to account for intentionality. For Heidegger, being thrown into the world among things in the act of realizing projects is a more fundamental kind of intentionality than that revealed in merely staring at or thinking about objects, and it is this more fundamental intentionality that makes possible the directedness analyzed by Husserl.

In the mid-1900s, the French existentialist Jean-Paul Sartre attempted to adapt Heidegger's phenomenology to the philosophy of consciousness, in effect returning to the approach of Husserl. Sartre agreed with Husserl that consciousness is always directed at objects but criticized his claim that such directedness is possible only by means of special mental entities called meanings. The French philosopher Maurice Merleau-Ponty rejected Sartre's view that phenomenological description reveals human beings to be pure, isolated, and free consciousnesses. He stressed the role of the active, involved body in all human knowledge, thus generalizing Heidegger's insights to include the analysis of perception. Like Heidegger and Sartre, Merleau-Ponty is an existential phenomenologist, in that he denies the possibility of bracketing existence.

Phenomenology has had a pervasive influence on 20th-century thought. Phenomenological versions of theology, sociology, psychology, psychiatry, and literary criticism have been developed, and phenomenology remains one of the most important schools of contemporary philosophy.

William James (1842-1910), American philosopher and psychologist, who developed the philosophy of pragmatism. James was born in New York City. His father, Henry James, Sr., was a Swedenborgian theologian; one of his brothers was the writer Henry James. William James attended private schools in the United States and Europe, the Lawrence Scientific School at Harvard University, and the Harvard Medical School, from which he received a degree in 1869. Before finishing his medical studies, he went on an exploring expedition in Brazil with Swiss American naturalist Louis Agassiz and also studied physiology in Germany. After three years of retirement due to illness, James became an instructor in physiology at Harvard in 1872. He taught psychology and philosophy at Harvard after 1880; he left Harvard in 1907 and gave highly successful lectures at Columbia University and the University of Oxford. James died in Chocorua, New Hampshire.

James’s first book, the monumental Principles of Psychology (1890), established him as one of the most influential thinkers of his time. The work advanced the principle of functionalism in psychology, thus removing psychology from its traditional place as a branch of philosophy and establishing it among the laboratory sciences based on experimental method.

In the next decade James applied his empirical methods of investigation to philosophical and religious issues. He explored the questions of the existence of God, the immortality of the soul, free will, and ethical values by referring to human religious and moral experience as a direct source. His views on these subjects were presented in the lectures and essays published in such books as The Will to Believe and Other Essays in Popular Philosophy (1897), Human Immortality (1898), and The Varieties of Religious Experience (1902). The last-named work is a sympathetic psychological account of religious and mystical experiences.

Later lectures published as Pragmatism: A New Name for Old Ways of Thinking (1907) summed up James’s original contributions to the theory called pragmatism, a term first used by the American logician C. S. Peirce. James generalized the pragmatic method, developing it from a critique of the logical basis of the sciences into a basis for the evaluation of all experience. He maintained that the meaning of ideas is found only in terms of their possible consequences. If consequences are lacking, ideas are meaningless. James contended that this is the method used by scientists to define their terms and to test their hypotheses, which, if meaningful, entail predictions. The hypotheses can be considered true if the predicted events take place. On the other hand, most metaphysical theories are meaningless, because they entail no testable predictions. Meaningful theories, James argued, are instruments for dealing with problems that arise in experience.

According to James’s pragmatism, then, truth is that which works. One determines what works by testing propositions in experience. In so doing, one finds that certain propositions become true. As James put it, ‘Truth is something that happens to an idea’ in the process of its verification; it is not a static property. This does not mean, however, that anything can be true. ‘The true is only the expedient in the way of our thinking, just as ‘the right’ is only the expedient in the way of our behaving,’ James maintained. One cannot believe whatever one wants to believe, because such self-centered beliefs would not work out.

James was opposed to absolute metaphysical systems and argued against doctrines that describe reality as a unified, monolithic whole. In Essays in Radical Empiricism (1912), he argued for a pluralistic universe, denying that the world can be explained in terms of an absolute force or scheme that determines the interrelations of things and events. He held that the interrelations, whether they serve to hold things together or apart, are just as real as the things themselves.

By the end of his life, James had become world-famous as a philosopher and psychologist. In both fields, he functioned more as an originator of new thought than as a founder of dogmatic schools. His pragmatic philosophy was further developed by American philosopher John Dewey and others; later studies in physics by Albert Einstein made the theories of interrelations advanced by James appear prophetic.

Analytic and Linguistic Philosophy is a 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and ‘Oxford philosophy.’ The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language, or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.

A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.

By contrast, many philosophers associated with the movement have focused on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.

Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th century in the English-speaking world.

For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as ‘time is unreal,’ analyses that then aided in determining the truth of such assertions.

Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical view based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements ‘John is good’ and ‘John is tall’ have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property ‘goodness’ as if it were a characteristic of John in the same way that the property ‘tallness’ is a characteristic of John. Such failure results in philosophical confusion.

Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that ‘all philosophy is a “critique of language”’ and that ‘philosophy aims at the logical clarification of thoughts.’ The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts—the propositions of science—are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.

Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism (see Positivism). Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).

The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depend altogether on the meanings of the terms constituting the statement. An example would be the proposition ‘two plus two equals four.’ The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.

The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.

This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.

Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate ‘systematically misleading expressions’ in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.

Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.

Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, is needed in addition to logic in analyzing ordinary language.

Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.

The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigor of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.

G. W. F. Hegel (1770-1831), the German idealist philosopher, who became one of the most influential thinkers of the 19th century. Hegel was born in Stuttgart on August 27, 1770, the son of a revenue officer with the civil service. He was brought up in an atmosphere of Protestant Pietism and became thoroughly acquainted with the Greek and Roman classics while studying at the Stuttgart gymnasium (preparatory school). Encouraged by his father to become a clergyman, Hegel entered the seminary at the University of Tübingen in 1788. There he developed friendships with the poet Friedrich Hölderlin and the philosopher Friedrich Wilhelm Joseph von Schelling. Having completed a course of study in philosophy and theology and having decided not to enter the ministry, Hegel became (1793) a private tutor in Bern, Switzerland. In 1797 he assumed a similar position in Frankfurt. Two years later his father died, leaving a financial legacy that was sufficient to free him from tutoring.

In 1801 Hegel went to the University of Jena, where he studied, wrote, and eventually became a lecturer. At Jena he completed The Phenomenology of Mind (1807; trans. 1910), one of his most important works. He remained at Jena until October 1806, when the city was taken by the French and he was forced to flee. Having exhausted the legacy left him by his father, Hegel became editor of the Bamberger Zeitung in Bavaria. He disliked journalism, however, and moved to Nürnberg, where he served for eight years as headmaster of a Gymnasium.

During the Nürnberg years Hegel met and married Marie von Tucher. Three children were born to the Hegels, a daughter, who died soon after birth, and two sons, Karl and Immanuel. Before his marriage, Hegel had fathered an illegitimate son, Ludwig, who eventually came to live with the Hegels. While at Nürnberg, Hegel published over a period of several years The Science of Logic (1812, 1813, 1816; trans. 1929). In 1816 Hegel accepted a professorship in philosophy at the University of Heidelberg. Soon after, he published in summary form a systematic statement of his entire philosophy entitled Encyclopedia of the Philosophical Sciences in Outline (1817; trans. 1959). In 1818 Hegel was invited to teach at the University of Berlin, where he was to remain. He died in Berlin on November 14, 1831, during a cholera epidemic.

The last full-length work published by Hegel was The Philosophy of Right (1821; trans. 1896), although several sets of his lecture notes, supplemented by students' notes, were published after his death. Published lectures include The Philosophy of Fine Art (1835-38; trans. 1920), Lectures on the History of Philosophy (1833-36; trans. 1892-96), Lectures on the Philosophy of Religion (1832; trans. 1895), and Lectures on the Philosophy of History (1837; trans. 1858).

Strongly influenced by Greek ideas, Hegel also read the works of the Dutch philosopher Baruch Spinoza, the French writer Jean Jacques Rousseau, and the German philosophers Immanuel Kant, Johann Gottlieb Fichte, and Schelling. Although he often disagreed with these philosophers, their influence is evident in his writings.

Hegel's aim was to set forth a philosophical system so comprehensive that it would encompass the ideas of his predecessors and create a conceptual framework in terms of which both the past and future could be philosophically understood. Such an aim would require nothing short of a full account of reality itself. Thus, Hegel conceived the subject matter of philosophy to be reality as a whole. This reality, or the total developmental process of everything that is, he referred to as the Absolute, or Absolute Spirit. According to Hegel, the task of philosophy is to chart the development of Absolute Spirit. This involves (1) making clear the internal rational structure of the Absolute; (2) demonstrating the manner in which the Absolute manifests itself in nature and human history; and (3) explicating the teleological nature of the Absolute, that is, showing the end or purpose toward which the Absolute is directed.

Concerning the rational structure of the Absolute, Hegel, following the ancient Greek philosopher Parmenides, argued that ‘what is rational is real and what is real is rational.’ This must be understood in terms of Hegel's further claim that the Absolute must ultimately be regarded as pure Thought, or Spirit, or Mind, in the process of self-development (see Idealism). The logic that governs this developmental process is dialectic. The dialectical method involves the notion that movement, or process, or progress, is the result of the conflict of opposites. Traditionally, this dimension of Hegel's thought has been analyzed in terms of the categories of thesis, antithesis, and synthesis. Although Hegel tended to avoid these terms, they are helpful in understanding his concept of the dialectic. The thesis, then, might be an idea or a historical movement. Such an idea or movement contains within itself incompleteness that gives rise to opposition, or an antithesis, a conflicting idea or movement. As a result of the conflict a third point of view arises, a synthesis, which overcomes the conflict by reconciling at a higher level the truth contained in both the thesis and antithesis. This synthesis becomes a new thesis that generates another antithesis, giving rise to a new synthesis, and in such a fashion the process of intellectual or historical development is continually generated. Hegel thought that Absolute Spirit itself (which is to say, the sum total of reality) develops in this dialectical fashion toward an ultimate end or goal.

For Hegel, therefore, reality is understood as the Absolute unfolding dialectically in a process of self-development. As the Absolute undergoes this development, it manifests itself both in nature and in human history. Nature is Absolute Thought or Being objectifying itself in material form. Finite minds and human history are the process of the Absolute manifesting itself in that which is most kin to itself, namely, spirit or consciousness. In The Phenomenology of Mind Hegel traced the stages of this manifestation from the simplest level of consciousness, through self-consciousness, to the advent of reason.

The goal of the dialectical cosmic process can be most clearly understood at the level of reason. As finite reason progresses in understanding, the Absolute progresses toward full self-knowledge. Indeed, the Absolute comes to know itself through the human mind's increased understanding of reality, or the Absolute. Hegel analyzed this human progression in understanding in terms of three levels: art, religion, and philosophy. Art grasps the Absolute in material forms, interpreting the rational through the sensible forms of beauty. Art is conceptually superseded by religion, which grasps the Absolute by means of images and symbols. The highest religion for Hegel is Christianity, for in Christianity the truth that the Absolute manifests itself in the finite is symbolically reflected in the incarnation. Philosophy, however, is conceptually supreme, because it grasps the Absolute rationally. Once this has been achieved, the Absolute has arrived at full self-consciousness, and the cosmic drama reaches its end and goal. Only at this point did Hegel identify the Absolute with God. ‘God is God,’ Hegel argued, ‘only in so far as he knows himself.’

In the process of analyzing the nature of Absolute Spirit, Hegel made significant contributions in a variety of philosophical fields, including the philosophy of history and social ethics. With respect to history, his two key explanatory categories are reason and freedom. ‘The only Thought,’ maintained Hegel, ‘which Philosophy brings ... to the contemplation of History, is the simple conception of Reason; that Reason is the Sovereign of the world, that the history of the world, therefore, presents us with a rational process.’ As a rational process, history is a record of the development of human freedom, for human history is a progression from less freedom to greater freedom.

Hegel's social and political views emerge most clearly in his discussion of morality (Moralität) and social ethics (Sittlichkeit). At the level of morality, right and wrong is a matter of individual conscience. One must, however, move beyond this to the level of social ethics, for duty, according to Hegel, is not essentially the product of individual judgment. Individuals are complete only in the midst of social relationships; thus, the only context in which duty can truly exist is a social one. Hegel considered membership in the state one of the individual's highest duties. Ideally, the state is the manifestation of the general will, which is the highest expression of the ethical spirit. Obedience to this general will is the act of a free and rational individual. Hegel emerges as a conservative, but he should not be interpreted as sanctioning totalitarianism, for he also argued that the abridgment of freedom by any actual state is morally unacceptable.

At the time of Hegel's death, he was the most prominent philosopher in Germany. His views were widely taught, and his students were highly regarded. His followers soon divided into right-wing and left-wing Hegelians. Theologically and politically the right-wing Hegelians offered a conservative interpretation of his work. They emphasized the compatibility between Hegel's philosophy and Christianity. Politically, they were orthodox. The left-wing Hegelians eventually moved to an atheistic position. In politics, many of them became revolutionaries. This historically important left-wing group included Ludwig Feuerbach, Bruno Bauer, Friedrich Engels, and Karl Marx. Engels and Marx were particularly influenced by Hegel's idea that history moves dialectically, but they replaced Hegel's philosophical idealism with materialism.

Hegel's metaphysical idealism had a strong impact on 19th-century and early 20th-century British philosophy, notably that of Francis Herbert Bradley, and on such American philosophers as Josiah Royce, and on Italian philosophy through Benedetto Croce. Hegel also influenced existentialism through the Danish philosopher Søren Kierkegaard. Phenomenology has been influenced by Hegel's ideas on consciousness. The extensive and diverse impact of Hegel's ideas on subsequent philosophy is evidence of the remarkable range and the extraordinary depth of his thought.

Ludwig Wittgenstein (1889-1951), Austrian-British philosopher, who was one of the most influential thinkers of the 20th century, particularly noted for his contribution to the movement known as analytic and linguistic philosophy.

Born in Vienna on April 26, 1889, Wittgenstein was raised in a wealthy and cultured family. After attending schools in Linz and Berlin, he went to England to study engineering at the University of Manchester. His interest in pure mathematics led him to Trinity College, University of Cambridge, to study with Bertrand Russell. There he turned his attention to philosophy. By 1918 Wittgenstein had completed his Tractatus Logico-philosophicus (1921; trans. 1922), a work he then believed provided the ‘final solution’ to philosophical problems. Subsequently, he turned from philosophy and for several years taught elementary school in an Austrian village. In 1929 he returned to Cambridge to resume his work in philosophy and was appointed to the faculty of Trinity College. Soon he began to reject certain conclusions of the Tractatus and to develop the position reflected in his Philosophical Investigations (pub. posthumously 1953; trans. 1953). Wittgenstein retired in 1947; he died in Cambridge on April 29, 1951. A sensitive, intense man who often sought solitude and was frequently depressed, Wittgenstein abhorred pretense and was noted for his simple style of life and dress. The philosopher was forceful and confident in personality, however, and he exerted considerable influence on those with whom he came in contact.

Wittgenstein’s philosophical life may be divided into two distinct phases: an early period, represented by the Tractatus, and a later period, represented by the Philosophical Investigations. Throughout most of his life, however, Wittgenstein consistently viewed philosophy as linguistic or conceptual analysis. In the Tractatus he argued that ‘philosophy aims at the logical clarification of thoughts.’ In the Philosophical Investigations, however, he maintained that ‘philosophy is a battle against the bewitchment of our intelligence by means of language.’

Language, Wittgenstein argued in the Tractatus, is composed of complex propositions that can be analyzed into less complex propositions until one arrives at simple or elementary propositions. Correspondingly, the world is composed of complex facts that can be analyzed into less complex facts until one arrives at simple, or atomic, facts. The world is the totality of these facts. According to Wittgenstein’s picture theory of meaning, it is the nature of elementary propositions logically to picture atomic facts, or ‘states of affairs.’ He claimed that the nature of language required elementary propositions, and his theory of meaning required that there be atomic facts pictured by the elementary propositions. On this analysis, only propositions that picture facts - the propositions of science - are considered cognitively meaningful. Metaphysical and ethical statements are not meaningful assertions. The logical positivists associated with the Vienna Circle were greatly influenced by this conclusion.

Wittgenstein came to believe, however, that the narrow view of language reflected in the Tractatus was mistaken. In the Philosophical Investigations he argued that if one actually looks to see how language is used, the variety of linguistic usage becomes clear. Words are like tools, and just as tools serve different functions, so linguistic expressions serve many functions. Although some propositions are used to picture facts, others are used to command, question, pray, thank, curse, and so on. This recognition of linguistic flexibility and variety led to Wittgenstein’s concept of a language game and to the conclusion that people play different language games. The scientist, for example, is involved in a different language game than the theologian. Moreover, the meaning of a proposition must be understood in terms of its context, that is, in terms of the rules of the game of which that proposition is a part. The key to the resolution of philosophical puzzles is the therapeutic process of examining and describing language in use.

Existentialism is a philosophical movement or tendency, emphasizing individual existence, freedom, and choice, that influenced many diverse writers in the 19th and 20th centuries.

Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.

Most philosophers since Plato have held that the highest ethical good is the same for everyone; insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existential, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, ‘I must find a truth that is true for me . . . the idea for which I can live or die.’ Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.

All existentialists have followed Kierkegaard in stressing the importance of passionate individual action in deciding questions of both morality and truth. They have insisted, accordingly, that personal experience and acting on one's own convictions are essential in arriving at the truth. Thus, the understanding of a situation by someone involved in that situation is superior to that of a detached, objective observer. This emphasis on the perspective of the individual agent has also made existentialists suspicious of systematic reasoning. Kierkegaard, Nietzsche, and other existentialist writers have been deliberately unsystematic in the exposition of their philosophies, preferring to express themselves in aphorisms, dialogues, parables, and other literary forms. Despite their antirationalist position, however, most existentialists cannot be said to be irrationalists in the sense of denying all validity to rational thought. They have held that rational clarity is desirable wherever possible, but that the most important questions in life are not accessible to reason or science. Furthermore, they have argued that even science is not as rational as is commonly supposed. Nietzsche, for instance, asserted that the scientific assumption of an orderly universe is for the most part a useful fiction.

Perhaps the most prominent theme in existentialist writing is that of choice. Humanity's primary distinction, in the view of most existentialists, is the freedom to choose. Existentialists have held that human beings do not have a fixed nature, or essence, as other animals and plants do; each human being makes choices that create his or her own nature. In the formulation of the 20th-century French philosopher Jean-Paul Sartre, existence precedes essence. Choice is therefore central to human existence, and it is inescapable; even the refusal to choose is a choice. Freedom of choice entails commitment and responsibility. Because individuals are free to choose their own path, existentialists have argued, they must accept the risk and responsibility of following their commitment wherever it leads.

Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God's way of calling each individual to make a commitment to a personally valid way of life. The word anxiety (German Angst) has a similarly crucial role in the work of the 20th-century German philosopher Martin Heidegger; anxiety leads to the individual's confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual's recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.

Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries, but elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many premodern philosophers and writers.

The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: The human self, which combines mind and body, is itself a paradox and contradiction.

Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual's response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual therefore must always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a ‘leap of faith’ into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.

Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the ‘death of God’ and went on to reject the entire Judeo-Christian moral tradition in favor of a heroic pagan ideal.

Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis—in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology (see Metaphysics) as well as on language.

Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a ‘futile passion.’ Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.

Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially that a personal sense of authenticity and commitment is essential to religious faith.

A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), ‘We must love life more than the meaning of it.’

In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; trans. 1937) and The Castle (1926; trans. 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theater of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.

Arthur Schopenhauer (1788-1860), the German philosopher known for his philosophy of pessimism, was born in Danzig (now Gdańsk, Poland) on February 22, 1788. Schopenhauer was educated at the universities of Göttingen, Berlin, and Jena. He then settled in Frankfurt am Main, where he led a solitary life and became deeply involved in the study of Buddhist and Hindu philosophies and mysticism. He was also influenced by the ideas of the German Dominican theologian, mystic, and eclectic philosopher Meister Eckhart, the German theosophist and mystic Jakob Boehme, and the scholars of the Renaissance and the Enlightenment. In his principal work, The World as Will and Idea (1819; trans. 1883), he set out the dominant ethical and metaphysical elements of his atheistic and pessimistic philosophy.

Schopenhauer disagreed with the school of idealism and was strongly opposed to the ideas of the German philosopher Georg Wilhelm Friedrich Hegel, who believed in the spiritual nature of all reality. Instead, Schopenhauer accepted, with some qualification in details, the view of the German philosopher Immanuel Kant that phenomena exist only insofar as the mind perceives them, as ideas. He did not, however, agree with Kant that the ‘thing-in-itself’ (Ding an sich), or the ultimate reality, lies hopelessly beyond experience. He identified it with experienced will instead. According to Schopenhauer, however, will is not limited to voluntary action with foresight; all the experienced activity of the self is will, including unconscious physiological functionings. This will is the inner nature of each experiencing being and assumes in time and space the appearance of the body, which is an idea. Starting from the principle that the will is the inner nature of his own body as an appearance in time and space, Schopenhauer concluded that the inner reality of all material appearances is will; the ultimate reality is one universal will.

For Schopenhauer the tragedy of life arises from the nature of the will, which constantly urges the individual toward the satisfaction of successive goals, none of which can provide permanent satisfaction for the infinite activity of the life force, or will. Thus, the will inevitably leads a person to pain, suffering, and death and into an endless cycle of birth, death, and rebirth, and the activity of the will can only be brought to an end through an attitude of resignation, in which the reason governs the will to the extent that striving ceases.

This conception of the source of life in will came to Schopenhauer through insights into the nature of consciousness as essentially impulsive. He revealed a strong Buddhist influence in his metaphysics and a successful confluence of Buddhist and Christian ideas in his ethical doctrines. From the epistemological point of view, Schopenhauer's ideas belonged to the school of phenomenalism.

Renowned for his hostile attitude toward women, Schopenhauer subsequently applied his insights to a consideration of the principles underlying human sexual activity, arguing that individuals are driven together not by feelings of sentimental love but by the irrational impulses of the will. The influence of Schopenhauer's philosophy may be seen in the early works of the German philosopher and poet Friedrich Wilhelm Nietzsche, in the music dramas of the German composer Richard Wagner, and in much of the philosophical and artistic work of the 20th century. Schopenhauer died September 21, 1860.

Friedrich Nietzsche (1844-1900), German philosopher, poet, and classical philologist, who was one of the most provocative and influential thinkers of the 19th century. Nietzsche was born in Röcken, Prussia. His father, a Lutheran minister, died when Nietzsche was five, and Nietzsche was raised by his mother in a home that included his grandmother, two aunts, and a sister. He studied classical philology at the universities of Bonn and Leipzig and was appointed professor of classical philology at the University of Basel at the age of 24. Ill health (he was plagued throughout his life by poor eyesight and migraine headaches) forced his retirement in 1879. Ten years later he suffered a mental breakdown from which he never recovered. He died in Weimar in 1900.

In addition to the influence of Greek culture, particularly the philosophies of Plato and Aristotle, Nietzsche was influenced by German philosopher Arthur Schopenhauer, by the theory of evolution, and by his friendship with German composer Richard Wagner.

Nietzsche’s first major work, Die Geburt der Tragödie aus dem Geiste der Musik (The Birth of Tragedy), appeared in 1872. His most prolific period as an author was the 1880s. During the decade he wrote Also sprach Zarathustra (Parts I-III, 1883-1884; Part IV, 1885; translated as Thus Spake Zarathustra); Jenseits von Gut und Böse (1886; Beyond Good and Evil); Zur Genealogie der Moral (1887; On the Genealogy of Morals); Der Antichrist (1888; The Antichrist); and Ecce Homo (completed 1888, published 1908). Nietzsche’s last major work, The Will to Power (Der Wille zur Macht), was published in 1901.

One of Nietzsche’s fundamental contentions was that traditional values (represented primarily by Christianity) had lost their power in the lives of individuals. He expressed this in his proclamation God is dead. He was convinced that traditional values represented a slave morality, a morality created by weak and resentful individuals who encouraged such behavior as gentleness and kindness because the behavior served their interests. Nietzsche claimed that new values could be created to replace the traditional ones, and his discussion of the possibility led to his concept of the overman or superman.

According to Nietzsche, the masses (whom he termed the herd or mob) conform to tradition, whereas his ideal overman is secure, independent, and highly individualistic. The overman feels deeply, but his passions are rationally controlled. Concentrating on the real world, rather than on the rewards of the next world promised by religion, the overman affirms life, including the suffering and pain that accompany human existence. Nietzsche’s overman is a creator of values, a creator of a master morality that reflects the strength and independence of one who is liberated from all values, except those that he deems valid.

Nietzsche maintained that all human behavior is motivated by the will to power. In its positive sense, the will to power is not simply power over others, but the power over oneself that is necessary for creativity. Such power is manifested in the overman's independence, creativity, and originality. Although Nietzsche explicitly denied that any overmen had yet arisen, he mentions several individuals who could serve as models. Among these models he lists Jesus, Greek philosopher Socrates, Florentine thinker Leonardo da Vinci, Italian artist Michelangelo, English playwright William Shakespeare, German author Johann Wolfgang von Goethe, Roman ruler Julius Caesar, and French emperor Napoleon I.

The concept of the overman has often been interpreted as one that postulates a master-slave society and has been identified with totalitarian philosophies. Many scholars deny the connection and attribute it to misinterpretation of Nietzsche's work.

An acclaimed poet, Nietzsche exerted much influence on German literature, as well as on French literature and theology. His concepts have been discussed and elaborated upon by such individuals as German philosophers Karl Jaspers and Martin Heidegger, and German Jewish philosopher Martin Buber, German American theologian Paul Tillich, and French writers Albert Camus and Jean-Paul Sartre. After World War II (1939-1945), American theologians Thomas J. J. Altizer and Paul Van Buren seized upon Nietzsche's proclamation God is dead in their attempt to make Christianity relevant to its believers in the 1960s and 1970s.

Pragmatism, a philosophical movement that has had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.

Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept ‘brittle,’ for example, is given by the observed consequences or properties that objects called ‘brittle’ exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth; they emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called ‘the will to believe’ and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian worldview in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has renewed in the classic pragmatists - Peirce, James, and Dewey - as an alternative to Rorty’s interpretation of the tradition.

In an ever-changing world, pragmatism has many benefits. It defends social experimentation as a means of improving society, accepts pluralism, and rejects dead dogmas. But a philosophy that offers no final answers or absolutes and that appears vague as a result of trying to harmonize opposites may also be unsatisfactory to some.

In philosophy, the concepts with which we approach the world themselves become the topic of enquiry. A philosophy of a discipline such as history, physics, or law seeks not so much to solve historical, physical, or legal questions as to study the conceptual representations that structure such thinking; in this sense philosophy is what happens when a practice becomes dialectically self-conscious. The borderline between such ‘second-order’ reflection and ways of practicing the first-order discipline itself is not always clear: philosophical problems may be tamed by the advance of a discipline, and the conduct of a discipline may be swayed by philosophical reflection. The suggestion that the kinds of self-conscious reflection making up philosophy occur only when a way of life is sufficiently mature to be already passing neglects the fact that self-consciousness and reflection co-exist with activity; an active social and political movement, for example, will co-exist with reflection on the categories within which it frames its position.

At different times there has been more or less optimism about the possibility of a pure ‘first philosophy’, taken to provide a standpoint from which other intellectual practices can be impartially assessed and subjected to logical evaluation and correction. This standpoint now seems to many philosophers to be a fantasy. The contemporary spirit of the subject is hostile to any such possibility, and prefers to see philosophical reflection as continuous with the best practice of any field of intellectual enquiry.

The principles that lie at the basis of an enquiry may be accepted as first principles in one phase of enquiry only to be questioned or rejected at another. For example, the philosophy of mind seeks to answer such questions as: Is mind distinct from matter? Can we give principled reasons for deciding whether other creatures are conscious, or whether machines can be made so that they are conscious? What are thinking, feeling, experience, and remembering? Is it useful to divide the functions of the mind up, separating memory from intelligence, or rationality from sentiment, or do mental functions form an integrated whole? The dominant philosophies of mind in the current Western tradition include varieties of physicalism and functionalism.

The philosophy of language is the general attempt to understand the components of a working language, the relationship that an understanding speaker has to its elements, and the relationship they bear to the world: the subject therefore embraces the traditional division of semiotics into syntax, semantics, and pragmatics. It mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language, and with the metaphysics of truth and the relationship between sign and object. Much philosophy, especially in the 20th century, has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind and the distinctive way in which we give shape to metaphysical beliefs. Particular problems include logical form and the basis of the division between syntax and semantics, as well as the number and nature of specifically semantic relationships such as ‘meaning’, ‘reference’, ‘predication’, and ‘quantification’. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.
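To make the notions of logical form, predication, and quantification more concrete, consider a standard textbook example (offered here as an illustration, not taken from the passage above). The English sentence ‘Every student read some book’ has the surface grammatical form subject-verb-object, but its most natural logical form makes the quantificational structure explicit; in LaTeX notation:

\forall x\, (\mathrm{Student}(x) \to \exists y\, (\mathrm{Book}(y) \land \mathrm{Read}(x, y)))

Here ‘Student’, ‘Book’, and ‘Read’ are predicates, and the quantifiers \forall and \exists bind the variables x and y. The gap between the grammatical and the logical form of such sentences is one of the problems the paragraph above gestures at.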

A formal system is a theory whose sentences are well-formed formulae of a logical calculus, and in which axioms or rules governing particular terms correspond to the principles of the theory being formalized. The theory is intended to be couched or framed in the language of a calculus, e.g., first-order predicate calculus. Set theory, mathematics, mechanics, and many other bodies of doctrine may be axiomatized and developed formally, thereby making possible logical analysis of such matters as the independence of various axioms and the relations between one theory and another.

The term 'logical calculus' is also used for a formal language or logical system: a system in which explicit rules are provided for determining (1) which expressions belong to the system, (2) which sequences of symbols count as well formed (well-formed formulae), and (3) which sequences of formulae count as proofs. A system may also include axioms, at which the branches of a proof terminate. The best-known examples are the propositional calculus and the predicate calculus.
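
As an illustrative sketch only (the particular axiom set is one standard choice, not one specified in this text), a Hilbert-style propositional calculus can be laid out with formation rules, axiom schemas, and a single rule of inference. Formation rules: every propositional variable \(p, q, r, \ldots\) is a well-formed formula (wff); if \(A\) and \(B\) are wffs, so are \(\neg A\) and \((A \to B)\). Axiom schemas:
\[
A \to (B \to A), \qquad
(A \to (B \to C)) \to ((A \to B) \to (A \to C)), \qquad
(\neg A \to \neg B) \to (B \to A).
\]
Rule of inference (modus ponens): from \(A\) and \(A \to B\), infer \(B\). A proof is then a finite sequence of wffs, each of which is either an instance of an axiom schema or follows from earlier members by modus ponens; this exhibits in miniature the three clauses (1)-(3) above.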

The most immediate issues surrounding certainty are connected with those concerning scepticism. Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter, e.g., ethics, or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g., there is a gulf between appearance and reality, and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus. The scepticism of Pyrrho and the new Academy was thus a system of argument opposed to dogmatism, and particularly to the philosophical system-building of the Stoics.

As it has come down to us, particularly in the writings of Sextus Empiricus, its method was typically to cite reasons for finding an issue undecidable (sceptics devoted particular energy to undermining the Stoics' conception of some truths as delivered by direct apprehension, or katalepsis). As a result the sceptics counselled epochê, or the suspension of belief, and went on to celebrate a way of life whose object was ataraxia, the tranquillity resulting from suspension of belief.

Mitigated scepticism accepts everyday or commonsense belief, not as the deliverance of reason but as due more to custom and habit, while remaining suspicious of the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by the ancient sceptics from Pyrrho through to Sextus Empiricus. Although the phrase 'Cartesian scepticism' is sometimes used, Descartes himself was not a sceptic; in the 'method of doubt' he uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes trusts in the category of 'clear and distinct' ideas, not far removed from the phantasia kataleptikê of the Stoics.

Sceptics have traditionally held that knowledge requires certainty, and, of course, they claim that certain knowledge is not possible. (Compare the principle that every effect is a consequence of an antecedent cause or causes: for causality to hold it is not necessary that an effect be predictable, since the antecedent causes may be too numerous, too complicated, or too interrelated for analysis.) In order to avoid scepticism, its opponents have generally held that knowledge does not require certainty. Except for alleged cases of things that are evident for one just by being true, it has often been thought that anything known must satisfy certain criteria, or standards, as well as being true: whether arrived at by deduction or by induction, there will be criteria specifying when a belief is warranted. Apart from alleged cases of self-evident truths, there will be some general principle specifying the sort of considerations that make accepting a belief warranted to some degree.

Besides, there is another view, the absolute global view that we do not have any knowledge whatsoever. However, it is doubtful that any philosopher seriously entertains absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about assenting to 'the evident'; the non-evident is any belief that requires evidence in order to be warranted.

René Descartes (1596-1650), in his sceptical guise, never doubted the contents of his own ideas; what he challenged was whether they 'corresponded' to anything beyond ideas.

All the same, Pyrrhonian and Cartesian forms of virtually global scepticism have been held and defended. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic's mill. The Pyrrhonist will suggest that no non-evident, empirical belief is sufficiently warranted, whereas a Cartesian sceptic will agree that no empirical belief about anything other than one's own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. The essential difference between the two views thus concerns the stringency of the requirements for a belief's being sufficiently warranted to count as knowledge.

A Cartesian requires certainty; a Pyrrhonist merely requires that a belief be more warranted than its negation.

Cartesian scepticism, more influenced by the way Descartes argues for scepticism than by his reply to it, holds that we do not have knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions, because there is no way justifiably to deny that our senses are being stimulated by some cause radically different from the objects we normally take to affect them. If the Pyrrhonists are the agnostics, the Cartesian sceptic is the atheist.

Because the Pyrrhonist requires much less of a belief in order for it to count as knowledge than does the Cartesian, arguments for Pyrrhonism are much more difficult to construct. A Pyrrhonist must show that there is no better set of reasons for believing any non-evident proposition than for withholding belief, whereas the Cartesian need only appeal to the requirement of certainty.

Among pragmatism's many contributions to the theory of knowledge, it is possible to identify a set of shared doctrines, and also to discern two broad styles of pragmatism. Both styles agree that the Cartesian approach is fundamentally flawed, but they respond to that flaw very differently.

Both repudiate the requirement of absolute certainty for knowledge and insist on the connection of knowledge with activity. Reformist pragmatism, however, accepts the legitimacy of traditional questions about the truth-conduciveness of our cognitive practices, and sustains a conception of truth objective enough to give those questions their point.

Revolutionary pragmatism, by contrast, relinquishes this conception of objectivity and acknowledges no legitimate epistemological questions over and above those that arise naturally within our current cognitive practices.

It seems clear that certainty is a property that can be ascribed either to a person or to a belief. We can say that a person 'S' is certain, or we can say that a proposition 'p' is certain. The two uses can be connected by saying that 'S' has the right to be certain just in case 'p' is sufficiently warranted.

In defining certainty, it is crucial to note that the term has both an absolute and a relative sense. Roughly, we take a proposition to be certain when we have no doubt about its truth. We may do this in error or unreasonably, but objectively a proposition is certain when such absence of doubt is justifiable. The sceptical tradition in philosophy denies that objective certainty is often possible, or ever possible, either for any proposition at all or for any proposition from some suspect family (ethics, theory, memory, empirical judgement, etc.). A major sceptical weapon is the possibility of upsetting events that can cast doubt back onto what were hitherto taken to be certainties. Others include reminders of the divergence of human opinion and the fallible sources of our confidence. Foundationalist approaches to knowledge look for a basis of certainty upon which the structure of our system of beliefs is built. Others reject the metaphor, looking instead for mutual support and coherence, without foundations.

In moral theory, by contrast, the corresponding absolutist view is that there are inviolable moral standards whose authority does not depend on variable human desires, policies, or prescriptions.

In spite of the notorious difficulty of reading Kantian ethics, the basic distinction is clear. A hypothetical imperative embeds a command which is in place only given some antecedent desire or project: 'If you want to look wise, stay quiet'. The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, the injunction lapses. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclination. It could be represented as, for example, 'Tell the truth (regardless of whether you want to or not)'. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.

In Grundlegung zur Metaphysik der Sitten (1785), Kant discussed five forms of the categorical imperative: (1) the formula of universal law: 'act only on that maxim through which you can at the same time will that it should become universal law'; (2) the formula of the law of nature: 'act as if the maxim of your action were to become through your will a universal law of nature'; (3) the formula of the end-in-itself: 'act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end'; (4) the formula of autonomy, or considering 'the will of every rational being as a will which makes universal law'; and (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.

A categorical proposition, by contrast, is one that is not conditional: 'p', affirmed or denied outright. Modern opinion is wary of the distinction, since what appears categorical may vary with notation. Apparently categorical propositions may also turn out to be disguised conditionals: 'X is intelligent' (categorical?) may amount to 'if X is given a range of tasks, she performs them better than many people' (conditional?). The problem, nonetheless, is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical and therefore solid come to seem, by contrast, conditional, or purely hypothetical or potential.

Apart from its everyday sense as a limited area of knowledge or endeavour, 'field' is a central concept of physical theory. A field is defined by the distribution of a physical quantity, such as temperature, mass density, or potential energy, at different points in space. In the particularly important example of force fields, such as gravitational, electrical, and magnetic fields, the field value at a point is the force which a test particle would experience if it were located at that point. The philosophical problem is whether a force field is to be thought of as purely potential, so that the presence of a field merely describes the propensity of masses to move relative to each other, or whether it should be thought of in terms of physically real modifications of a medium, whose properties result in such powers. That is, are force fields purely potential, fully characterized by dispositional statements or conditionals, or are they categorical or actual? The former option seems to require admitting ungrounded dispositions, or regions of space that differ only in what happens if an object is placed there. The law-like shape of these dispositions, apparent for example in the curved lines of force of the magnetic field, may then seem quite inexplicable. To atomists, such as Newton, it would represent a return to Aristotelian entelechies, or quasi-psychological affinities between things, which are responsible for their motions. The latter option requires understanding how forces of attraction and repulsion can be 'grounded' in the properties of the medium.
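
By way of illustration (these are standard textbook formulas, not drawn from the text above), the field value at a point is the force a test body would feel there, per unit of test charge or test mass; for a point mass \(M\) at the origin the gravitational field is:
\[
\mathbf{E}(\mathbf{r}) = \frac{\mathbf{F}_{\text{on } q}(\mathbf{r})}{q},
\qquad
\mathbf{g}(\mathbf{r}) = \frac{\mathbf{F}_{\text{on } m}(\mathbf{r})}{m} = -\frac{GM}{r^{2}}\,\hat{\mathbf{r}} .
\]
The dispositional reading treats such equations as compact summaries of what would happen to a test body placed at \(\mathbf{r}\); the categorical reading treats \(\mathbf{g}\) and \(\mathbf{E}\) as real conditions of space or of a medium at \(\mathbf{r}\).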

The basic idea of a field is arguably present in Leibniz, who was certainly hostile to Newtonian atomism, although his equal hostility to 'action at a distance' muddies the waters. The idea is usually credited to the Jesuit mathematician and scientist Joseph Boscovich (1711-87) and to Immanuel Kant (1724-1804), both of whom influenced the scientist Faraday, with whose work the physical notion became established. In his paper 'On the Physical Character of the Lines of Magnetic Force' (1852), Faraday suggested several criteria for assessing the physical reality of lines of force, such as whether they are affected by an intervening material medium, and whether the motion of a body depends on the nature of what is placed at the receiving end. As far as electromagnetic fields go, Faraday himself inclined to the view that the mathematical similarity between heat flow, currents, and electromagnetic lines of force was evidence for the physical reality of the intervening medium.

The pragmatist theory of truth, especially associated with the American psychologist and philosopher William James (1842-1910), holds that the truth of a statement can be defined in terms of the 'utility' of accepting it. Put so baldly, the view is open to an obvious objection, since there are things that are false that it may be useful to accept, and conversely there are things that are true that it may be damaging to accept. Nevertheless, there are deep connections between the idea that a representational system is accurate and the likely success of the projects of those who possess it. The evolution of a system of representation, whether perceptual or linguistic, seems bound to connect success with adaptation or with utility in some modest sense. The Wittgensteinian doctrine that meaning is use bears on the nature of belief and its relations with human attitudes and emotions, and on the connection between belief and truth on the one hand and action on the other. One way of cementing the connection is the idea that natural selection has adapted us as cognitive creatures: because beliefs have effects, they must work. Pragmatist themes can be found in Kant's doctrines, and continue to play an influential role in the theory of meaning and truth.

James (1842-1910), who with characteristic generosity exaggerated his debt to Charles S. Peirce (1839-1914), charged that the method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and criticized its individualist insistence that the ultimate test of certainty is to be found in the individual's consciousness.

From his earliest writings, James understood cognitive processes in teleological terms. Thought, he held, assists us in the satisfaction of our interests. His 'Will to Believe' doctrine, the view that we are sometimes justified in believing beyond the available evidence, relies upon the notion that a belief's benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.

Such an approach, however, set’s James’ theory of meaning apart from verification, dismissive of metaphysics. Unlike the verificationalist, who takes cognitive meaning to be a matter only of consequences in sensory experience. James’ took pragmatic meaning to include emotional and matter responses. Moreover, his ,metaphysical standard of value, not a way of dismissing them as meaningless. It should also be noted that in a greater extent, circumspective moments James did not hold that even his broad set of consequences were exhaustive of a terms meaning. ‘Theism’, for example, he took to have antecedent, definitional meaning, in addition to its varying degree of importance and chance upon an important pragmatic meaning.

James’ theory of truth reflects upon his teleological conception of cognition, by considering a true belief to be one which is compatible with our existing system of beliefs, and leads us to satisfactory interaction with the world.

However, Peirce’s famous pragmatist principle is a rule of logic employed in clarifying our concepts and ideas. Consider the claim the liquid in a flask is an acid, if, we believe this, we except that it would turn red: We except an action of ours to have certain experimental results. The pragmatic principle holds that listing the conditional expectations of this kind, in that we associate such immediacy with applications of a conceptual representation that provides a complete and orderly sets clarification of the concept. This is relevant ti the logic of abduction: Clarificationists using the pragmatic principle provides all the information about the content of a hypothesis that is relevantly to decide whether it is worth testing.

Most importantly, the pragmatic principle informs Peirce's account of reality: when we take something to be real, we think it is 'fated to be agreed upon by all who investigate' the matter to which it relates. In other words, if I believe that it is really the case that 'p', then I expect that if anyone were to inquire deeply enough into whether 'p', they would arrive at the belief that 'p'. It is not part of the theory that the experimental consequences of our actions should be specified in a narrowly empiricist vocabulary - Peirce insisted that perceptual judgements are themselves theory-laden. Nor is it his view that the collected conditionals that clarify a concept are all analytic. In addition, in later writings he argued that the pragmatic principle could only be made plausible to someone who accepted metaphysical realism: it requires that 'would-bes' are objective and, of course, real.

If realism itself can be given a fairly quick characterization, it is more difficult to chart the various forms of opposition to it, for they are legion. Some opponents deny that the entities posited by the relevant discourse exist, or at least exist independently of us. The standard example is 'idealism': the view that reality is somehow mind-dependent or mind-co-ordinated, that the real objects comprising the 'external world' do not exist independently of minds, but only as in some way correlative to mental operations. The doctrine of idealism centres on the conception that reality as we understand it is meaningful and reflects the workings of mind, and it construes this as meaning that the inquiring mind itself makes a formative contribution not merely to our understanding of the nature of the 'real' but even to the resulting character we attribute to it.

The term 'real' is most straightforwardly used when qualifying another linguistic form: a real 'x' may be contrasted with a fake 'x', a failed 'x', a near 'x', and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to its existence by some accepted theory. The central error in thinking of reality as the totality of existence is to think of the 'unreal' as a separate domain of things, somehow deprived of the benefits of existence.

Talk of the non-existence of all things, or of 'Nothing', is often the product of the logical confusion of treating the term 'nothing' as itself a referring expression instead of a 'quantifier'. (Stated informally, a quantifier is an expression that reports the quantity of things in some class, or domain, that satisfy a predicate.) This confusion leads the unsuspecting to think that a sentence such as 'Nothing is all around us' talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate 'is all around us' has application. The feelings that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothing are not properly the experience of nothing, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see, as usual, in the corner has disappeared. The difference between existentialism and analytic philosophy on this point is that whereas the former is afraid of Nothing, the latter thinks that there is nothing to be afraid of.
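
As a sketch of the logical point (the predicate letter is illustrative only), the quantifier treatment renders the sentence as the denial that a predicate has application, not as a statement about an object named 'nothing'. Writing \(A(x)\) for 'x is all around us':
\[
\text{`Nothing is all around us'} \;\Longrightarrow\; \neg \exists x\, A(x) \;\equiv\; \forall x\, \neg A(x),
\]
rather than \(A(n)\) for some purported referent \(n\) of the word 'nothing'.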

A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other problems arise over conceptualizing empty space and time.

The debate over realism is the standard opposition between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (b. 1925), is borrowed from the 'intuitionistic' critique of classical mathematics: that the unrestricted use of the 'principle of bivalence' is the trademark of realism. However, this has to overcome counter-examples both ways. Although Aquinas was a moral realist, he held that moral reality was not sufficiently structured to make every moral claim true or false, unlike Kant, who believed that he could use the law of bivalence happily in mathematics precisely because mathematics was our own construction. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things: surrounding objects really exist independently of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the orthodox opposition to realism has come from philosophers such as Goodman, who is impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.

The modern treatment of existence in the theory of 'quantification' is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is itself an operator on a predicate, indicating that the property it expresses has instances. Existence is therefore treated as a second-order property, or a property of properties. It is fitting to say that in this it is like number, for when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that an affirmation of existence is merely a denial of the number nought. A problem is nevertheless created by sentences like 'This exists', where some particular thing is indicated: such a sentence seems to express a contingent truth (for this might not have existed), yet no other predicate is involved. 'This exists' is therefore unlike 'Tame tigers exist', where a property is said to have an instance, for the word 'this' does not locate a property, but only an individual.
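
A brief formal gloss (the notation is illustrative, with \(F\) an arbitrary predicate and \(\#\) read as 'the number of'):
\[
\text{`Tame tigers exist'} \;\Longrightarrow\; \exists x\,(\mathrm{Tiger}(x) \wedge \mathrm{Tame}(x)),
\qquad
\exists x\, F(x) \;\Longleftrightarrow\; \#\{x : F(x)\} \neq 0 ,
\]
the second equivalence being one way of putting Frege's dictum. The difficulty noted above is that 'This exists' supplies no predicate \(F\) for the quantifier to operate on.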

Possible worlds seem able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution or exemplification of properties.

Philosophers have pondered Being, conceived as what belongs to everything real as against the unreal. Yet since everything real has being, it is not apparent that there can be such a subject as Being by itself. Nevertheless, the concept has had a central place in philosophy from Parmenides to Heidegger. The essential question, 'Why is there something and not nothing?', prompts logical reflection on what it is for a universal to have an instance, and a long history of attempts to explain contingent existence by reference to a necessary ground.

In the tradition since Plato, this ground becomes a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or God, but whose relation with the everyday world remains obscure. The celebrated ontological argument for the existence of God was first propounded by Anselm in his Proslogion. The argument proceeds by defining God as 'something than which nothing greater can be conceived'. God then exists in the understanding, since we understand this concept. However, if He existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we could conceive of something greater than that than which nothing greater can be conceived, which is contradictory. Therefore, God cannot exist only in the understanding, but must exist in reality.

The cosmological argument is an influential argument (or family of arguments) for the existence of God. Its premiss is that all natural things are dependent for their existence on something else; the totality of dependent beings must then itself depend upon a non-dependent, or necessarily existent, being, which is God. Like the argument from design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.

Its main problem, nonetheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely arises again. So the 'God' that ends the regress must exist necessarily: it must not be an entity about which the same kind of question can be raised. The other problem with the argument is that of attributing concern and care to the deity, that is, of connecting the necessarily existent being it derives with human values and aspirations.

The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the unconverted, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving not that because our idea of God is that of id quo maius cogitari nequit, therefore God exists, but that because this is our idea of God, we stand committed to belief in its existence. Its existence is a metaphysical point, or absolute presupposition, of certain forms of thought.

In the 20th century, modal versions of the ontological argument were propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version defines something as unsurpassably great if it exists and is perfect in every 'possible world', and then asks us to allow that it is at least possible that an unsurpassably great being exists. This means that there is a possible world in which such a being exists. However, if it exists in one world, it exists in all (for the fact that such a being exists in one world entails that it exists and is perfect in every world), so it exists necessarily. The natural response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is much more dangerous than it looks, since in the modal logic involved, from 'possibly necessarily p' we can derive 'necessarily p'. A symmetrical proof starting from the assumption that it is possible that such a being does not exist would derive that it is impossible that it exists.
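
The modal step mentioned here can be set out compactly; the sketch below assumes the system S5, which such arguments are standardly taken to presuppose, and writes \(g\) for 'an unsurpassably great being exists':
\[
\Diamond\Box p \;\rightarrow\; \Box p \quad\text{(a theorem of S5)},
\qquad\text{so}\qquad
\Diamond\Box g \;\vdash_{\mathrm{S5}}\; \Box g .
\]
Semantically: if \(\Box p\) holds at some world \(w\), then \(p\) holds at every world accessible from \(w\); since S5 accessibility is an equivalence relation linking all the worlds of the (generated) frame, \(p\) holds at every world, and so \(\Box p\) holds at the actual world. The symmetrical proof runs the same schema on the premiss that necessary non-existence is possible: \(\Diamond\Box\neg g \vdash_{\mathrm{S5}} \Box\neg g\).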

The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that, as a result of the omission, the same result occurs. Thus, suppose that I wish you dead. If I act to bring about your death, I am a murderer; however, if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine, not a murderer. Critics reply that omissions can be as deliberate and immoral as actions: if I am responsible for your food and fail to feed you, my omission is surely a killing. 'Doing nothing' can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and, depending on the context, may be a way of deceiving, betraying, or killing. Nonetheless, criminal law finds it convenient to distinguish discontinuing an intervention, which may be permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that bears any general moral weight.

The principle of double effect attempts to define when an action that has both good and bad results is morally permissible. In one formulation, such an action is permissible if (1) the action is not wrong in itself, (2) the bad consequence is not that which is intended, (3) the good is not itself a result of the bad consequence, and (4) the two consequences are commensurate. Thus, for instance, I might justifiably bomb an enemy factory, foreseeing but not intending the death of nearby civilians, whereas bombing with the intention of killing nearby civilians would be disallowed. The principle has its roots in Thomist moral philosophy. St. Thomas Aquinas (1225-74) held that it is as meaningless to ask whether a human being is two things (soul and body) or one as it is to ask whether the wax and the shape given to it by the stamp are one or two: on this analogy, the soul is the form of the body. Life after death is possible only because a form itself does not perish (perishing is a loss of form).

The form is therefore, in some sense, available to animate a new body. It is thus not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. On Aquinas's account, a person has no privileged self-understanding: we understand ourselves as we do everything else, by way of sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether and to flirt with the coherence theory of truth; it is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable 'myth of the given'.

The special way that we each have of knowing our own thoughts, intentions, and sensations has led many philosophical behaviourists and functionalists to find it important to deny that there is such a special way, arguing that I know of my own mind in much the same way that I know of yours, e.g., by seeing what I say when asked. Others, however, point out that reporting the results of introspection is a particular and legitimate kind of behavioural access that deserves notice in any account of human psychology. The philosophy of history is reflection upon the nature of history, or of historical thinking. The term was used in the 18th century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past. In Hegelian usage, however, it came to mean universal or world history. The Enlightenment confidence that science, reason, and understanding were progressing gave history a moral thread, and under the influence of the German philosophers associated with Romanticism, Gottfried Herder (1744-1803) and Immanuel Kant, this idea was taken further, so that the philosophy of history came to be the detection of a grand design: the unfolding of the evolution of human nature as witnessed in successive stages (the progress of rationality or of Spirit). This essentially speculative philosophy of history is given an extra twist by the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engine of historical change. The idea is readily intelligible once the world of nature and that of thought become identified. The work of Herder, Kant, Fichte, and Schelling is synthesized by Hegel: history has a plot, namely the moral development of man, equated with freedom within the state; this in turn is the development of thought, or a logical development in which various necessary moments in the life of the concept are successively achieved and improved upon. Hegel's method is at its most successful when the object is the history of ideas, and the evolution of thinking may march in step with logical oppositions and their resolution as encountered by various systems of thought.

In the revolutionary communism of Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95), there emerges a rather different kind of story, based upon Hegel's progressive structure but relocating the achievement of the goal of history to a future in which the political conditions for freedom come to exist, so that economic and political forces rather than 'reason' are in the engine room. Although speculations of this kind continued to be written, by the late 19th century large-scale speculation had largely been replaced by concern with the nature of historical understanding, and in particular with a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic, and historian Wilhelm Dilthey, it is important to show that the human sciences, such as history, are objective and legitimate, but nonetheless in some way different from the enquiries of the natural scientist. Since the subject-matter is the past thought and actions of human beings, what is needed is the ability to re-live that past thought, knowing the deliberations of past agents as if they were the historian's own. The most influential British writer on this theme was the philosopher and historian R. G. Collingwood (1889-1943), whose The Idea of History (1946) contains an extensive defence of the verstehen approach: understanding others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation and thereby understanding what they experienced and thought. The immediate questions concern the form of historical explanation, and the fact that general laws have either no place or only a minor place in the human sciences.

The 'theory-theory' is the view that everyday attributions of intention, belief, and meaning to other persons proceed via the tacit use of a theory that enables one to construct these interpretations as explanations of their doings. The view is commonly held along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications depending on which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the non-existence of a medium in which this theory can be couched, since the child learns simultaneously the minds of others and the meanings of terms in its native language.

On the rival view, our understanding of others is not gained by the tacit use of a theory enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation 'in their moccasins', or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they are our own. The suggestion is a modern development of the verstehen tradition associated with Dilthey, Weber, and Collingwood.

To return to Aquinas: the form is, in some sense, available to animate a new body; it is not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. On Aquinas's account, a person has no privileged self-understanding. We understand ourselves, just as we do everything else, through sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. In the theory of knowledge, Aquinas holds the Aristotelian doctrine that knowing entails some similarity between the knower and what is known: a human being's corporeal nature therefore requires that knowledge start with sense perception. The same limitations do not apply to beings higher in the hierarchy of creation, such as the angels.

In the domain of theology, Aquinas deploys the distinction emphasized by Eriugena between what can be known of God by natural reason and what can be known only through revelation. Within natural reason he offers five arguments for the existence of God: (1) motion is only explicable if there exists an unmoved first mover; (2) the chain of efficient causes demands a first cause; (3) the contingent character of existing things in the world demands a different order of existence, that is, something that has necessary existence; (4) the gradations of value in things in the world require the existence of something that is most valuable, or perfect; and (5) the orderly character of events points to a final cause, or end, towards which all things are directed, and the existence of this end demands a being that ordained it. All the arguments are physico-theological in character: standing between reason and faith, Aquinas lays out proofs of the existence of God.

He readily recognizes that there are doctrines, such as the Incarnation and the nature of the Trinity, known only through revelation, and whose acceptance is more a matter of moral will. God's essence is identified with his existence, as pure actuality. God is simple, containing no potentiality. Nevertheless, we cannot obtain knowledge of what God is (his quiddity), and must remain content with descriptions that apply to him partly by way of analogy; what God reveals of himself is not himself.

A vivid problem in ethics is posed by the English philosopher Philippa Foot in her 'The Problem of Abortion and the Doctrine of the Double Effect' (1967). A runaway train or trolley comes to a fork in the track. One person is working on one branch and five on the other, and the trolley will kill anyone working on the branch it enters. Clearly, to most minds, the driver should steer for the less populated branch. But now suppose that, left to itself, the trolley will enter the branch with the five workers, and you as a bystander can intervene, altering the points so that it veers onto the other. Is it right, or obligatory, or even permissible for you to do this, thereby apparently involving yourself in responsibility for the death of one person? After all, whom have you wronged if you leave it to go its own way? The situation belongs to a standard family of cases in which utilitarian reasoning seems to lead to one course of action, while a person's integrity or principles may oppose it.

Describing events that merely happen does not of itself permit us to talk of rationality and intention, which are the categories we may apply if we conceive of them as actions. We think of ourselves not only passively, as creatures within which things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the 'will' and 'free will'. Other problems in the theory of action include drawing the distinction between an action and its consequence, and describing the structure involved when we do one thing 'by' doing another thing. Even the placing and dating of actions can be problematic: where someone shoots someone on one day and in one place, and the victim then dies on another day and in another place, where and when did the murderous act take place?

With causation, moreover, it is not clear that only events are causally related. Kant cites the example of a cannonball at rest on a cushion, which causes the cushion to be the shape that it is, suggesting that states of affairs, objects, or facts may also be causally related. The central problem is to understand the element of necessitation or determination of the future. Events, Hume thought, are in themselves 'loose and separate': how then are we to conceive of the connection between them? The relationship seems not to be perceptible, for all that perception gives us (Hume argues) is knowledge of the patterns that events actually fall into, rather than any acquaintance with the connections determining those patterns. It is, however, clear that our conception of everyday objects is largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider and deeper dependable patterns, it seems incapable of bringing us any nearer to the 'must' of causal necessitation. Particular puzzles about causation arise quite apart from the general problem of forming any conception of what it is: How are we to understand the causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or dispensable?

The problem of free will is, nonetheless, the problem of reconciling our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event 'C', there will be some antecedent state of nature 'N', and a law of nature 'L', such that given 'L', 'N' will be followed by 'C'. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state 'N' and the laws. Since determinism is universal, these in turn are fixed, and so on backwards to events for which I am clearly not responsible (events before my birth, for example). So no events can be voluntary or free, where that means that they come about purely because of my willing them when I could have done otherwise. If determinism is true, then there will be antecedent states and laws already determining such events: how then can I truly be said to be their author, or be responsible for them?
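
The definition just given can be put schematically; the symbols below are only shorthand for the text's own terms (an event \(C\), an antecedent state of nature \(N\), a law of nature \(L\)):
\[
\forall C\,\exists N\,\exists L\;\bigl(\,N \text{ obtains before } C \;\wedge\; L \text{ guarantees that } N \text{ is followed by } C\,\bigr).
\]
Read this way, the argument in the text iterates the schema: each \(N\) is itself an event or state covered by the same schema, and so on back to conditions before the agent's birth.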

Reactions to this problem are commonly classified as: (1) hard determinism, which accepts the conflict and denies that you have real freedom or responsibility; (2) soft determinism or compatibilism, whereby reactions in this family assert that everything you should want from a notion of freedom is quite compatible with determinism - in particular, even if your actions are caused, it can often be true of you that you could have done otherwise if you had chosen, and this may be enough to render you liable to be held responsible (the fact that previous events will have caused you to choose as you did is deemed irrelevant on this option); (3) libertarianism, the view that while compatibilism is only an evasion, there is a more substantive, real notion of freedom that can yet be preserved in the face of determinism (or of indeterminism). In Kant, while the empirical or phenomenal self is determined and not free, the noumenal or rational self is capable of rational, free action. However, since the noumenal self exists outside the categories of space and time, this freedom seems to be of doubtful value. Other libertarian avenues include suggesting that the problem is badly framed, for instance because the definition of determinism breaks down, or postulating that there are two independent but consistent ways of looking at an agent, the scientific and the humanistic, so that it is only through confusing them that the problem seems urgent. Nevertheless, these avenues have gained some popularity; it is, in any case, an error to confuse determinism with fatalism.

The dilemma of determinism is often put as follows: if an action is the end of a causal chain stretching back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.

The dilemma adds that if an action is not the end of such a chain, then either it or one of its causes occurs at random, in that no antecedent event brought it about, and in that case nobody is responsible for its occurrence. So, whether or not determinism is true, responsibility is shown to be illusory.

Still, to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will, or akrasia, bad.

A volition is a mental act of willing or trying, whose presence is sometimes supposed to make the difference between intentional or voluntary action and mere behaviour. The theory that there are such acts is problematic, and the idea that they make the required difference is a case of explaining a phenomenon by citing another that raises exactly the same problem, since the intentional or voluntary nature of the act of volition now needs explanation. For Kant, to act in accordance with the law of autonomy or freedom is to act in accordance with universal moral law, regardless of selfish advantage.

A categorical imperative, as contrasted in Kantian ethics with a hypothetical imperative, embeds a command which is not conditional on any antecedent desire or project. Consider again 'If you want to look wise, stay quiet': the injunction to stay quiet applies only to those with the antecedent desire or inclination, and if one has no desire to look wise the injunction or advice lapses. A categorical imperative cannot be so avoided; it is a requirement that binds anybody, regardless of their inclination. It could be expressed as, for example, 'Tell the truth (regardless of whether you want to or not)'. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.

In Grundlegung zur Metaphysik der Sitten (1785), Kant discussed the given forms of the categorical imperative: (1) the formula of universal law: 'act only on that maxim through which you can at the same time will that it should become universal law'; (2) the formula of the law of nature: 'act as if the maxim of your action were to become through your will a universal law of nature'; (3) the formula of the end-in-itself: 'act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end'; (4) the formula of autonomy, or considering 'the will of every rational being as a will which makes universal law'; and (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.

A central object in the study of Kant’s ethics is to understand the expressions of the inescapable, binding requirements of their categorical importance, and to understand whether they are equivalent at some deep level. Kant’s own application of the notions are always convincing: One cause of confusion is relating Kant’s ethical values to theories such as ;expressionism’ in that it is easy but imperatively must that it cannot be the expression of a sentiment, yet, it must derive from something ‘unconditional’ or necessary’ such as the voice of reason. The standard mood of sentences used to issue request and commands are their imperative needs to issue as basic the need to communicate information, and as such to animals signalling systems may as often be interpreted either way, and understanding the relationship between commands and other action-guiding uses of language, such as ethical discourse. The ethical theory of ‘prescriptivism’ in fact equates the two functions. A further question is whether there is an imperative logic. ‘Hump that bale’ seems to follow from ‘Tote that barge and hump that bale’, follows from ‘Its windy and its raining’:.But it is harder to say how to include other forms, does ‘Shut the door or shut the window’ follow from ‘Shut the window’, for example? The usual way to develop an imperative logic is to work in terms of the possibility of satisfying the other one command without satisfying the other, thereby turning it into a variation of ordinary deductive logic.

Although for many purposes morality and ethics amount to the same thing, there is a usage that restricts 'morality' to systems such as that of Kant, based on notions such as duty, obligation, and principles of conduct, reserving 'ethics' for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of 'moral' considerations from other practical considerations. The scholarly issues are complicated, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests.

Moral motivation is a major topic of philosophical inquiry, especially in Aristotle, and again since the 17th and 18th centuries, when the 'science of man' began to probe human motivation and emotion. For writers such as the French moralistes, or Hutcheson, Hume, Smith, and Kant, a prime task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and other tendencies, such as empathy, sympathy, or self-interest. The task continues, especially in the light of a post-Darwinian understanding of ourselves.

In some moral systems, notably that of Immanuel Kant, real moral worth comes only with acting rightly because it is right. If you do what is right, but from some other motive, such as fear or prudence, no moral merit accrues to you. Yet this in turn seems to discount other admirable motivations, such as acting from sheer benevolence or sympathy. The question is how to balance these opposing ideas, and how to understand acting from a sense of obligation without duty or rightness beginning to seem a kind of fetish. A contrasting view stands opposed to ethics relying on highly general and abstract principles, particularly those associated with the Kantian categorical imperative; it may go so far as to say that, taken on its own, no consideration points in favour of any particular way of life, and that moral understanding can only proceed by identifying salient features of a situation that weigh on one side or another.

Moral dilemmas raise philosophical concerns of their own. Situations in which each possible course of action breaches some otherwise binding moral principle are serious dilemmas, and the stuff of many tragedies. The conflict can be described in different ways. One suggestion is that whichever action the subject undertakes, he or she does something wrong. Another is that this is not so, for the dilemma means that in the circumstances what she or he did was as right as any alternative. It is important to the phenomenology of these cases that action leaves a residue of guilt and remorse, even though it was not the subject's fault that she or he faced the dilemma; however, the rationality of such emotions can be contested. Any morality with more than one fundamental principle seems capable of generating dilemmas, yet dilemmas also exist, such as where a mother must decide which of two children to sacrifice, in which no principles are pitted against each other. Only if we accept that dilemmas arising from conflicts of principle are real and important can this fact be used against theories, such as utilitarianism, that recognize only one sovereign principle. Alternatively, regretting the existence of dilemmas and the unordered jumble of principles that generates them, a theorist may use their occurrence to argue for the desirability of locating and promoting a single sovereign principle.

Nevertheless, some theories of ethics see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason. Other approaches, such as situational ethics and virtue ethics, regard them as at best rules of thumb, frequently disguising the great complexity of practical reasoning; for Kant, by contrast, reason itself yields the moral law.

In this connection, the natural law view of the relation between law and morality is especially associated with St Thomas Aquinas (1225-74), whose synthesis of Aristotelian philosophy and Christian doctrine was eventually to provide the main philosophical underpinning of the Catholic Church. More broadly, it covers any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is found in some Protestant writings, and it arguably derives from a Platonic view of ethics and from the implicit advance of Stoicism. Natural law stands above and apart from the activities of human lawmakers: it constitutes an objective set of principles that can be seen in and for themselves by means of natural reason, and, in religious versions of the theory, that express God’s will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God’s will. Grotius, for instance, sides with the view that the content of natural law is independent of any will, including that of God.

The German natural law theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view. His great work was the De Jure Naturae et Gentium, 1672, translated into English as Of the Law of Nature and Nations, 1710. Pufendorf was influenced by Descartes, Hobbes and the scientific revolution of the 17th century; his ambition was to introduce a newly scientific, ‘mathematical’ treatment of ethics and law, free from the tainted Aristotelian underpinning of ‘scholasticism’. Like that of his contemporary Locke, his conception of natural law included rational and religious principles, making it only a partial forerunner of the more resolutely empiricist and political treatments of the Enlightenment.

Behind such disputes lies the dilemma launched in Plato’s dialogue Euthyphro: are pious things pious because the gods love them, or do the gods love them because they are pious? The dilemma poses the question of whether value can be conceived as the upshot of the choice of any mind, even a divine one. On the first option the choice of the gods creates goodness and value. Even if this is intelligible, it seems to make it impossible to praise the gods, for it is then vacuously true that they choose the good. On the second option we have to understand a source of value lying behind or beyond the will even of the gods, and by which they can be evaluated. The elegant solution of Aquinas is that the standard is formed by God’s nature, and is therefore distinct from his will, but not distinct from him.

The dilemma arises whatever the source of authority is supposed to be. Do we care about the good because it is good, or do we just call good those things that we care about? It also generalizes to affect our understanding of the authority of other things: mathematics, or necessary truth, for example. Are truths necessary because we deem them to be so, or do we deem them to be so because they are necessary?

The natural law tradition may also assume a stronger form, in which it is claimed that various facts entail values, or that reason by itself is capable of discerning moral requirements. As in the ethics of Kant, these requirements are supposed to be binding on all human beings, regardless of their desires.

The supposed natural or innate ability of the mind to know the first principles of ethics and moral reasoning is termed ‘synderesis’ (or synteresis). Although traced to Aristotle, the phrase came to the modern era through St Jerome, whose scintilla conscientiae (spark of conscience) was a popular concept in early scholasticism. It is mainly associated with Aquinas, for whom it is an infallible, natural, simple and immediate grasp of first moral principles. Conscience, by contrast, is more concerned with particular instances of right and wrong, and can be in error.

It is, nevertheless, a view interpreted within the particular traditions of law and morality especially associated with Aquinas and the subsequent scholastic tradition. On such a view, enthusiasm for reform for its own sake, or for ‘rational’ schemes thought up by managers and theorists, is entirely misplaced. Major exponents of this theme include the British absolute idealist Francis Herbert Bradley (1846-1924) and the Austrian economist and philosopher Friedrich Hayek. Notably, in the idealism of Bradley there is the doctrine that change is contradictory and consequently unreal: the Absolute is changeless. A way of sympathizing a little with this idea is to reflect that any scientific explanation of change will proceed by finding an unchanging law operating, or an unchanging quantity conserved in the change, so that an explanation of change always proceeds by finding that which is unchanged. The metaphysical problem of change is to shake off the idea that each moment is created afresh, and to obtain a conception of events or processes as having a genuinely historical reality, really extended and unfolding in time, as opposed to being composites of discrete temporal atoms. A step towards this end may be to see time itself not as an infinite container within which discrete events are located, but as a kind of logical construction from the flux of events. This relational view of time was advocated by Leibniz, and was the subject of the debate between him and Newton’s absolutist pupil, Clarke.

Generally, nature is an indefinitely mutable term, changing as our scientific conception of the world changes, and often best seen as signifying a contrast with something considered not part of nature. The term applies both to individual species (it is the nature of gold to be dense or of dogs to be friendly), and also to the natural world as a whole. The sense in which it applies to species quickly links up with ethical and aesthetic ideals: a thing ought to realize its nature; what is natural is what it is good for a thing to become; it is natural for humans to be healthy or two-legged, and departure from this is a misfortune or deformity. The association of what is natural with what it is good to become is visible in Plato, and is the central idea of Aristotle’s philosophy of nature. Unfortunately, the pinnacle of nature in this sense is the mature adult male citizen, with the rest of what we would call the natural world, including women, slaves, children and other species, not quite making it.

Nature in general can, however, function as a foil to ideals as much as a source of them: in this sense fallen nature is contrasted with a supposed celestial realization of the ‘forms’. The theory of forms is probably the most characteristic, and most contested, of the doctrines of Plato. In the background lie the Pythagorean conception of form as the key to physical nature, but also the sceptical doctrine associated with the Greek philosopher Cratylus, who is sometimes thought to have been a teacher of Plato before Socrates. Cratylus is famous for capping the doctrine of Heraclitus of Ephesus, whose guiding idea was that of the logos, which is capable of being heard or hearkened to by people, unifies opposites, and is somehow associated with fire, preeminent among the four elements that Heraclitus distinguishes: fire, air (breath, the stuff of which souls are composed), earth, and water. Heraclitus is principally remembered for the doctrine of the ‘flux’ of all things, and for the famous statement that you cannot step into the same river twice, for new waters are ever flowing in upon you. The more extreme implications of the doctrine of flux, e.g. the impossibility of categorizing things truly, do not seem consistent with his general epistemology and views of meaning, and were left to his follower Cratylus, who concluded that the flux cannot be captured in words. According to Aristotle, he eventually held that since everything everywhere is in every respect changing, nothing can justly be said, and the proper course is to stay silent and wag one’s finger. Plato’s theory of forms can be seen in part as a reaction against the impasse to which Cratylus was driven.

The Galilean world view might have been expected to drain nature of its ethical content; however, the term seldom loses its normative force, and the belief in universal natural laws provided its own set of ideals. In the 18th century, for example, a painter or writer could be praised as natural, where the qualities expected would include normal (universal) topics treated with simplicity, economy, regularity and harmony. Later on, nature becomes an equally potent emblem of irregularity, wildness, and fertile diversity, but is also associated with the progress of human history, and its definition has been stretched to fit many things, including ordinary human self-consciousness. That which is contrasted with nature may include (1) that which is deformed or grotesque, or fails to achieve its proper form or function, or just the statistically uncommon or unfamiliar, (2) the supernatural, or the world of gods and invisible agencies, (3) the world of rationality and intelligence, conceived of as distinct from the biological and physical order, (4) that which is manufactured and artefactual, or the product of human intervention, and (5) related to that, the world of convention and artifice.

Different conceptions of nature continue to have ethical overtones: for example, the conception of ‘nature red in tooth and claw’ often provides a justification for aggressive personal and political relations, or the idea that it is women’s nature to be one thing or another is taken to be a justification for differential social expectations. The term then functions as a fig-leaf for a particular set of stereotypes, and is a proper target of much feminist writing. Feminist epistemology has asked whether different ways of knowing, for instance with different criteria of justification and different emphases on logic and imagination, characterize male and female attempts to understand the world. Such concerns include awareness of the ‘masculine’ self-image, itself a socially variable and potentially distorting picture of what thought and action should be. Again, there is a spectrum of concerns from the highly theoretical to the relatively practical. In this latter area particular attention is given to the institutional biases that stand in the way of equal opportunities in science and other academic pursuits, and to the ideologies that stand in the way of women seeing themselves as leading contributors to various disciplines. However, to more radical feminists such concerns merely exhibit women wanting for themselves the same power and rights over others that men have claimed, and failing to confront the real problem, which is how to live without such powers and rights over others.

Biological determinism holds that biology not only influences but constrains and makes inevitable our development as persons with a variety of traits. At its silliest the view postulates such entities as a gene predisposing people to poverty, and it is the particular enemy of thinkers stressing the parental, social, and political determinants of the way we are.

The philosophy of social science is more heavily intertwined with actual social science than in the case of other subjects such as physics or mathematics, since its question is centrally whether there can be such a thing as sociology. The idea of a ‘science of man’, devoted to uncovering scientific laws determining the basic dynamics of human interactions, was a cherished ideal of the Enlightenment and reached its heyday with the positivism of writers such as the French philosopher and social theorist Auguste Comte (1798-1857), and the historical materialism of Marx and his followers. Sceptics point out that what happens in society is determined by people’s own ideas of what should happen, and, like fashions, those ideas change in unpredictable ways, since self-consciousness is susceptible to change by any number of external events: unlike the solar system of celestial mechanics, a society is not a closed system evolving in accordance with a purely internal dynamic, but is constantly responsive to shocks from outside.

The sociobiological approach to human behaviour is based on the premise that all social behaviour has a biological basis, and seeks to understand that basis in terms of genetic encoding for features that are then selected for through evolutionary history. The philosophical problem is essentially one of methodology: of finding criteria for identifying features that can usefully be explained in this way, and for finding criteria for assessing the various genetic stories that might provide such explanations.

Among the features that are proposed for this kind of explanation are such things as male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and the limited altruism characteristic of human beings. The strategy has proved unnecessarily controversial, with proponents accused of ignoring the influence of environmental and social factors in moulding people’s characteristics, e.g., at the limit of silliness, by postulating a ‘gene for poverty’. However, there is no need for the approach to commit such errors, since the feature explained sociobiologically may be indexed to environment: for instance, it may be a propensity to develop some feature in some environments (or even a propensity to develop propensities . . .). The main problem is to separate genuine explanations from speculative ‘just so’ stories which may or may not identify real selective mechanisms.

Subsequently, in the 19th century attempts were made to base ethical reasoning on the presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). His first major work was the book Social Statics (1851), which advocated an extreme political libertarianism. The Principles of Psychology was published in 1855, and his very influential Education, advocating the natural development of intelligence, the creation of pleasurable interest, and the importance of science in the curriculum, appeared in 1861. His First Principles (1862) was followed over the succeeding years by volumes on the principles of biology, psychology, sociology and ethics. Although he attracted a large public following and attained the stature of a sage, his speculative work has not lasted well, and in his own time there were dissident voices. T. H. Huxley said that Spencer’s definition of a tragedy was a deduction killed by a fact. The writer and social prophet Thomas Carlyle (1795-1881) called him a perfect vacuum, and the American psychologist and philosopher William James (1842-1910) wondered why half of England wanted to bury him in Westminster Abbey, and talked of the ‘hurdy-gurdy’ monotony of him, his whole system wooden, as if knocked together out of cracked hemlock.

The premise of such evolutionary ethics is that later elements in an evolutionary path are better than earlier ones; the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more ‘primitive’ social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called ‘social Darwinism’ emphasizes the struggle for natural selection, and draws the conclusion that we should glorify such struggle, usually by enhancing competitive and aggressive relations between people in society or between societies themselves. More recently the relation between evolution and ethics has been rethought in the light of biological discoveries concerning altruism and kin-selection.

Evolutionary psychology, in turn, is the study of the way in which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoires, our moral reactions, including the disposition to detect and punish those who cheat on an agreement or who free-ride on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify.

For all that, an essential part of the ethics of the British absolute idealist Francis Herbert Bradley (1846-1924) rests on the ground that the self is realized only through community, and that one’s contribution to social and other ideals comes through one’s place within it. However, truth as formulated in language is always partial, and dependent upon categories that are themselves inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley’s general dissent from empiricism, his holism, and the brilliance and style of his writing continue to make him the most interesting of the late 19th-century writers influenced by the German philosopher Georg Wilhelm Friedrich Hegel (1770-1831).

Understandably, something in Bradley’s case echoes a preference, voiced much earlier by the German philosopher, mathematician and polymath Gottfried Leibniz (1646-1716), for categorical monadic properties over relations. He was particularly troubled by the relation between that which is known and the mind that knows it. In philosophy, the Romantics took from the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804) both the emphasis on free will and the doctrine that reality is ultimately spiritual, with nature itself a mirror of the human soul. Friedrich Schelling (1775-1854), for one, regarded nature as a creative spirit whose aspiration is ever fuller and more complete self-realization. Romanticism drew on the same intellectual and emotional resources as German idealism, which culminated in the philosophy of Hegel and of absolute idealism.

This brings into question the fact that most ethics is concerned with problems of human desire and need: the achievement of happiness, or the distribution of goods. The central problem specific to thinking about the environment is the independent value to place on such things as the preservation of species, or the protection of the wilderness. Such protection can be supported as a means to ordinary human ends, for instance when animals are regarded as future sources of medicines or other benefits. Nonetheless, many would want to claim an absolute, non-utilitarian value for the existence of wild things and wild places, a value that does not depend on their usefulness to us. They put us in our proper place, and failure to appreciate this value is not only an aesthetic failure but one of due humility and reverence, a moral disability. The problem is one of expressing this value, and mobilizing it against utilitarian arguments for developing natural areas and exterminating species, more or less at will.

Many concerns and disputes cluster around the ideas associated with the term ‘substance’. The substance of a thing may be considered as: (1) its essence, or that which makes it what it is; this ensures that the substance of a thing is that which remains through change in its properties, and in Aristotle this essence becomes more than just the matter, but a unity of matter and form; (2) that which can exist by itself, or does not need a subject for existence, in the way that properties need objects; hence (3) that which bears properties, as a substance is then the subject of predication, that about which things are said as opposed to the things said about it. Substance in the last two senses stands opposed to modifications such as quantity, quality, relations, etc. It is hard to keep this set of ideas distinct from the doubtful notion of a substratum, something distinct from any of its properties, and hence incapable of characterization. The notion of substance tends to disappear in empiricist thought, with the sensible qualities of things, and the notion of that in which they inhere, giving way to an empirical notion of their regular concurrence. However, this in turn is problematic, since it only makes sense to talk of the concurrence of instances of qualities, not of qualities themselves; so the problem remains of saying what it is to be an instance of a quality.

Metaphysics inspired by modern science tends to reject the concept of substance in favour of concepts such as that of a field or a process, each of which may seem to provide a better example of a fundamental physical category.

The sublime is a concept deeply embedded in 18th-century aesthetics, but deriving from the 1st-century rhetorical treatise On the Sublime, attributed to Longinus. The sublime is great, fearful, noble, calculated to arouse sentiments of pride and majesty, as well as awe and sometimes terror. According to Alexander Gerard, writing in 1759, ‘When a large object is presented, the mind expands itself to the extent of that object, and is filled with one grand sensation, which totally possessing it, composes it into a solemn sedateness and strikes it with deep silent wonder and admiration’: it finds such a difficulty in spreading itself to the dimensions of its object as enlivens and invigorates it; this occasions it sometimes to imagine itself present in every part of the scene which it contemplates, and, from the sense of this immensity, it feels a noble pride, and entertains a lofty conception of its own capacity.

In Kant’s aesthetic theory the sublime ‘raises the soul above the height of vulgar complacency’. We experience the vast spectacles of nature as ‘absolutely great’ and of irresistible might and power. This perception is fearful, but by conquering this fear, and by regarding as small ‘those things of which we are wont to be solicitous’, we quicken our sense of moral freedom. So we turn the experience of frailty and impotence into one of our true, inward moral freedom as the mind triumphs over nature, and it is this triumph of reason that is truly sublime. Kant thus paradoxically places our sense of the sublime in an awareness of ourselves as transcending nature, rather than in an awareness of ourselves as a frail and insignificant part of it.

Nevertheless, the doctrine that all relations are internal was a cardinal thesis of absolute idealism, and a central point of attack for the British philosophers George Edward Moore (1873-1958) and Bertrand Russell (1872-1970). It is a kind of ‘essentialism’, stating that if two things stand in some relationship, then they could not be what they are did they not do so. If, for instance, I am wearing a hat now, then when we imagine a possible situation that we would be apt to describe as my not wearing the hat now, we are strictly not imagining me and the hat, but only some different individuals.

This bears some resemblance to the metaphysically based view of the German philosopher and mathematician Gottfried Leibniz (1646-1716), that if a person had any attributes other than the ones he has, he would not have been the same person. Leibniz thought that when asked what would have happened if Peter had not denied Christ, we are really asking what would have happened if Peter had not been Peter, since denying Christ is contained in the complete notion of Peter. But he allowed that by the name ‘Peter’ might be understood ‘what is involved in those attributes [of Peter] from which the denial does not follow’, so that we may allow external relations, these being relations which individuals could bear or not depending upon contingent circumstances. The terminology of ‘relations of ideas’ is used by the Scottish philosopher David Hume (1711-76) in the first Enquiry: all the objects of human reason or enquiry may naturally be divided into two kinds, ‘relations of ideas’ and ‘matters of fact’ (Enquiry Concerning Human Understanding). The terms reflect the belief that anything that can be known independently of experience must be internal to the mind, and hence transparent to us.

In Hume, objects of knowledge are divided into matters of fact (roughly, empirical things known by means of impressions) and relations of ideas. The contrast, also called ‘Hume’s Fork’, is a version of the speculative/deductive distinction, but reflects the 17th- and early 18th-century belief that the deductive is established by chains of intuitive comparisons of ideas. It is extremely important that in the period between Descartes and J. S. Mill a demonstration is not a chain of formal inferences, but a chain of ‘intuitive’ comparisons of ideas, whereby a principle or maxim can be established by reason alone. It is in this sense that the English philosopher John Locke (1632-1704) believed that theological and moral principles are capable of demonstration; Hume denies that they are, and also denies that scientific enquiries proceed by demonstrating their results.

A mathematical proof is an argument used to show the truth of a mathematical assertion. In modern mathematics, a proof begins with one or more statements called premises and demonstrates, using the rules of logic, that if the premises are true then a particular conclusion must also be true.

The accepted methods and strategies used to construct a convincing mathematical argument have evolved since ancient times and continue to change. Consider the Pythagorean theorem, named after the 6th-century BC Greek mathematician and philosopher Pythagoras, which states that in a right-angled triangle the square of the hypotenuse is equal to the sum of the squares of the other two sides. Many early civilizations considered this theorem true because it agreed with their observations in practical situations. But the early Greeks, among others, realized that observation and commonly held opinion do not guarantee mathematical truth. For example, before the 5th century BC it was widely believed that all lengths could be expressed as the ratio of two whole numbers. But an unknown Greek mathematician proved that this is not true by showing that the length of the diagonal of a square with an area of 1 is the irrational number √2.
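
The classical argument behind that discovery can be set out in a few lines. The following is a standard modern sketch, not a reconstruction of the original Greek proof:

\[
\text{Suppose } \sqrt{2} = \frac{p}{q}, \text{ with } p, q \in \mathbb{Z} \text{ and } \gcd(p,q) = 1.
\;\Longrightarrow\; p^2 = 2q^2
\;\Longrightarrow\; p \text{ is even, say } p = 2r
\;\Longrightarrow\; q^2 = 2r^2
\;\Longrightarrow\; q \text{ is even},
\]
contradicting the assumption that \(p\) and \(q\) have no common factor; hence \(\sqrt{2}\) is irrational.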

The Greek mathematician Euclid laid down some of the conventions central to modern mathematical proofs. His book The Elements, written about 300 BC, contains many proofs in the fields of geometry and algebra. This book illustrates the Greek practice of writing mathematical proofs by first clearly identifying the initial assumptions and then reasoning from them in a logical way in order to obtain a desired conclusion. As part of such an argument, Euclid used results that had already been shown to be true, called theorems, or statements that were explicitly acknowledged to be self-evident, called axioms; this practice continues today.

In the 20th century, proofs have been written that are so complex that no one person understands every argument used in them. In 1976, a computer was used to complete the proof of the four-color theorem. This theorem states that four colors are sufficient to color any map in such a way that regions with a common boundary line have different colors. The use of a computer in this proof inspired considerable debate in the mathematical community. At issue was whether a theorem can be considered proven if human beings have not actually checked every detail of the proof.

Proof theory is the study of the relations of deducibility among sentences in a logical calculus. Deducibility is defined purely syntactically, that is, without reference to the intended interpretation of the calculus. The subject was founded by the mathematician David Hilbert (1862-1943) in the hope that strictly finitary methods would provide a way of proving the consistency of classical mathematics, but the ambition was torpedoed by Gödel’s second incompleteness theorem.

What is more, the use of a model to test for consistency in an axiomatized system is older than modern logic. Descartes’ algebraic interpretation of Euclidean geometry provides a way of showing that if the theory of real numbers is consistent, so is the geometry. Similar representations had been used by mathematicians in the 19th century, for example to show that if Euclidean geometry is consistent, so are various non-Euclidean geometries. Model theory is the general study of this kind of procedure: proof theory studies relations of deducibility between formulae of a system, but once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under an interpretation to sentences that are false under that same interpretation? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true under all interpretations) and of semantic consequence (a formula ‘B’ is a semantic consequence of a set of formulae, written {A1 . . . An} ⊨ B, if it is true in all interpretations in which they are true). Then the central questions for a calculus will be whether all and only its theorems are valid, and whether {A1 . . . An} ⊨ B if and only if {A1 . . . An} ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only ‘tautologies’. There are many axiomatizations of the propositional calculus that are consistent and complete. The mathematical logician Kurt Gödel (1906-78) proved in 1929 that the first-order predicate calculus is complete: any formula true under every interpretation is a theorem of the calculus.
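
For the propositional calculus, these semantic notions can be checked mechanically by running through every assignment of truth-values. The sketch below only illustrates the definitions just given; the function names and sample formulas are invented for the example.

from itertools import product

# A formula is represented as a function from a valuation (dict: atom -> bool) to bool.
def implies(a, b):
    # material conditional
    return (not a) or b

def valuations(atoms):
    # yield every assignment of truth-values to the given atoms
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def is_valid(formula, atoms):
    # a formula is valid (a tautology) if it is true under every valuation
    return all(formula(v) for v in valuations(atoms))

def semantic_consequence(premises, conclusion, atoms):
    # {A1 ... An} |= B: B is true in every valuation in which all the premises are true
    return all(conclusion(v) for v in valuations(atoms)
               if all(p(v) for p in premises))

atoms = ["p", "q"]
print(is_valid(lambda v: implies(v["p"], v["p"]), atoms))        # True: p -> p is a tautology
print(semantic_consequence([lambda v: v["p"],
                            lambda v: implies(v["p"], v["q"])],
                           lambda v: v["q"], atoms))             # True: modus ponens preserves truth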

Euclidean geometry is the greatest example of the pure ‘axiomatic method’, and as such had incalculable philosophical influence as a paradigm of rational certainty. It had no competition until the 19th century, when it was realized that the fifth axiom of the system (that parallel lines never meet) could be denied without inconsistency, leading to Riemannian spherical geometry. The significance of Riemannian geometry lies in its use and extension of both Euclidean geometry and the geometry of surfaces, leading to a number of generalized differential geometries. Its most important effect was that it made a geometrical application possible for some major abstractions of tensor analysis, providing the patterns and concepts later used by Albert Einstein in developing the general theory of relativity. Riemannian geometry is also necessary for treating electricity and magnetism in the framework of general relativity. The fifth book of Euclid’s Elements is attributed to the mathematician Eudoxus, and contains a precise development of the real numbers, work which remained unappreciated until rediscovered in the 19th century.

The axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: 'No sentence can be true and false at the same time' (the principle of contradiction); 'If equals are added to equals, the sums are equal'; 'The whole is greater than any of its parts'. Logic and pure mathematics begin with such unproved assumptions from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regression in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.

The terms 'axiom' and 'postulate' are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.

The applications of game theory are wide-ranging and account for steadily growing interest in the subject. Von Neumann and Morgenstern indicated the immediate utility of their work on mathematical game theory by linking it with economic behavior. Models can be developed, in fact, for markets of various commodities with differing numbers of buyers and sellers, fluctuating values of supply and demand, and seasonal and cyclical variations, as well as significant structural differences in the economies concerned. Here game theory is especially relevant to the analysis of conflicts of interest in maximizing profits and promoting the widest distribution of goods and services. Equitable division of property and of inheritance is another area of legal and economic concern that can be studied with the techniques of game theory.

In the social sciences, n-person game theory has interesting uses in studying, for example, the distribution of power in legislative procedures. This problem can be interpreted as a three-person game at the congressional level involving vetoes of the president and votes of representatives and senators, analyzed in terms of successful or failed coalitions to pass a given bill. Problems of majority rule and individual decision making are also amenable to such study.

Sociologists have developed an entire branch of game theory devoted to the study of issues involving group decision making. Epidemiologists also make use of game theory, especially with respect to immunization procedures and methods of testing a vaccine or other medication. Military strategists turn to game theory to study conflicts of interest resolved through 'battles' where the outcome or payoff of a given war game is either victory or defeat. Usually, such games are not examples of zero-sum games, for what one player loses in terms of lives and injuries is not won by the victor. Some uses of game theory in analyses of political and military events have been criticized as a dehumanizing and potentially dangerous oversimplification of necessarily complicating factors. Analysis of economic situations is also usually more complicated than zero-sum games because of the production of goods and services within the play of a given 'game'.
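
As a small illustration of the zero-sum case mentioned above, the row player's maximin and the column player's minimax can be compared directly on a finite payoff matrix; the matrix used here is invented for the example.

# Payoff matrix for the row player in a two-person zero-sum game (invented figures).
payoffs = [
    [3, -1,  2],
    [1,  0, -2],
]

# Row player: take the worst outcome of each row, then choose the best of these (maximin).
maximin = max(min(row) for row in payoffs)

# Column player: take the best outcome the row player can get in each column, then minimize (minimax).
minimax = min(max(row[j] for row in payoffs) for j in range(len(payoffs[0])))

print("maximin =", maximin, "minimax =", minimax)
# If maximin equals minimax the game has a saddle point in pure strategies;
# here they differ, so the game's value is attained only with mixed strategies.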

In the classical theory of the syllogism, a term in a categorical proposition is distributed if the proposition entails any proposition obtained from it by substituting for that term one of narrower denotation. For example, in ‘all dogs bark’ the term ‘dogs’ is distributed, since the proposition entails ‘all terriers bark’, which is obtained from it by such a substitution. In ‘not all dogs bark’, the same term is not distributed, since the proposition may be true while ‘not all terriers bark’ is false.

A model is a representation of one system by another, usually more familiar, system whose workings are supposed analogous to those of the first. Thus one might model the behaviour of a sound wave upon that of waves in water, or the behaviour of a gas upon that of a volume containing moving billiard balls. While nobody doubts that models have a useful ‘heuristic’ role in science, there has been intense debate over whether a good model is required for scientific explanation, or whether an organized structure of laws from which consequences can be deduced suffices. The debate was inaugurated by the French physicist Pierre Duhem (1861-1916) in The Aim and Structure of Physical Theory (trans. 1954). Duhem’s conception of science is that it is simply a device for calculating: science provides a deductive system that is systematic, economical, and predictive, but that does not represent the deep underlying nature of reality. He is also remembered for the thesis that no single hypothesis can be tested in isolation, since other auxiliary hypotheses will always be needed to draw empirical consequences from it. The Duhem thesis implies that refutation is a more complex matter than might appear. It is sometimes framed as the view that a single hypothesis may be retained in the face of any adverse empirical evidence, if we are prepared to make modifications elsewhere in our system, although strictly speaking this is a stronger thesis, since it may be psychologically impossible to make consistent revisions in a belief system to accommodate, say, the hypothesis that there is a hippopotamus in the room when visibly there is not.

Primary and secondary qualities mark a division associated with the 17th-century rise of modern science, with its recognition that the fundamental explanatory properties of things are not the qualities that perception most immediately concerns. These latter are the secondary qualities, or immediate sensory qualities, including colour, taste, smell, felt warmth or texture, and sound. The primary properties are less tied to the deliverance of one particular sense, and include the size, shape, and motion of objects. In Robert Boyle (1627-92) and John Locke (1632-1704) the primary qualities are the scientifically tractable, objective qualities essential to anything material: a minimal listing of size, shape, and mobility, i.e., the state of being at rest or moving. Locke sometimes adds number, solidity, and texture (where this is thought of as the structure of a substance, or the way in which it is made out of atoms). The secondary qualities are the powers to excite particular sensory modifications in observers. Locke himself thought of these powers as identical with the texture of objects which, according to the corpuscularian science of the time, was the basis of an object’s causal capacities. The ideas of secondary qualities are sharply different from these powers, and afford us no accurate impression of them. For René Descartes (1596-1650), this is the basis for rejecting any attempt to think of knowledge of external objects as provided by the senses. But in Locke our ideas of primary qualities do afford us an accurate notion of what shape, size, and mobility are. In English-speaking philosophy the first major discontent with the division was voiced by the Irish idealist George Berkeley (1685-1753), who probably took the basis of his attack from Pierre Bayle (1647-1706), who in turn cites the French critic Simon Foucher (1644-96). Modern thought continues to wrestle with the difficulties of thinking of colour, taste, smell, warmth, and sound as real or objective properties of things independent of us.

Continuing in this vein, there is the doctrine advocated by the American philosopher David Lewis (1941-2002) that different possible worlds are to be thought of as existing exactly as this one does. Thinking in terms of possibilities is thinking of real worlds where things are different. The view has been charged with making it impossible to see why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowned, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge either that the notion fails to fit with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them, but Lewis denied that any other way of interpreting modal statements is tenable.

The ‘modality’ of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those true as things are: necessary as opposed to contingent propositions. Other qualifiers sometimes called ‘modal’ include the tense indicators ‘it will be the case that p’ and ‘it was the case that p’, and there are affinities between these, the ‘deontic’ indicators ‘it ought to be the case that p’ and ‘it is permissible that p’, and the operators of necessity and possibility.

The aim of a logic is to make explicit the rules by which inferences may be drawn, rather than to study the actual reasoning processes that people use, which may or may not conform to those rules. In the case of deductive logic, if we ask why we need to obey the rules, the most general form of answer is that if we do not we contradict ourselves (or, strictly speaking, we stand ready to contradict ourselves: someone failing to draw a conclusion that follows from a set of premises need not be contradicting him or herself, but only failing to notice something; however, he or she is not defended against adding the contradictory conclusion to his or her set of beliefs). There is no equally simple answer in the case of inductive logic, which is in general a less robust subject, but the aim will be to find forms of reasoning such that anyone failing to conform to them will have improbable beliefs. Traditional logic dominated the subject until the 19th century, and it has become increasingly recognized in the 20th century how much fine work was done within that tradition, but syllogistic reasoning is now generally regarded as a limited special case of the forms of reasoning that can be represented within the propositional and predicate calculus. These form the heart of modern logic: their central notions of quantifiers, variables, and functions were the creation of the German mathematician Gottlob Frege, who is recognized as the father of modern logic, although his treatment of a logical system as an abstract mathematical structure, or algebra, had been heralded by the English mathematician and logician George Boole (1815-64), whose pamphlet The Mathematical Analysis of Logic (1847) pioneered the algebra of classes. The work was developed further in An Investigation of the Laws of Thought (1854). Boole also published many works in pure mathematics, and on the theory of probability. His name is remembered in the title of Boolean algebra, and the algebraic operations he investigated are denoted by Boolean operations.
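
Boole's algebra of classes can be illustrated with modern set operations; the classes below are invented purely for the example, and the two assertions checked are instances of laws (idempotence and De Morgan's law) that hold in any Boolean algebra.

# The algebra of classes, illustrated with Python sets (the classes are invented).
universe = {"terrier", "spaniel", "siamese", "sparrow"}
dogs = {"terrier", "spaniel"}
cats = {"siamese"}

union = dogs | cats                # the class of things that are dogs or cats
intersection = dogs & cats         # the class of things that are both (empty here)
complement = universe - dogs       # the class of things that are not dogs

assert dogs & dogs == dogs                                                 # idempotence: x.x = x
assert universe - (dogs | cats) == (universe - dogs) & (universe - cats)   # De Morgan's law

print(union, intersection, complement)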

The syllogism, or categorical syllogism, is the inference of one proposition from two premises. An example is: all horses have tails; all things with tails are four-legged; so all horses are four-legged. Each premise has one term in common with the conclusion, and one term in common with the other premise. The term that does not occur in the conclusion is called the middle term. The major premise of the syllogism is the premise containing the predicate of the conclusion (the major term), and the minor premise contains its subject (the minor term). So in the example the first premise is the minor premise, the second the major premise, and ‘having a tail’ is the middle term. This enables syllogisms to be classified according to the form of the premises and the conclusion. The other classification is by figure, or the way in which the middle term is placed in the premises.
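
The validity of the example can be mimicked set-theoretically, reading 'all A are B' as the inclusion of one class in another; the particular animals listed are of course invented for the illustration.

# Reading 'all A are B' as: the class A is included in the class B.
def all_are(a, b):
    return a <= b   # subset test

# Invented extensions for the three terms of the example syllogism.
horses = {"dobbin", "trigger"}
tailed_things = {"dobbin", "trigger", "rover"}
four_legged = {"dobbin", "trigger", "rover", "felix"}

minor_premise = all_are(horses, tailed_things)       # all horses have tails
major_premise = all_are(tailed_things, four_legged)  # all things with tails are four-legged
conclusion = all_are(horses, four_legged)            # all horses are four-legged

# Inclusion is transitive, so whenever both premises hold the conclusion must hold too.
print(minor_premise, major_premise, conclusion)      # True True True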

Although the theory of the syllogism dominated logic until the 19th century, it remained a piecemeal affair, able to deal with only a limited range of valid forms of argument. There have subsequently been attempts to extend it, but in general it has been eclipsed by the modern theory of quantification: the predicate calculus is the heart of modern logic, having proved capable of formalizing the reasoning processes of modern mathematics and science. In a first-order predicate calculus the variables range over objects; in a higher-order calculus they may range over predicates and functions themselves. The first-order predicate calculus with identity includes ‘=’ as a primitive (undefined) expression; in a higher-order calculus it may be defined by the law that x = y iff (∀F)(Fx ↔ Fy), which gives greater expressive power for less complexity.

Modal logic was of great importance historically, particularly in the light of doctrines concerning the necessary properties of the deity, but was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by the American logician and philosopher Clarence Irving Lewis (1883-1964). Although he wrote extensively on most central philosophical topics, he is remembered principally as a critic of the extensional nature of modern logic, and as the founding father of modal logic. His two independent proofs showing that from a contradiction anything follows prompted the later development of relevance logic, which uses a notion of entailment stronger than that of strict implication.

Modal logics are constructed by adding to a propositional or predicate calculus two operators, □ and ◊ (sometimes written ‘N’ and ‘M’), meaning necessarily and possibly, respectively. Theorems like p ➞ ◊p and □p ➞ p will be wanted. Controversial additions include □p ➞ □□p (if a proposition is necessary, it is necessarily necessary, characteristic of the system known as S4) and ◊p ➞ □◊p (if a proposition is possible, it is necessarily possible, characteristic of the system known as S5). The classical model theory for modal logic, due to the American logician and philosopher Saul Kripke (1940-) and the Swedish logician Stig Kanger, involves valuing propositions not as true or false simpliciter, but as true or false at possible worlds, with necessity then corresponding to truth at all worlds, and possibility to truth at some world. Various different systems of modal logic result from adjusting the accessibility relation between worlds.
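
The world-relative reading of □ and ◊ can be made concrete with a toy model; the worlds, accessibility relation and valuation below are invented for the illustration.

# A tiny Kripke model: worlds, an accessibility relation, and a valuation (all invented).
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w2", "w3"}}
true_at = {"p": {"w2", "w3"}}   # the atomic proposition p holds at w2 and w3

def holds(world, prop):
    return world in true_at.get(prop, set())

def box(world, prop):
    # 'necessarily p' at w: p holds at every world accessible from w
    return all(holds(v, prop) for v in access[world])

def diamond(world, prop):
    # 'possibly p' at w: p holds at some world accessible from w
    return any(holds(v, prop) for v in access[world])

print(box("w1", "p"), diamond("w1", "p"))   # True True: p is necessary, hence also possible, at w1
print(box("w2", "p"))                       # True: w2 accesses only itself, where p holds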

Saul Kripke gives the classical modern treatment of the topic of reference, both clarifying the distinction between names and definite descriptions, and opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its subject.

Semantics is one of the three branches into which ‘semiotics’ is usually divided: the study of the meaning of words, and of the relation of signs to the things to which they apply. In formal studies, a semantics is provided for a formal language when an interpretation or ‘model’ is specified. However, a natural language comes already interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . .) and their meanings. An influential proposal is to seek a truth definition for the language, which will involve giving a full account of the bearing that terms of different kinds have on the truth conditions of sentences containing them.

The basic case of reference is the relation between a name and the person or object which it names. The philosophical problems include trying to elucidate that relation, and to understand whether other semantic relations, such as that between a predicate and the property it expresses, or that between a description and what it describes, or that between the word ‘I’ and myself, are examples of the same relation or of very different ones. A great deal of modern work on this was stimulated by the American logician Saul Kripke’s Naming and Necessity (1970). It would also be desirable to know whether we can refer to such things as objects, and how to conduct the debate about each such issue. A popular approach, following Gottlob Frege, is to argue that the fundamental unit of analysis should be the whole sentence. The reference of a term becomes a derivative notion: it is whatever it is that defines the term’s contribution to the truth condition of the whole sentence. There need be nothing further to say about it, given that we have a way of understanding the attribution of meaning or truth-conditions to sentences. Other approaches seek something more substantive, possibly causal or psychological or social constituents of the relation between words and things.

However, following Ramsey and the Italian mathematician G. Peano (1858-1932), it has been customary to distinguish logical paradoxes that depend upon a notion of reference or truth (semantic notions), such as those of the Liar family, Berry, Richard, etc., from the purely logical paradoxes in which no such notions are involved, such as Russell’s paradox, or those of Cantor and Burali-Forti. Paradoxes of the first type seem to depend upon an element of self-reference, in which a sentence is about itself, or in which a phrase refers to something defined by a set of phrases of which it is itself one. It is tempting to feel that this element is responsible for the contradictions, although self-reference itself is often benign (for instance, the sentence ‘All English sentences should have a verb’ includes itself happily in the domain of sentences it is talking about), so the difficulty lies in forming a condition that rules out only pathological self-reference. Paradoxes of the second kind then need a different treatment. Whilst the distinction is convenient, in allowing set theory to proceed by circumventing the latter paradoxes by technical means even when there is no solution to the semantic paradoxes, it may be a way of ignoring the similarities between the two families. There is still the possibility that while there is no agreed solution to the semantic paradoxes, our understanding of Russell’s paradox may be imperfect as well.

Truth and falsity are the two classical truth-values that a statement, proposition or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true: if this condition obtains the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme. A presupposition is a suppressed premise or background framework of thought necessary to make an argument valid or a position tenable; more narrowly, it is a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus if ‘p’ presupposes ‘q’, ‘q’ must be true for ‘p’ to be either true or false. In the theory of knowledge, the English philosopher and historian Robin George Collingwood (1889-1943) announces that any proposition capable of truth or falsity stands on a bed of ‘absolute presuppositions’ which are not properly capable of truth or falsity, since a system of thought will contain no way of approaching such a question (a similar idea was later voiced by Wittgenstein in his work On Certainty). The introduction of presupposition therefore means that either another truth-value is found, ‘intermediate’ between truth and falsity, or classical logic is preserved but it is impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth and falsity without knowing more than the formation rules of the language. Each suggestion carries costs, and there is some consensus that, at least where definite descriptions are involved, the examples are as well handled by regarding the overall sentence as false when the existence claim fails, and explaining the data that the English philosopher Peter Frederick Strawson (1919-2006) relied upon as the effects of ‘implicature’.

Views about the meaning of terms will often depend on classifying the implications of sayings involving the terms as implicatures or as genuine logical implications of what is said. Implicatures may be divided into two kinds: conversational implicatures and the more subtle category of conventional implicatures. A term may as a matter of convention carry an implicature: thus one of the relations between ‘he is poor and honest’ and ‘he is poor but honest’ is that they have the same content (are true in just the same conditions), but the second has an implicature (that the combination is surprising or significant) that the first lacks.

In classical logic, then, a proposition may be true or false. If the former, it is said to take the truth-value true, and if the latter, the truth-value false. The idea behind the terminology is the analogy between assigning a propositional variable one or other of these values, as is done in providing an interpretation for a formula of the propositional calculus, and assigning an object as the value of some other variable. Logics with intermediate values are called ‘many-valued logics’.
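
A standard example of such a system is the strong Kleene three-valued logic, in which an 'undefined' value sits between truth and falsity; the short table-builder below is offered only as an illustration of that scheme, which is not itself discussed in the text.

# Strong Kleene three-valued logic: the values are True, False, and None (read as 'undefined').
def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False                 # falsity of either conjunct settles the conjunction
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    if a is True or b is True:
        return True                  # truth of either disjunct settles the disjunction
    if a is None or b is None:
        return None
    return False

values = [True, None, False]
for a in values:
    for b in values:
        print(a, b, "| and:", k_and(a, b), "| or:", k_or(a, b), "| not a:", k_not(a))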

Nevertheless, a definition of the predicate ‘. . . is true’ for a language may satisfy Convention T, the material adequacy condition laid down by Alfred Tarski, born Alfred Teitelbaum (1901-83). His method of ‘recursive’ definition enables us to say, for each sentence, what it is that its truth consists in, while giving no verbal definition of truth itself. The recursive definition of the truth predicate of a language is always provided in a ‘metalanguage’; Tarski is thus committed to a hierarchy of languages, each with its associated, but different, truth-predicate. Whilst this enables the approach to avoid the contradictions of the paradoxes, it conflicts with the idea that a language should be able to say everything that there is to say, and other approaches have become increasingly important.
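
Convention T and the recursive clauses can be displayed schematically; what follows is the usual textbook rendering rather than a quotation from Tarski.

\[
\text{(T)}\qquad \ulcorner S\urcorner \text{ is true in } L \ \text{ if and only if } \ p,
\]
where \(p\) is the metalanguage translation of the object-language sentence \(S\); the definition then proceeds recursively over sentence structure, for example
\[
\ulcorner A \wedge B\urcorner \text{ is true} \iff \ulcorner A\urcorner \text{ is true and } \ulcorner B\urcorner \text{ is true}, \qquad
\ulcorner \neg A\urcorner \text{ is true} \iff \ulcorner A\urcorner \text{ is not true}.
\]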

So the truth condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of ‘snow is white’ is that snow is white; the truth condition of ‘Britain would have capitulated had Hitler invaded’ is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.

Taken as a view, inferential semantics holds that the role of a sentence in inference gives a more important key to its meaning than its ‘external’ relations to things in the world. The meaning of a sentence becomes its place in a network of inferences that it legitimates. Also known as functional role semantics, procedural semantics, or conceptual role semantics, the view bears a relation to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.

Moreover, the semantic theory of truth is the view that if a language is provided with a truth definition, this is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself or about truth as shared across different languages. The view is similar to the disquotational theory.

The redundancy theory, also known as the ‘deflationary’ view of truth, was fathered by Gottlob Frege and the Cambridge mathematician and philosopher Frank Ramsey (1903-30), who showed how the distinction between the semantic paradoxes, such as that of the Liar, and Russell’s paradox made unnecessary the ramified type theory of Principia Mathematica, and the resulting axiom of reducibility. By taking all the sentences affirmed in a scientific theory that use some term, e.g. ‘quark’, and replacing the term by a variable, instead of saying that quarks have such-and-such properties the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of theoretical terms, the sentence gives the ‘topic-neutral’ structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of a theory, then by the Löwenheim-Skolem theorem the result will be interpretable, and the content of the theory may reasonably be felt to have been lost.
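Schematically (a standard illustration rather than anything in the passage itself): if the theory's claims involving the term ‘quark’ are gathered into a single sentence $T(\text{quark})$, the Ramsey sentence existentially generalizes on that term,

$$T(\text{quark}) \;\leadsto\; \exists X\, T(X),$$

asserting only that something plays the quark role, while staying silent on what that something is.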

All the while, both Frege and Ramsey agree that the essential claim is that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that ‘it is true that p’ says no more nor less than ‘p’ (hence, redundancy); and (2) that in less direct contexts, such as ‘everything he said was true’, or ‘all logical consequences of true propositions are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second may translate as ‘(∀p, q)((p & (p ➞ q)) ➞ q)’, where there is no use of a notion of truth.
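The first kind of case can be rendered in the same spirit (my own gloss, quantifying into sentence position):

$$\forall p\,(\text{he said that } p \rightarrow p),$$

again with no truth predicate doing any descriptive work.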

There are technical problems in interpreting all uses of the notion of truth in such ways; nevertheless, they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’, or ‘truth is a norm governing discourse’. Postmodern writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth. Perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that ‘p’, then ‘p’; discourse is to be regulated by the principle that it is wrong to assert ‘p’ when ‘not-p’.

The simplest formulation of the disquotational theory is the claim that expressions of the form ‘S is true’ mean the same as expressions of the form ‘S’. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ‘‘Dogs bark’ is true’, or whether they say ‘dogs bark’. In the former representation of what they say, the sentence ‘Dogs bark’ is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it, someone might know that ‘‘Dogs bark’ is true’ without knowing what it means (for instance, if he finds it in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the ‘redundancy theory of truth’.

Entailment is the relationship between a set of premises and a conclusion when the conclusion follows from the premises. Many philosophers identify this with its being logically impossible that the premises should all be true yet the conclusion false. Others are sufficiently impressed by the paradoxes of strict implication to look for a stronger relation, which would distinguish between valid and invalid arguments within the sphere of necessary propositions. The search for a stronger notion is the field of relevance logic.

From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is, as it were, a purely empirical enterprise.

But this point of view by no means embraces the whole of the actual process, for it slurs over the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator rather develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a ‘theory’. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the ‘truth’ of the theory lies.

Corresponding to the same complex of empirical data, there may be several theories which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which the theories differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development which is based on the hypothesis of the hereditary transmission of acquired characters. The Origin of Species was principally successful in marshalling the evidence for evolution, rather than in providing a convincing mechanism for genetic change; and Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as ‘neo-Darwinism’ became the orthodox theory of evolution in the life sciences.

In the 19th century, evolutionary ethics was the attempt to base ethical reasoning on the presumed facts about evolution; the movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). The premise is that later elements in an evolutionary path are better than earlier ones: the application of this principle then requires seeing Western society, laissez-faire capitalism, or some other object of approval as more evolved than more ‘primitive’ social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called ‘social Darwinism’ emphasises the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competition and aggressive relations between people in society. More recently the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.

Evolutionary psychology, once again, attempts to found explanations of the mind on evolutionary principles, in which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who ‘free-ride’ on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself, and by William James, as well as by the sociobiology of E.O. Wilson. The term is applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.

Another assumption frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin’s view of natural selection as a war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, cooperation appears to exist in a complementary relation to competition. It is the complementary relationship between the two that results in emergent self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.

According to E.O. Wilson, the ‘human mind evolved to believe in the gods’ and people ‘need a sacred narrative’ to have a sense of higher purpose. Yet it is also clear that the ‘gods’ in his view are merely human constructs and, therefore, there is no basis for dialogue between the world-views of science and religion. ‘Science for its part’, said Wilson, ‘will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments.’ The eventual result of the competition between the two world-views, he believes, will be the secularization of the human epic and of religion itself.

Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect ‘reality’. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing ‘reality’ as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide ‘comprehensible’ guides to living. In this way, Man’s imagination and intellect play vital roles in his survival and evolution.

Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of ‘logical positivist’ approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the ‘explanans’ (that which does the explaining) and the ‘explanandum’ (that which is to be explained). The approach culminated in the covering law model of explanation, or the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler’s (1571-1630) laws of planetary motion were explained by being deduced from Newton’s laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include whether covering laws are necessary to explanation (we explain many everyday events without overtly citing laws); whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and whether a purely logical relationship is adequate to capturing the requirements we make of explanations. These may include, for instance, that we have a ‘feel’ for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
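The deductive-nomological shape of the covering law model can be put schematically (a standard textbook rendering, not a quotation from the passage above):

$$\frac{L_1,\dots,L_n \qquad C_1,\dots,C_m}{E}$$

where the $L_i$ are laws of nature, the $C_j$ are statements of initial conditions, and the explanandum $E$ is deduced from them.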

The argument to the best explanation is the view that once we can select the best of the competing explanations of an event, then we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
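A quick numerical sketch of the coin example (using the numbers given above; the comparison itself is my own illustration): the biased hypothesis fits the data somewhat better than the fair-coin hypothesis, yet its lower antecedent probability may still make ‘fair coin’, or suspended judgement, the more sensible verdict.

```python
# Compare the likelihood of 530 heads in 1,000 tosses under a fair coin
# (p = 0.5) and under a coin biased to p = 0.53, using the binomial formula.
from math import comb

def binomial_likelihood(heads, tosses, p):
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

fair = binomial_likelihood(530, 1000, 0.50)
biased = binomial_likelihood(530, 1000, 0.53)

print(f"P(data | fair coin)   = {fair:.3e}")
print(f"P(data | biased 0.53) = {biased:.3e}")
print(f"likelihood ratio (biased / fair) = {biased / fair:.2f}")
```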

The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotics into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It also mingles with the metaphysics of truth and the relationship between sign and object. Much of the philosophy of the 20th century has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, the basis of the division between syntax and semantics, and the problems of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.

On this conception, to understand a sentence is to know its truth-conditions; and the conception has remained so central that those who offer opposing theories characteristically define their positions by reference to it. The conception of meaning as truth-conditions need not and should not be advanced as being in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.

The meaning of a complex expression is a function of the meanings of its constituents. This is indeed just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms - proper names, indexicals, and certain pronouns - this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
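A minimal sketch of this compositional picture (my own toy example, with strings standing in for objects): the semantic value of a name is its referent, the value of a predicate is a condition on objects, and the value of a sentential operator is a function of the truth-values of the sentences it operates on.

```python
# Singular terms: their semantic value is their reference.
reference = {"London": "London", "Paris": "Paris"}

# Predicates: the condition under which each is true of an arbitrary object.
beautiful_things = {"London", "Paris"}
predicates = {"is beautiful": lambda obj: obj in beautiful_things}

# An atomic sentence is true iff the predicate is true of the term's referent.
def atomic_true(name, predicate):
    return predicates[predicate](reference[name])

# A sentence-forming operator: its contribution is a function of truth-values.
def and_operator(v1, v2):
    return v1 and v2

print(atomic_true("London", "is beautiful"))                      # True
print(and_operator(atomic_true("London", "is beautiful"),
                   atomic_true("Paris", "is beautiful")))         # True
```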

The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom ‘‘London’ refers to the city in which there was a huge fire in 1666’ is a true statement about the reference of ‘London’. It is a consequence of a theory which substitutes this axiom for the corresponding axiom of our simple truth theory that ‘London is beautiful’ is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name ‘London’ without knowing that last-mentioned truth condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a truth-conditional theorist of meaning to state this constraint in a way which does not presuppose any previous, non-truth-conditional conception of meaning.

Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person’s language to be truly describable by a semantic theory containing a given semantic axiom.

Since the content of a claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its central claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition ‘p’, it is true that ‘p’ if and only if ‘p’. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning. If the claim that the sentence ‘Paris is beautiful’ is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher A.J. Ayer, the later Wittgenstein, Quine, Strawson, Horwich and - confusingly and inconsistently if this article is correct - Frege himself. But is the minimal theory correct?

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence, but in fact it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as ‘‘London is beautiful’ is true if and only if London is beautiful’ can be explained are facts about the reference of the name ‘London’ and about the condition under which the predicate ‘is beautiful’ is true of things. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’.

Sometimes known as the subjunctive conditional, the counterfactual conditional is a conditional of the form ‘if p were to happen q would’, or ‘if p were to have happened q would have happened’, where the supposition of ‘p’ is contrary to the known fact that ‘not-p’. Such assertions are nevertheless useful: ‘if you had broken the bone, the X-ray would have looked different’, or ‘if the reactor were to fail, this mechanism would click in’ are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals (‘if the metal were to be heated, it would expand’), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever ‘p’ is false, so there would be no division between true and false counterfactuals.
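To see why, recall the truth-functional equivalence of standard propositional logic:

$$p \supset q \;\equiv\; \neg p \lor q .$$

Whenever the antecedent ‘p’ is false, ‘p ⊃ q’ is automatically true; so, read materially, ‘if the bone had been broken the X-ray would have looked different’ and ‘if the bone had been broken the X-ray would have looked exactly the same’ would both come out true, and the distinction the counterfactual is meant to draw would vanish.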

Although the subjunctive form indicates a counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form or a simple conditional form: ‘if you run out of water, you will be in trouble’ seems equivalent to ‘if you were to run out of water, you would be in trouble’. In other contexts there is a big difference: ‘if Oswald did not kill Kennedy, someone else did’ is clearly true, whereas ‘if Oswald had not killed Kennedy, someone else would have’ is most probably false.

The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether ‘q’ is true in the ‘most similar’ possible worlds to ours in which ‘p’ is true. The similarity-ranking this approach needs has proved controversial, particularly since it may need to presuppose some prior notion of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and that categorizing them as counterfactual or not may be of limited use.
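A schematic sketch of the Lewis-style evaluation (the worlds and similarity scores below are invented purely for illustration, and the similarity ordering is exactly what the approach must supply and defend): ‘if p were the case, q would be’ is counted true iff ‘q’ holds at the most similar p-worlds.

```python
# Each world: (name, distance from the actual world, p holds?, q holds?)
worlds = [
    ("w1", 1, True,  True),
    ("w2", 2, True,  False),
    ("w3", 3, True,  True),
]

def counterfactual_true(worlds):
    p_worlds = [w for w in worlds if w[2]]        # worlds where the antecedent holds
    if not p_worlds:
        return True                                # vacuously true if p is impossible
    closest = min(w[1] for w in p_worlds)          # the most similar p-worlds
    return all(w[3] for w in p_worlds if w[1] == closest)

print(counterfactual_true(worlds))   # True: q holds at the closest p-world, w1
```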

A conditional is any proposition of the form ‘if p then q’. The condition hypothesized, ‘p’, is called the antecedent of the conditional, and ‘q’ the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, merely telling us that either not-p or q. Stronger conditionals include elements of modality, corresponding to the thought that ‘if p is true then q must be true’. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether this flexibility is to be explained semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.

We now turn to a philosophy of meaning and truth especially associated with the American philosopher of science and of language Charles Sanders Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as only a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that beliefs, including for example belief in God, are true if they work satisfactorily in the widest sense of the word. On James’s view almost any belief might be respectable, and even true, provided it works (but working is no simple matter for James). The apparently subjectivist consequences of this were widely assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F.C.S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909) he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an ‘automatic sweetheart’ or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others. The implication that this is what makes it true that other persons have minds is the disturbing part.

Modern pragmatists such as the American philosopher and critic Richard Rorty (1931-) and, in some writings, the philosopher Hilary Putnam (1926-) have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitudes, emotions, and needs. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connection with success in action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant’s doctrine of the primacy of practical over pure reason, and it continues to play an influential role in the theory of meaning and of truth.

As a matter of fact, functionalism in the philosophy of mind is the modern successor to behaviourism. Its early advocates were Putnam (1926-) and Sellars (1912-89), and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It could be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or ‘realization’ of the program the machine is running. The principal advantages of functionalism include its fit with the way we know of mental states both of ourselves and of others, which is via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity; since our actual practices of interpretation enable us to ascribe thoughts and desires to creatures whose causal architecture is very different from our own, it may then seem as though beliefs and desires can be ‘variably realized’ in different causal architectures, just as much as they can be in different neurophysiological states.
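The software analogy can be made concrete with a toy sketch (my own illustration, not any author's formal proposal): a ‘mental’ state is characterized purely by which inputs produce it, which state transitions it enters into, and which outputs it yields, with nothing said about the hardware that realizes it.

```python
# A toy 'machine table': each state is identified only by its causal role -
# which (state, input) pairs lead into it, what it causes next, and what
# behaviour it produces. Any hardware realizing this table counts as being
# in the same states, which is the functionalist point about realization.
TABLE = {
    # (current_state, input)   : (next_state, behaviour)
    ("content", "pinprick"): ("pain", "say 'ouch'"),
    ("pain", "pinprick"): ("pain", "wince"),
    ("pain", "aspirin"): ("content", "smile"),
    ("content", "aspirin"): ("content", "do nothing"),
}

def step(state, stimulus):
    return TABLE[(state, stimulus)]

state = "content"
for stimulus in ["pinprick", "pinprick", "aspirin"]:
    state, behaviour = step(state, stimulus)
    print(stimulus, "->", state, "/", behaviour)
```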

The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.

In mentioning the American psychologist and philosopher we find William James, who helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued the existence of God is partly verifiable because many people derive benefits from believing.

The Association for International Conciliation first published William James’s pacifist statement, 'The Moral Equivalent of War', in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism - a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution. Spelling and grammar represent standards of the time.

Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believed that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, whose theories suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept 'brittle', for example, is given by the observed consequences or properties that objects called 'brittle' exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called 'the will to believe' and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has renewed in the classic pragmatists - Peirce, James, and Dewey - as an alternative to Rorty’s interpretation of the tradition.

The philosophy of mind is the branch of philosophy that considers mental phenomena such as sensation, perception, thought, belief, desire, intention, memory, emotion, imagination, and purposeful action. These phenomena, which can be broadly grouped as thoughts and experiences, are features of human beings; many of them are also found in other animals. Philosophers are interested in the nature of each of these phenomena as well as their relationships to one another and to physical phenomena, such as motion.

The most famous exponent of dualism was the French philosopher René Descartes, who maintained that body and mind are radically different entities and that they are the only fundamental substances in the universe. Dualism, however, does not show how these basic entities are connected.

In the work of the German philosopher Gottfried Wilhelm Leibniz, the universe is held to consist of an infinite number of distinct substances, or monads. This view is pluralistic in the sense that it proposes the existence of many separate entities, and it is monistic in its assertion that each monad reflects within itself the entire universe.

Other philosophers have held that knowledge of reality is not derived from a priori principles, but is obtained only from experience. This type of metaphysics is called empiricism. Still another school of philosophy has maintained that, although an ultimate reality does exist, it is altogether inaccessible to human knowledge, which is necessarily subjective because it is confined to states of mind. Knowledge is therefore not a representation of external reality, but merely a reflection of human perceptions. This view is known as skepticism or agnosticism in respect to the soul and the reality of God.

The 18th-century German philosopher Immanuel Kant published his influential work The Critique of Pure Reason in 1781. Three years later, he expanded on his study of the modes of thinking with an essay entitled 'What is Enlightenment'? In this 1784 essay, Kant challenged readers to 'dare to know', arguing that it was not only a civic but also a moral duty to exercise the fundamental freedoms of thought and expression.


Several major viewpoints were combined in the work of Kant, who developed a distinctive critical philosophy called transcendentalism. His philosophy is agnostic in that it denies the possibility of a strict knowledge of ultimate reality; it is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the a priori character of the structural principles of this empirical knowledge.

These principles are held to be necessary and universal in their application to experience, for in Kant's view the mind furnishes the archetypal forms and categories (space, time, causality, substance, and relation) to its sensations, and these categories are logically anterior to experience, although manifested only in experience. Their logical anteriority to experience makes these categories or structural principles transcendental; they transcend all experience, both actual and possible. Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only insofar as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate beyond experience strictly by knowledge to the realm of ultimate reality constitutes the critical feature of his philosophy, giving the key word to the titles of his three leading treatises, Critique of Pure Reason, Critique of Practical Reason, and Critique of Judgment. In the system propounded in these works, Kant sought also to reconcile science and religion in a world of two levels, comprising noumena, objects conceived by reason although not perceived by the senses, and phenomena, things as they appear to the senses and are accessible to material study. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge. With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.

Some of Kant's most distinguished followers, notably Johann Gottlieb Fichte, Friedrich Schelling, Georg Wilhelm Friedrich Hegel, and Friedrich Schleiermacher, negated Kant's criticism in their elaborations of his transcendental metaphysics by denying the Kantian conception of the thing-in-itself. They thus developed an absolute idealism in opposition to Kant's critical transcendentalism.

Since the formation of the hypothesis of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant's contention that he had fixed definitely the limits of philosophical speculation. Notable among these later metaphysical theories are radical empiricism, or pragmatism, a native American form of metaphysics expounded by Charles Sanders Peirce, developed by William James, and adapted as instrumentalism by John Dewey; voluntarism, the foremost exponents of which are the German philosopher Arthur Schopenhauer and the American philosopher Josiah Royce; phenomenalism, as it is exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent evolution, or creative evolution, originated by the French philosopher Henri Bergson; and the philosophy of the organism, elaborated by the British mathematician and philosopher Alfred North Whitehead. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief; according to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. In the theory of voluntarism the will is postulated as the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analyzed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analyzed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined. The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and creativity.

In the 20th century the validity of metaphysical thinking has been disputed by the logical positivists (see Analytic and Linguistic Philosophy; Positivism) and by the so-called dialectical materialism of the Marxists. The basic principle maintained by the logical positivists is the verifiability theory of meaning. According to this theory a sentence has factual meaning only if it meets the test of observation. Logical positivists argue that metaphysical expressions such as 'Nothing exists except material particles' and 'Everything is part of one all-encompassing spirit' cannot be tested empirically. Therefore, according to the verifiability theory of meaning, these expressions have no factual cognitive meaning, although they can have an emotive meaning relevant to human hopes and feelings.
