January 20, 2010


Dissolving matter into energy makes neither of them any less physical. And the mark of the physical, as Descartes had pointed out, is that it is extended in space. Despite the insuperable problems with his dualism, Descartes' key insight remains valid: What distinguishes mind from matter is precisely that it does not occupy space. And this distinction holds just as fast between mind and energy-even so-called subtle energy (hypothetical "subtle energy" bodies are described as having extension, and other spatial attributes such as waves, vibrations, frequencies). Energy, even in the form of infinitesimal quanta or "subtle vibrations," still occupies space. And any theory of energy as a field clearly makes it spatial. Notions of "quantum consciousness" or "field consciousness"-and Woodhouse's "vibrations," "ripples," or "waves" of consciousness-therefore, are no more than vacuous jargon because they continue to fail to address the very distinction that Descartes formulated nearly four hundred years ago. But that's not even the most troublesome deficiency of energy talk. Even supposing physicists were able to show that quanta of energy did not occupy space; suppose the behaviour of quanta was so bizarre that they could do all sorts of "un-physical" things-such as transcend space and time; even if it could be shown that quanta were not "physical" in Descartes' sense . . . even supposing all of this, any proposed identity between energy and consciousness would still be invalid.


Energy talk fails to account for what is fundamentally most characteristic about consciousness, namely its subjectivity. No matter how fine-grained, or "subtle," energy could become, as an objective phenomenon it could never account for the fact of subjectivity-the "what-it-feels-like-from-within-experience." Ontologically, subjectivity just cannot emerge from wholly objective reality. Unless energy, at its ontologically most fundamental level, already came with some form of proto-consciousness, proto-experience, or proto-subjectivity, consciousness, experience, or subjectivity would never emerge or evolve in the universe.

Which brings us to Woodhouse's "energy monism" model, and the notion that "consciousness is the 'inside' of energy throughout the universe." Despite Dossey's criticism of this position, I think Woodhouse is here proposing a version of the only ontology that can account for a universe where both matter-energy and consciousness are real. He briefly summarizes why dualism, idealism, and materialism cannot adequately account for a universe consisting of both matter/energy and consciousness. (He adds "epiphenomenalism" to these three as though it were a distinct ontology. It is not. Epiphenomenalism is a form of property dualism, which in turn is a form of materialism.) He then proceeds to outline a "fifth" alternative: "energy monism." And although I believe his fundamental insight is correct, his discussion of this model in terms of double-aspectism falls victim to a common error in metaphysics: He confuses epistemology with ontology.

Woodhouse proposes that the weaknesses of the other ontologies-dualism, idealism, and materialism-can be avoided by adopting a "double-aspect theory which does not attempt to reduce either energy or consciousness to the other." And he goes on to build his alternative ontology on a double-aspect foundation. Now, I happen to be highly sympathetic with double-aspectism: It is a coherent and comprehensive (even "holistic") epistemology. As a way of knowing the world, double-aspectism opens up the possibility of a complementarity of subjective and objective perspectives.

But a perspective on the world yields epistemology-it reveals something about how we know what we know about the world. It does not reveal the nature of the world, which is the aim of ontology. Woodhouse makes an illegitimate leap from epistemology to ontology when he says, "This [energy monism] is a dualism of perspective, not of fundamental stuff," and concludes that "each is the other." Given his epistemological double-aspectism, the best Woodhouse can do is claim to be an ontological agnostic (as, in fact, Dossey does). He can talk about viewing the world from two complementary perspectives, but he cannot talk about the nature of the world in itself. Certainly, he cannot legitimately conclude from talk about aspects or perspectives that the ultimate nature of the world is "energy monism" or that "consciousness is energy." Epistemology talk cannot yield ontology talk-as Kant, and later Bohr, were well aware. Kant said we cannot know the thing-in-itself. The best we can hope for is to know some details about the instrument of knowing. Bohr said that the task of quantum physics is not to describe reality as it is in itself, but to describe what we can say about reality.

The issue of whether energy talk is appropriate for consciousness is to be resolved ontologically not epistemologically. At issue is whether consciousness is or is not a form of energy-not whether it can be known from different perspectives. If it is a form of energy, then energy talk is legitimate. If not, energy talk is illegitimate. But the nature of consciousness is not to be "determined by perspective," as Woodhouse states: "insides and outsides are determined by perspectives." If "insides" (or "outsides") were merely a matter of perspective, then any ontology would do, as long as we allowed for an epistemological dualism or complementarity (though, of course, the meaning of "inside" and "outside" would differ according to each ontology). What Woodhouse doesn't do (which he needs to do to make his epistemology grow ontological legs) is establish an ontology compatible with his epistemology of "inside" and "outside." In short, he needs to establish an ontological distinction between consciousness and energy. But this is precisely what Woodhouse aims to avoid with his model of energy monism. Dossey is right, I think, to describe energy talk about consciousness as a legacy of Newtonian physics (i.e. of visuo-kinesthetic mechanics); and this applies equally to "classical energy talk," "quantum-energy talk," "subtle-energy talk," and Woodhouse's "dual-aspect energy talk." In an effort to defend energy talk about consciousness, Woodhouse substitutes epistemology for ontology, and leaves the crucial issue unresolved.

Unless Woodhouse is willing to ground his double-aspect epistemology in an ontological complementarity which distinguishes mind from matter, but does not separate them, he runs the risk of unwittingly committing "reductionism all over again"-despite his best intentions. In fact, Woodhouse comes very close to proposing just the kind of complementary ontology his model needs: "Consciousness isn't just a different level or wave form of vibrating energy; it is the 'inside' of energy-the pole of interiority perfectly understandable to every person who has had a subjective experience of any kind" (emphasis added). This is ontology talk, not epistemology talk. Woodhouse's error is to claim that the distinction "inside" (consciousness) and "outside" (energy) is merely a matter of perspective.

In order to defend his thesis of "energy monism," Woodhouse seems to want it both ways. On the one hand, he talks of consciousness and energy being ontologically identical-"each is the other"; on the other, he makes a distinction between consciousness and energy-"energy is the 'outside' of consciousness and consciousness is the 'inside' of energy." He attempts to avoid the looming contradiction of consciousness and energy being both "identical yet distinct" by claiming that the identity is ontological while the distinction is epistemological. But the distinction cannot be merely epistemological-otherwise, as already pointed out, any ontology would do. And this is clearly not Woodhouse's position. Energy monism, as proposed by Woodhouse, is an ontological claim. Woodhouse admits as much when he calls energy monism "a fifth alternative" to the ontologies of dualism, idealism, and materialism (and epiphenomenalism [sic]) which he previously dismissed.

Furthermore, Woodhouse's "inside" and "outside" are not merely epistemological, since he means them to be synonyms for "subjectivity" and "objectivity" respectively. Although subjectivity and objectivity are epistemological perspectives, they are not only that. Subjectivity and objectivity can have epistemological meaning only if they refer to an underlying ontological distinction-between what Sartre (1956) called the "for-itself" and the "in-itself," between that which feels and that which is felt. Despite his claims to the contrary, Woodhouse's distinction between "inside" and "outside" is ontological-not merely epistemological. And given such an ontological distinction between consciousness and energy, it is illegitimate for him to conclude from his double-aspect epistemology the identity claim that "consciousness is energy." Woodhouse's consciousness-energy confusion, it seems to me, results from: (1) a failure to distinguish between non-identity and separation, and (2) a desire to avoid the pitfalls of Cartesian dualism. The first is a mistake, the second is not-but he conflates the two. He seems to think that allowing for a non-identity between consciousness and energy is tantamount to their being ontologically separate (as in Cartesian dualism). But non-identity does not entail separation: It is possible to distinguish two phenomena (such as the form and substance of a thing), yet recognize them as inseparable elements of a unity. Unity does not mean identity; and distinction does not mean separation. (I will return to this point shortly.) This muddle between epistemology and ontology is my major criticism of Woodhouse's position. He could resolve it by following through on his epistemological convictions and recognizing that his position is compatible with (and would be grounded by) an ontological complementarity of consciousness and energy.

The ontological position implicit (though explicitly denied) in Woodhouse's double-aspect model-where consciousness ("inside") and energy ("outside") are actual throughout the universe-is none other than panpsychism, or what has been variously called panexperientialism (Griffin, 1997) and radical materialism (de Quincey, 1997). It is the fourth alternative to the major ontologies of dualism, idealism, and materialism, and it has a very long lineage in the Western philosophical tradition-going all the way back to Aristotle and beyond to the Presocratics. Woodhouse does not acknowledge any of this lineage, as if his double-aspect model were a novel contribution to the mind-matter debate. Besides Aristotle's hylomorphism, he could have referred to Leibniz's monads, Whitehead's "actual occasions," and Teilhard de Chardin's "tangential energy" and the "within" as precursors to the distinction he makes between "inside" and "outside." This oversight weakens the presentation of his case. Of course, to have introduced any or all of these mind-body theories would have made Woodhouse's ontological omission all the more noticeable.

One other weakness in Woodhouse's article is his appeal to the Perennial Philosophy and the Great Chain of Being as support for energy talk that unites spiritual and physical realities. "The non-dual Source of some spiritual traditions . . . is said to express itself energetically (outwardly) on different levels in the Great Chain of Being (matter being the densest form of energy) . . ." Woodhouse is here referring to the many variations of idealist emanationism, in which spirit is said to pour itself forth through a sequence of ontological levels and condense into matter. But just as I would say Woodhouse's energy monism unwittingly entails physicalist reductionism, my criticism of emanationism is that it, too, ultimately "physicalizes" spirit-which no idealist worth his or her salt would want to claim. Energy monism thus runs the same risk of "physicalizing" spirit as emanationism, and both run the risk of covert dualism or covert materialism. So I see no support for Woodhouse's position as an alternative to dualism or materialism coming from the Perennial Philosophy.

Dossey's critique of Woodhouse's energy monism and energy talk is sound-particularly his caution not to assume that the "nonlocal" phenomena of quantum physics are related to the "nonlocal" phenomena of consciousness and distant healing by anything more than a commonality of terminology. The caution is wise. However, his critique of Woodhouse's "inside" and "outside" fails to address Woodhouse's confusion of epistemology with ontology. Had Dossey seen that Woodhouse's intent was to confine the "inside/outside" distinction to epistemology, he might not have couched his critique in ontological terms. Dossey says, "By emphasizing inside and outside, interior and exterior, we merely create new boundaries and interfaces which require their own explanations." The "boundaries and interfaces" Dossey is talking about are ontological, not epistemological. To this extent, Dossey's critique misses the fact that Woodhouse is explicitly engaged in epistemology talk. On the other hand, Dossey is correct to assume that Woodhouse's epistemological distinction between "inside" and "outside" necessarily implies an ontological distinction-between "inside" (consciousness) and "outside" (energy).

Dossey's criticism of Woodhouse's energy monism, thus, rests on an ontological objection: Even if we do not yet have any idea of how to talk ontologically about consciousness, we at least know that (despite Woodhouse's contrary claim) consciousness and energy are not ontologically identical. There is an ontological distinction between "inside/consciousness" and "outside/energy." Thus, Dossey concludes, energy talk (which is ontological talk) is inappropriate for consciousness. On this, I agree with Dossey, and disagree with Woodhouse. However, Dossey goes on to take issue with Woodhouse's "inside/outside" distinction as a solution to the mind-body relation. If taken literally, Dossey's criticism is valid: "Instead of grappling with the nature of the connection between energy and consciousness, we are now obliged to clarify the nature of the boundary between 'inside' and 'outside' . . ." But I suspect that Woodhouse uses the spatial concepts "inside/outside" metaphorically because like the rest of us he finds our language short on nonphysical metaphors (though, as we shall see, nonspatial metaphors are available).

It may be, of course, that Woodhouse has not carefully thought through the implications of this spatial metaphor, and how it leaves him open to just the sort of critique that Dossey levels. Dossey, I presume, is as much concerned with Woodhouse's claim that "consciousness is energy," meaning it is the "inside" of energy, as he is about the difficulties in taking the spatial metaphor of "inside/outside" literally. On the first point, I share Dossey's concern. I am less concerned about the second. As long as we remember that talk of "interiority" and "exteriority" are metaphors, I believe they can be very useful ways of pointing toward a crucial distinction between consciousness and energy.

The metaphor becomes a problem if we slip into thinking that it points to a literal distinction between two kinds of "stuff" (as Descartes did), or indeed to a distinction revealing two aspects of a single kind of "stuff." This latter slip seems to be precisely the mistake that Woodhouse makes with his energy monism. By claiming that consciousness is energy, Woodhouse in effect-despite his best intentions to the contrary-succeeds in equating consciousness with (and thereby reducing it to) a physical "stuff." His mistake-and one that Dossey may be buying into-is to use "stuff-talk" for consciousness. It is a logical error to conclude from (1) there is only one kind of fundamental "stuff" (call it energy), and (2) this "stuff" has an interiority (call it consciousness), that (3) the interiority is also composed of that same "stuff"-i.e. that consciousness is energy. It could be that "interiority/consciousness" is not "stuff" at all, but something altogether distinct ontologically-for example, feeling or process-something which is intrinsic to, and therefore inseparable from, the "stuff." It could be that the world is made up of stuff that feels, where there is an ontological distinction between the feeling (subjectivity, experience, consciousness) and what is felt (objectivity, matter-energy).

Dossey's rejection of the "inside/outside" metaphor seems to presume (à la Woodhouse) that "inside" means the interior of some "stuff" and is that "stuff"-in this case, energy-stuff. But that is not the position of panpsychist and process philosophers from Leibniz down through Bergson, James, and Whitehead, to Hartshorne and Griffin. If we make the switch from a "stuff-oriented" to a process-oriented ontology, then the kind of distinction between consciousness and energy dimly implicit in Woodhouse's model avoids the kind of criticism that Dossey levels at the "inside/outside" metaphor. Process philosophers prefer to use "time-talk" over "space-talk." Instead of talking about consciousness in terms of "insides," they talk about "moments of experience" or "duration." Thus, if we view the relationship between consciousness and energy in terms of temporal processes rather than spatial stuff, we can arrive at an ontology similar to Whitehead's, where the relationship between consciousness and energy is understood temporally. It is the relationship between subjectivity and objectivity, where the subject is the present state of an experiential process, and the object is its prior state. Substitute "present" for "interior" and "past" or "prior" for "exterior" and we have a process ontology which avoids the "boundary" difficulties raised by Dossey. (There is no boundary between past and present-the one flows into the other; the present incorporates the past.) From the perspective of panpsychism or radical materialism, consciousness and energy, mind and matter, subject and object always go together. All matter-energy is intrinsically sentient and experiential. Sentience-consciousness and matter-energy are inseparable, but nevertheless distinct. On this view, consciousness is the process of matter-energy informing itself.

Although our language is biased toward physics-energy talk, full of mechanistic metaphors, this is clearly not the whole story. The vernacular of the marketplace, as well as the language of science itself, is also rich with non-mechanistic metaphors, metaphors which flow directly from experience itself. Ironically, not only do we apply these consciousness metaphors to the mind and mental events, but also to the world of matter in our attempts to understand its deeper complexities and dynamics. For example, systems theory and evolutionary biology-even at the reductionist level of molecular genetics-are replete with words such as "codes," "information," "meaning," "self-organizing," and the p-word: "purpose." So we are not limited to mechanistic metaphors when describing either the world of matter or the world of mind. But-and this is the important point-because of our bias toward visuo-muscular images, we tend to forget that metaphors of the mind are sui generis, and, because of our scientific and philosophical bias in favor of mechanism, we often attempt to reduce metaphors of the mind to metaphors of matter. My proposal for consciousness talk is this: Recognize the limitations of mechanistic metaphors, and the inappropriateness of literal energy talk, when discussing consciousness. Instead, acknowledge the richness and appropriateness of metaphors of meaning when talking about the mind. In short: Drop mechanistic metaphors (energy talk) and take up meaning metaphors (consciousness talk) when talking about consciousness.

One of the thorniest issues in "energy" and "consciousness" work is the tendency to confuse the two. Consciousness does not equal energy, yet the two are inseparable. Consciousness is the "witness" which experiences the flow of energy, but it is not the flow of energy. We might say consciousness is the felt interiority of energy/matter-but it is not energy.

If we say that consciousness is a form of energy, then we have two options: either it is a physical form of energy (even if a very, very subtle energy), or it is not a physical form of energy. If we say that consciousness is a form of energy that is physical, then we are reducing consciousness (and spirit) to physics. And few of us, unless we are materialists, want to do that. If we say that consciousness is a form of energy that is not physical, then we need to say in what way psychic energy differs from physical energy. If we cannot explain what we mean by "psychic energy" and how it differs from physical energy, then we should ask ourselves why we use the term "energy" at all. Our third alternative is to say that consciousness is not a form of energy (physical or nonphysical). This is not to imply that consciousness has nothing to do with energy. In fact, the position I emphasize in my graduate classes is that consciousness and energy always go together. They cannot ever be separated. But this is not to say they are not distinct. They are distinct-energy is energy, consciousness is consciousness-but they are inseparable (like two sides of a coin or, better, like the shape and substance of a tennis ball: You can't separate the shape from the substance of the ball, but shape and substance are definitely distinct).

So, for example, if someone has a kundalini experience, they may feel a rush of energy up the chakra system . . . but to say that the energy flow is consciousness is to mistake the object (energy flow) for the subject, for what perceives (consciousness) the object. Note the two importantly distinct words in the phrase "feel the rush of energy . . . " On the one hand there is the "feeling" (or the "feeler"), on the other, there is what is being felt or experienced (the energy). Even our way of talking about it reveals that we detect a distinction between feeling (consciousness) and what we feel (energy). Yes, the two go together, but they are not the same. Unity, or unification, or holism, does not equal identity. To say that one aspect of reality (say, consciousness) cannot be separated from another aspect of reality (say, matter-energy) is not to say both aspects of reality (consciousness and matter-energy) are identical.

Consciousness is neither identical to energy (monism) nor a separate substance or energy in addition to physical matter or energy (dualism)-it is the "interiority," the what-it-feels-like-from-within, the subjectivity that is intrinsic to the reality of all matter and energy (panpsychism or radical materialism). If you take a moment to pay attention to what's going on in your own body right now, you'll see-or feel-what I mean: The physical matter of your body, including the flow of whatever energies are pulsing through you, is the "stuff" of your organism. But there is also a part of you that is aware of, or feels, the pumping of your blood (and other energy streams). That aspect of you that feels the matter-energy in your body is your consciousness. We could express it this way: "Consciousness is the process of matter-energy informing itself." Consciousness is the ability that matter-energy has to feel, to know, and to direct itself. The universe could be (and probably is) full of energy flows, vortices, and vibrations, but without consciousness, all this activity would be completely unfelt and unknown. Only because there is consciousness can the flow of energy be felt, known, and purposefully directed.

Over the past three decades, philosophy of science has grown increasingly "local." Concerns have switched from general features of scientific practice to concepts, issues, and puzzles specific to particular disciplines. Philosophy of neuroscience is a natural result. This emerging area was also spurred by remarkable recent growth in the neurosciences. Cognitive and computational neuroscience continues to encroach upon issues traditionally addressed within the humanities, including the nature of consciousness, action, knowledge, and normativity. Empirical discoveries about brain structure and function suggest ways that "naturalistic" programs might develop in detail, beyond the abstract philosophical considerations in their favour.

The literature distinguishes "philosophy of neuroscience" and "neurophilosophy." The former concerns foundational issues within the neurosciences. The latter concerns the application of neuroscientific concepts to traditional philosophical questions. Exploring various concepts of representation employed in neuroscientific theories is an example of the former. Examining implications of neurological syndromes for the concept of a unified self is an example of the latter. In this entry, we will assume this distinction and discuss examples of both.

Contrary to some opinion, actual neuroscientific discoveries have exerted little influence on the details of materialist philosophies of mind. The "neuroscientific milieu" of the past four decades has made it harder for philosophers to adopt dualism. But even the "type-type" or "central state" identity theories that rose to brief prominence in the late 1950s drew upon few actual details of the emerging neuroscience. Recall the favourite early example of a psychoneural identity claim: pain is identical to C-fibre firing. The "C fibres" turned out to be related to only a single aspect of pain transmission. Early identity theorists did not emphasize specific psychoneural identity hypotheses, admitting that their "neuro" terms were placeholders for concepts from a future neuroscience. Their arguments and motivations were philosophical, even if the ultimate justification of the program was held to be empirical.

The apology for this lacuna by early identity theorists was that neuroscience at the time was too nascent to provide any plausible identities. But potential identities were afoot. David Hubel and Torsten Wiesel's (1962) electrophysiological demonstrations of the receptive field properties of visual neurons had been reported with great fanfare. Using their techniques, neurophysiologists began discovering neurons throughout visual cortex responsive to increasingly abstract features of visual stimuli: from edges to motion direction to colours to properties of faces and hands. Even more notably, Donald Hebb had published The Organization of Behaviour (1949) a decade earlier. Therein he offered detailed explanations of psychological phenomena in terms of known neural mechanisms and anatomical circuits. His psychological explananda included features of perception, learning, memory, and even emotional disorders. He offered these explanations as potential identities. One philosopher who did take note of some available neuroscientific detail was Barbara Von Eckardt-Klein (1975). She discussed the identity theory with respect to sensations of touch and pressure, and incorporated then-current hypotheses about neural coding of sensation modality, intensity, duration, and location as theorized by Mountcastle, Libet, and Jasper. Yet she was a glaring exception. By and large, available neuroscience at the time was ignored by both philosophical friends and foes of early identity theories.

Philosophical indifference to neuroscientific detail became "principled" with the rise and prominence of functionalism in the 1970s. The functionalists' favourite argument was based on multiple realizability: a given mental state or event can be realized in a wide variety of physical types (Putnam, 1967; Fodor, 1974). So a detailed understanding of one type of realizing physical system (e.g., brains) will not shed light on the fundamental nature of mind. A psychological state-type is autonomous from any single type of its possible realizing physical mechanisms. (See the entry on "Multiple Realizability" in this Encyclopedia.) Instead of neuroscience, scientifically-minded philosophers influenced by functionalism sought evidence and inspiration from cognitive psychology and "program-writing" artificial intelligence. These disciplines abstract away from underlying physical mechanisms and emphasize the "information-bearing" properties and capacities of representations (Haugeland, 1985). At the same time, neuroscience was delving directly into cognition, especially learning and memory. For example, Eric Kandel (1976) proposed presynaptic mechanisms governing transmitter release rate as a cell-biological explanation of simple forms of associative learning. With Robert Hawkins (1984) he demonstrated how cognitivist aspects of associative learning (e.g., blocking, second-order conditioning, overshadowing) could be explained cell-biologically by sequences and combinations of these basic forms implemented in higher neural anatomies. Working on the postsynaptic side, neuroscientists began unraveling the cellular mechanisms of long-term potentiation (LTP). Physiological psychologists quickly noted its explanatory potential for various forms of learning and memory. Yet few "materialist" philosophers paid any attention. Why should they? Most were convinced functionalists.
They believed that the "engineering level" details might be important to the clinician, but were irrelevant to the theorist of mind.

A major turning point in philosophers' interest in neuroscience came with the publication of Patricia Churchland's Neurophilosophy (1986). The Churchlands (Pat and husband Paul) were already notorious for advocating eliminative materialism. In her (1986) book, Churchland distilled eliminativist arguments of the past decade, unified the pieces of the philosophy of science underlying them, and sandwiched the philosophy between a five-chapter introduction to neuroscience and a 70-page chapter on three then-current theories of brain function. She was unapologetic about her intent. She was introducing philosophy of science to neuroscientists and neuroscience to philosophers. Nothing could be more obvious, she insisted, than the relevance of empirical facts about how the brain works to concerns in the philosophy of mind. Her term for this interdisciplinary method was "co-evolution" (borrowed from biology). This method seeks resources and ideas from anywhere on the theory hierarchy above or below the question at issue. Standing on the shoulders of philosophers like Quine and Sellars, Churchland insisted that specifying some point where neuroscience ends and philosophy of science begins is hopeless because the boundaries are poorly defined. Neurophilosophers would pick and choose resources from both disciplines as they saw fit.

Three themes predominate in Churchland's philosophical discussion: developing an alternative to the logical empiricist theory of intertheoretic reduction; responding to property-dualistic arguments based on subjectivity and sensory qualia; and responding to anti-reductionist multiple realizability arguments. These projects have remained central to neurophilosophy over the past decade. John Bickle (1998) extends the principal insight of Clifford Hooker's (1981) post-empiricist theory of intertheoretic reduction. He quantifies key notions using a model-theoretic account of theory structure adapted from the structuralist program in philosophy of science. He also makes explicit the form of argument scientists employ to draw ontological conclusions (cross-theoretic identities, revisions, or eliminations) based on the nature of the intertheoretic reduction relations obtaining in specific cases. For example, physicists concluded that visible light, a theoretical posit of optics, is electromagnetic radiation within specified wavelengths, a theoretical posit of electromagnetism: a cross-theoretic ontological identity. In another case, however, chemists concluded that phlogiston did not exist: an elimination of a kind from our scientific ontology. Bickle explicates the nature of the reduction relation in a specific case using a semi-formal account of ‘intertheoretic approximation’ inspired by structuralist results. Paul Churchland (1996) has carried on the attack on property-dualistic arguments for the irreducibility of conscious experience and sensory qualia. He argues that acquiring some knowledge of existing sensory neuroscience increases one's ability to ‘imagine’ or ‘conceive of’ a comprehensive neurobiological explanation of consciousness. He defends this conclusion using a thought-experiment based on the history of optics and electromagnetism. Finally, the literature critical of the multiple realizability argument has begun to flourish.
Although the multiple realizability argument remains influential among nonreductive physicalists, it no longer commands the universal acceptance it once did. Replies to the multiple realizability argument based on neuroscientific details have appeared. For example, William Bechtel and Jennifer Mundale (1997, in press) argue that neuroscientists use psychological criteria in brain mapping studies. This fact undercuts the likelihood that psychological kinds are multiply realized.

Eliminative materialism (EM) is the conjunction of two claims. First, our common sense ‘belief-desire’ conception of mental events and processes, our ‘folk psychology,’ is a false and misleading account of the causes of human behaviour. Second, like other false conceptual frameworks from both folk theory and the history of science, it will be replaced by, rather than smoothly reduced or incorporated into, a future neuroscience. Folk psychology is the collection of common homilies about the causes of human behaviour. You ask me why Marica is not accompanying me this evening. I reply that her grant deadline is looming. You nod sympathetically. You understand my explanation because you share with me a generalization that relates beliefs about looming deadlines, desires about meeting professionally and financially significant ones, and ensuing free-time behaviour. It is the collection of these kinds of homilies that EM claims to be flawed beyond significant revision. Although this example involves only beliefs and desires, folk psychology contains an extensive repertoire of propositional attitudes in its explanatory nexus: hopes, intentions, fears, imaginings, and more. To the extent that scientific psychology (and neuroscience) retains folk concepts, EM applies to it as well.

EM is physicalist in the classical sense, postulating some future brain science as the ultimately correct account of (human) behaviour. It is eliminative in predicting the future removal of folk psychological kinds from our post-neuroscientific ontology. EM proponents often employ scientific analogies. Oxidative reactions as characterized within elemental chemistry bear no resemblance to phlogiston release. Even the "direction" of the two processes differs: oxygen is gained when an object burns (or rusts), whereas phlogiston was said to be lost. The result of this theoretical change was the elimination of phlogiston from our scientific ontology. There is no such thing. For the same reasons, according to EM, continuing development in neuroscience will reveal that there are no such things as beliefs and desires as characterized by common sense.

Here we focus only on the way that neuroscientific results have shaped the arguments for EM. Surprisingly, only one argument has been strongly influenced. (Most arguments for EM stress the failures of folk psychology as an explanatory theory of behaviour.) This argument is based on a development in cognitive and computational neuroscience that might provide a genuine alternative to the representations and computations implicit in folk psychological generalizations. Many eliminative materialists assume that folk psychology is committed to propositional representations and computations over their contents that mimic logical inferences. Even though discovering such an alternative has been an eliminativist goal for some time, neuroscience only began delivering on this goal over the past fifteen years. Points in and trajectories through vector spaces, as an interpretation of synaptic events and neural activity patterns in biological neural networks, are key features of this new development. This argument for EM hinges on the differences between these notions of cognitive representation and the propositional attitudes of folk psychology (Churchland, 1987). However, this argument will be opaque to those with no background in contemporary cognitive and computational neuroscience, so we need to present a few scientific details. With these details in place, we will return to this argument for EM.

At one level of analysis the basic computational element of a neural network (biological or artificial) is the neuron. This analysis treats neurons as simple computational devices, transforming inputs into output. Both neuronal inputs and outputs reflect biological variables. For the remainder of this discussion, we will assume that neuronal inputs are frequencies of action potentials (neuronal "spikes") in the axons whose terminal branches synapse onto the neuron in question. Neuronal output is the frequency of action potentials in the axon of the neuron in question. A neuron computes its total input (usually treated mathematically as the sum of the products of the signal strength along each input line times the synaptic weight on that line). It then computes a new activation state based on its total input and current activation state, and a new output state based on its new activation value. The neuron's output state is transmitted as a signal strength to whatever neurons its axon synapses on. The output state reflects systematically the neuron's new activation state.
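The neuron-level computation just described can be sketched in a few lines of code. This is a minimal illustration only: the function name, the sigmoid squashing function, and the decay term are assumptions added for concreteness, not details given in the text.

```python
# Sketch of the simple neuron model described above: total input is the
# sum of (input signal strength * synaptic weight) over all input lines;
# a new activation is computed from total input and the current
# activation state; the output reflects the new activation.
import math

def neuron_output(inputs, weights, prev_activation=0.0, decay=0.5):
    """Return an output spiking rate from input rates and synaptic weights."""
    # Total input: sum of products of signal strength and synaptic weight.
    total_input = sum(i * w for i, w in zip(inputs, weights))
    # New activation depends on both total input and previous activation
    # (the mixing coefficient 'decay' is an illustrative assumption).
    activation = decay * prev_activation + (1 - decay) * total_input
    # Output state systematically reflects the new activation (sigmoid
    # keeps the "frequency" bounded).
    return 1.0 / (1.0 + math.exp(-activation))

out = neuron_output([0.9, 0.2, 0.7], [0.5, -0.3, 0.8])
```

Any monotonic squashing function would serve here; the point is only that the neuron is treated as a simple input-output device.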

Analyzed at this level, both biological and artificial neural networks are interpreted naturally as vector-to-vector transformers. The input vector consists of values reflecting activity patterns in axons synapsing on the network's neurons from outside (e.g., from sensory transducers or other neural networks). The output vector consists of values reflecting the activity patterns generated in the network's neurons that project beyond the net (e.g., to motor effectors or other neural networks). Given that neurons' activity depends partly upon their total input, and total input depends partly on synaptic weights (e.g., presynaptic neurotransmitter release rate, number and efficacy of postsynaptic receptors, availability of enzymes in the synaptic cleft), the capacity of biological networks to change their synaptic weights makes them plastic vector-to-vector transformers. In principle, a biological network with plastic synapses can come to implement any vector-to-vector transformation that its composition permits (number of input units, output units, processing layers, recurrence, cross-connections, etc.).
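The idea of a plastic vector-to-vector transformer can be made concrete with a toy one-layer network. The names and weight values below are invented for illustration; the point is only that changing a single synaptic weight changes which output vector the same input vector yields.

```python
# A one-layer network as a vector-to-vector transformer: each output
# value is the weighted sum of the input vector along one row of the
# synaptic weight matrix.

def transform(input_vector, weight_matrix):
    """Map an input activity vector to an output activity vector."""
    return [sum(w * x for w, x in zip(row, input_vector))
            for row in weight_matrix]

weights = [[0.2, 0.8],
           [0.5, -0.5]]

before = transform([1.0, 1.0], weights)  # output under the original weights
weights[0][0] = 0.9                      # synaptic plasticity: alter one weight
after = transform([1.0, 1.0], weights)   # same input, different output vector
```

With enough units, layers, and adjustable weights, such a network can implement a wide range of vector-to-vector transformations, which is the sense of "plasticity" at issue in the text.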

The anatomical organization of the cerebellum provides a clear example of a network amenable to this computational interpretation. The cerebellum is the bulbous convoluted structure dorsal to the brainstem. A variety of studies (behavioural, neuropsychological, single-cell electrophysiological) implicate this structure in motor integration and fine motor coordination. Mossy fibres (axons) from neurons outside the cerebellum synapse on cerebellar granule cells, which in turn project to parallel fibres. Activity patterns across the collection of mossy fibres (frequency of action potentials per time unit in each fibre projecting into the cerebellum) provide values for the input vector. Parallel fibres make multiple synapses on the dendritic trees and cell bodies of cerebellar Purkinje neurons. Each Purkinje neuron "sums" its post-synaptic potentials (PSPs) and emits a train of action potentials down its axon based (partly) on its total input and previous activation state. Purkinje axons project outside the cerebellum. The network's output vector is thus the ordered values representing the pattern of activity generated in each Purkinje axon. Changes to the efficacy of individual synapses on the parallel fibres and the Purkinje neurons alter the resulting PSPs in Purkinje axons, generating different axonal spiking frequencies. Computationally, this amounts to a different output vector to the same input activity pattern (plasticity).

This interpretation puts the useful mathematical resources of dynamical systems into the hands of computational neuroscientists. Vector spaces are an example. For example, learning can be characterized fruitfully in terms of changes in synaptic weights in the network and subsequent reduction of error in network output. (This approach goes back to Hebb, 1949, although not within the vector-space interpretation that follows.) A useful representation of this account is on a synaptic weight-error space, where one dimension represents the global error in the network's output to a given task, and all other dimensions represent the weight values of individual synapses in the network. Points in this multi-dimensional state space represent the global performance error correlated with each possible collection of synaptic weights in the network. As the weights change with each performance (in accordance with a biologically-implemented learning algorithm), the global error of network performance continually decreases. Learning is represented as synaptic weight changes correlated with a descent along the error dimension in the space (Churchland and Sejnowski, 1992). Representations (concepts) can be portrayed as partitions in multi-dimensional vector spaces. An example is a neuron activation vector space. A graph of such a space contains one dimension for the activation value of each neuron in the network (or some subset). A point in this space represents one possible pattern of activity in all neurons in the network. Activity patterns generated by input vectors that the network has learned to group together will cluster around a (hyper-) point or subvolume in the activity vector space. Any input pattern sufficiently similar to this group will produce an activity pattern lying in geometrical proximity to this point or subvolume.
Paul Churchland (1989) has argued that this interpretation of network activity provides a quantitative, neurally-inspired basis for prototype theories of concepts developed recently in cognitive psychology.
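The weight-error-space picture of learning can be illustrated with a deliberately tiny example. The specifics below (a single synapse, squared output error, a fixed learning rate) are assumptions chosen for simplicity, not details from the text; the point is that each weight update corresponds to a step downhill along the error dimension of the space.

```python
# Learning as descent in a synaptic weight-error space: one dimension is
# the global output error, the other is the value of a single synaptic
# weight. Each update moves the weight so that error decreases.

def error(weight, x=1.0, target=0.8):
    """Global output error of a one-synapse 'network' on a single task."""
    return (weight * x - target) ** 2

w, rate = 0.0, 0.1
errors = []
for _ in range(20):
    errors.append(error(w))
    grad = 2 * (w - 0.8)   # derivative of the squared error w.r.t. the weight
    w -= rate * grad       # synaptic weight change: a step down the error surface

# Descent along the error dimension: error never increases across updates.
errors_decrease = all(later <= earlier
                      for earlier, later in zip(errors, errors[1:]))
```

In a realistic network the space has one dimension per synapse rather than one, but the geometry of learning-as-descent is the same.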

Using this theoretical development, Paul Churchland (1987) has offered a novel argument for EM. According to this approach, activity vectors are the central kind of representation and vector-to-vector transformations are the central kind of computation in the brain. This contrasts sharply with the propositional representations and logical/semantic computations postulated by folk psychology. Vectorial content is unfamiliar and alien to common sense. This cross-theoretic difference is at least as great as that between oxidative and phlogiston concepts, or kinetic-corpuscular and caloric fluid heat concepts. Phlogiston and caloric fluid are two "parade" examples of kinds eliminated from our scientific ontology due to the nature of the intertheoretic relation obtaining between the theories with which they are affiliated and the theories that replaced these. The structural and dynamic differences between the folk psychological and emerging cognitive neuroscientific kinds suggest that the theories affiliated with the latter will also correct significantly the theory affiliated with the former. This is the key premise of an eliminativist argument based on predicted intertheoretic relations. And these intertheoretic contrasts are no longer just an eliminativist's goal. Computational and cognitive neuroscience has begun to deliver an alternative kinematics for cognition, one that provides no structural analogue for the propositional attitudes.

Certainly the replacement of propositional contents by vectorial alternatives implies significant correction to folk psychology. But does it justify EM? Even though this central feature of folk-psychological posits finds no analogues in one hot theoretical development in recent cognitive and computational neuroscience, there might be other aspects of cognition that folk psychology gets right. Within neurophilosophy, concluding that a cross-theoretic identity claim is true (e.g., folk psychological state F is identical to neural state N) or that an eliminativist claim is true (there is no such thing as folk psychological state F) depends on the nature of the intertheoretic reduction obtaining between the theories affiliated with the posits in question. But the underlying account of intertheoretic reduction recognizes a spectrum of possible reductions, ranging from relatively "smooth" through "significantly revisionary" to "extremely bumpy." Might the reduction of folk psychology and a "vectorial" neurobiology occupy the middle ground between "smooth" and "bumpy" intertheoretic reductions, and hence suggest a "revisionary" conclusion? The reduction of classical equilibrium thermodynamics to statistical mechanics to microphysics provides a potential analogy. John Bickle argues on empirical grounds that such an outcome is likely. He specifies conditions on "revisionary" reductions from historical examples and suggests that these conditions obtain between folk psychology and cognitive neuroscience as the latter develops. In particular, folk psychology appears to have gotten right the grossly-specified functional profile of many cognitive states, especially those closely related to sensory input and behavioural output. It also appears to get right the "intentionality" of many cognitive states - the object that the state is of or about - even though cognitive neuroscience eschews its implicit linguistic explanation of this feature.
Revisionary physicalism predicts significant conceptual change to folk psychological concepts, but denies total elimination of the caloric fluid-phlogiston variety.

The philosophy of science is another area where vector space interpretations of neural network activity patterns have had an impact. In the Introduction to his (1989) book, Paul Churchland asserts that it will soon be impossible to do serious work in the philosophy of science without drawing on empirical work in the brain and behavioural sciences. To justify this claim, he suggests a neurocomputational reformulation of key concepts from this area. At the heart is a neurocomputational account of the structure of scientific theories (1989, chapter 9). Problems with the orthodox "sets-of-sentences" view have been well-known for over three decades. Churchland advocates replacing the orthodox view with one inspired by the "vectorial" interpretation of neural network activity. Representations implemented in neural networks (as discussed above) compose a system that corresponds to important distinctions in the external environment, are not explicitly represented as such within the input corpus, and allow the trained network to respond to inputs in a fashion that continually reduces error. These are exactly the functions of theories. Churchland is bold in his assertion: an individual's theory-of-the-world is a specific point in that individual's error-synaptic weight vector space. It is a configuration of synaptic weights that partitions the individual's activation vector space into subdivisions that reduce future error messages to both familiar and novel inputs.

This reformulation invites an objection, however. Churchland boasts that his theory of theories is preferable to existing alternatives to the orthodox "sets-of-sentences" account - for example, the semantic view (Suppe, 1974; van Fraassen, 1980) - because his is closer to the "buzzing brains" that use theories. But as Bickle notes, neurocomputational models based on the mathematical resources described above are a long way into the realm of abstracta. Even now, they remain little more than novel (and suggestive) applications of the mathematics of quasi-linear dynamical systems to simplified schemata of brain circuitries. Neurophilosophers owe some account of identifications across ontological categories before the philosophy of science community will accept the claim that theories are points in high-dimensional state spaces implemented in biological neural networks. (There is an important methodological assumption lurking in this objection.)

Churchland's neurocomputational reformulation of scientific and epistemological concepts builds on this account of theories. He sketches neurocomputational accounts of the theory-ladenness of perception, the nature of concept unification, the virtues of theoretical simplicity, the nature of Kuhnian paradigms, the kinematics of conceptual change, the character of abduction, the nature of explanation, and even moral knowledge and epistemological normativity. Conceptual redeployment, for example, is the activation of an already-existing prototype representation - centerpoint or region of a partition of a high-dimensional vector space in a trained neural network - by a novel type of input pattern. Obviously, we can't here do justice to Churchland's many and varied attempts at reformulation. We urge the intrigued reader to examine his suggestions in their original form. But a word about philosophical methodology is in order. Churchland is not attempting "conceptual analysis" in anything resembling its traditional philosophical sense and neither, typically, are neurophilosophers. (This is why a discussion of neurophilosophical reformulation fits with a discussion of EM.) There are philosophers who take the discipline's ideal to be a relatively simple set of necessary and sufficient conditions, expressed in non-technical natural language, governing the application of important concepts (like justice, knowledge, theory, or explanation). These analyses should square, to the extent possible, with pre-theoretical usage. Ideally, they should preserve synonymy. Other philosophers view this ideal as sterile, misguided, and perhaps deeply mistaken about the underlying structure of human knowledge. Neurophilosophers tend to reside in the latter camp.
Those who dislike philosophical speculation about the promise and potential of nascent science in an effort to reformulate traditional philosophical concepts have probably already discovered that neurophilosophy is not for them. But the charge that neurocomputational reformulations of the sort Churchland attempts are "philosophically uninteresting" or "irrelevant" because they fail to provide "adequate analyses" of theory, explanation, and the like will fall on deaf ears among many contemporary philosophers, as well as their cognitive-scientific and neuroscientific friends. Before we leave the neurophilosophical applications of this theoretical development from recent cognitive/computational neuroscience, one more point of scientific detail is in order. The popularity of treating the neuron as the basic computational unit among neural modelers, as opposed to cognitive modelers, is declining rapidly. Compartmental modeling enables computational neuroscientists to mimic activity in and interactions between patches of neuronal membrane. This permits modelers to control and manipulate a variety of subcellular factors that determine action potentials per time unit (including the topology of membrane structure in individual neurons, variations in ion channels across membrane patches, field properties of post-synaptic potentials depending on the location of the synapse on the dendrite or soma). Modelers can "custom build" the neurons in their target circuitry without sacrificing the ability to study circuit properties of networks. For these reasons, few serious computational neuroscientists continue to work at a level that treats neurons as unstructured computational devices. But the above interpretative points still stand. With compartmental modeling, not only are simulated neural networks interpretable as vector-to-vector transformers. The neurons composing them are, too.

Philosophy of science and scientific epistemology are not the only areas where philosophers have lately urged the relevance of neuroscientific discoveries. Kathleen Akins (1996) argues that a "traditional" view of the senses underlies the variety of sophisticated "naturalistic" programs about intentionality. Current neuroscientific understanding of the mechanisms and coding strategies implemented by sensory receptors shows that this traditional view is mistaken. The traditional view holds that sensory systems are "veridical" in at least three ways. (1) Each signal in the system correlates with a small range of properties in the external (to the body) environment. (2) The structure of the relevant relations between the external properties to which the receptors are sensitive is preserved in the structure of the relations between the resulting sensory states. And (3) the sensory system reconstructs faithfully, without fictive additions or embellishments, the external events. Using recent neurobiological discoveries about response properties of thermal receptors in the skin as an illustration, Akins shows that sensory systems are "narcissistic" rather than "veridical." All three traditional assumptions are violated. These neurobiological details and their philosophical implications open novel questions for the philosophy of perception and for the appropriate foundations for naturalistic projects about intentionality. Armed with the known neurophysiology of sensory receptors, for example, our "philosophy of perception" or of "perceptual intentionality" will no longer focus on the search for correlations between states of sensory systems and "veridically detected" external properties. This traditional philosophical (and scientific!) project rests upon a mistaken "veridical" view of the senses.
Neuroscientific knowledge of sensory receptor activity also shows that sensory experience does not serve the naturalist well as a "simple paradigm case" of an intentional relation between representation and world. Once again, available scientific detail shows the naivety of some traditional philosophical projects.

Focussing on the anatomy and physiology of the pain transmission system, Valerie Hardcastle (1997) urges a similar negative implication for a popular methodological assumption. Pain experiences have long been philosophers' favourite cases for analysis and theorizing about conscious experience generally. Nevertheless, every position about pain experiences has been defended recently: eliminativist, a variety of objectivist views, relational views, and subjectivist views. Why so little agreement, despite agreement that pain experiences are the place to start an analysis or theory of consciousness? Hardcastle urges two answers. First, philosophers tend to be uninformed about the neuronal complexity of our pain transmission systems, and build their analyses or theories on the outcome of a single component of a multi-component system. Second, even those who understand some of the underlying neurobiology of pain tend to advocate gate-control theories. But the best existing gate-control theories are vague about the neural mechanisms of the gates. Hardcastle instead proposes a dissociable dual system of pain transmission, consisting of a pain sensory system closely analogous in its neurobiological implementation to other sensory systems, and a descending pain inhibitory system. She argues that this dual system is consistent with recent neuroscientific discoveries and accounts for all the pain phenomena that have tempted philosophers toward particular (but limited) theories of pain experience. The neurobiological uniqueness of the pain inhibitory system, contrasted with the mechanisms of other sensory modalities, renders pain processing atypical. In particular, the pain inhibitory system dissociates pain sensation from stimulation of nociceptors (pain receptors). Hardcastle concludes from the neurobiological uniqueness of pain transmission that pain experiences are atypical conscious events, and hence not a good place to start theorizing about or analyzing the general type.

Developing and defending theories of content is a central topic in current philosophy of mind. A common desideratum in this debate is a theory of cognitive representation consistent with a physical or naturalistic ontology. We'll here describe a few contributions neurophilosophers have made to this literature.

When one perceives or remembers that he is out of coffee, his brain state possesses intentionality or "aboutness." The percept or memory is about one's being out of coffee; it represents one as being out of coffee. The representational state has content. A psychosemantics seeks to explain what it is for a representational state to be about something: to provide an account of how states and events can have specific representational content. A physicalist psychosemantics seeks to do this using resources of the physical sciences exclusively. Neurophilosophers have contributed to two types of physicalist psychosemantics: the Functional Role approach and the Informational approach.

The core claim of a functional role semantics holds that a representation has its content in virtue of relations it bears to other representations. Its paradigm application is to concepts of truth-functional logic, like the conjunctive ‘and’ or disjunctive ‘or.’ A physical event instantiates the ‘and’ function just in case it maps two true inputs onto a single true output. Thus it is the relations an expression bears to others that give it the semantic content of ‘and.’ Proponents of functional role semantics propose similar analyses for the content of all representations (Block 1986). A physical event represents birds, for example, if it bears the right relations to events representing feathers and others representing beaks. By contrast, informational semantics ascribes content to a state depending upon the causal relations obtaining between the state and the object it represents. A physical state represents birds, for example, just in case an appropriate causal relation obtains between it and birds. At the heart of informational semantics is a causal account of information. Red spots on a face carry the information that one has measles because the red spots are caused by the measles virus. A common criticism of informational semantics holds that mere causal covariation is insufficient for representation, since information (in the causal sense) is by definition always veridical while representations can misrepresent. A popular solution to this challenge invokes a teleological analysis of ‘function.’ A brain state represents X by virtue of having the function of carrying information about X (Dretske 1988). These two approaches do not exhaust the popular options for a psychosemantics, but are the ones to which neurophilosophers have contributed.
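The ‘and’ example can be made concrete with a small sketch. The helper name and the truth-table check below are illustrative assumptions, not anything from the text; the point is the functional-role idea that any device whose input-output relations match the conjunction truth table thereby instantiates ‘and,’ whatever its physical makeup.

```python
# Functional role semantics for 'and': content is fixed entirely by the
# relations a state bears to others - here, by which inputs map to which
# outputs, regardless of how the mapping is physically realized.

def instantiates_and(device):
    """A device instantiates 'and' iff it matches the conjunction truth table."""
    table = {(True, True): True,
             (True, False): False,
             (False, True): False,
             (False, False): False}
    return all(device(a, b) == out for (a, b), out in table.items())

gate = lambda a, b: a and b    # one physical realization of conjunction
other = lambda a, b: a or b    # same inputs, different relational profile
```

Two physically different devices with the same input-output relations would both count as ‘and’ on this account, which is exactly the functional-role moral.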

Paul Churchland's allegiance to functional role semantics goes back to his earliest views about the semantics of terms in a language. In his (1979) book, he insists that the semantic identity (content) of a term derives from its place in the network of sentences of the entire language. The functional economies envisioned by early functional role semanticists were networks with nodes corresponding to the objects and properties denoted by expressions in a language. Thus one node, appropriately connected, might represent birds, another feathers, and another beaks. Activation of one of these would tend to spread to the others. As ‘connectionist’ network modeling developed, alternatives arose to this one-representation-per-node ‘localist’ approach. By the time Churchland provided a neuroscientific elaboration of functional role semantics for cognitive representations generally, he too had abandoned the ‘localist’ interpretation. Instead, he offered a ‘state-space semantics’.

We saw in the section just above how (vector) state spaces provide a natural interpretation for activity patterns in neural networks (biological and artificial). A state-space semantics for cognitive representations is a species of a functional role semantics because the individuation of a particular state depends upon the relations obtaining between it and other states. A representation is a point in an appropriate state space, and points (or subvolumes) in a space are individuated by their relations to other points (locations, geometrical proximity). Churchland illustrates a state-space semantics for neural states by appealing to sensory systems. One popular theory in sensory neuroscience of how the brain codes for sensory qualities (like colour) is the opponent process account. Churchland describes a three-dimensional activation vector state-space in which every colour perceivable by humans is represented as a point (or subvolume). Each dimension corresponds to activity rates in one of three classes of photoreceptors present in the human retina and their efferent paths: the red-green opponent pathway, yellow-blue opponent pathway, and black-white (contrast) opponent pathway. Photons striking the retina are transduced by the receptors, producing an activity rate in each of the segregated pathways. A represented colour is hence a triplet of activation frequency rates. Each dimension in that three-dimensional space will represent average frequency of action potentials in the axons of one class of ganglion cells projecting out of the retina. Each colour perceivable by humans will be a region of that space. For example, an orange stimulus produces a relatively low level of activity in both the red-green and yellow-blue opponent pathways (x-axis and y-axis, respectively), and middle-range activity in the black-white (contrast) opponent pathway (z-axis).
Pink stimuli, on the other hand, produce low activity in the red-green opponent pathway, middle-range activity in the yellow-blue opponent pathway, and high activity in the black-white (contrast) opponent pathway. The location of each colour in the space generates a ‘colour solid.’ Location on the solid and geometrical proximity between regions reflect structural similarities between the perceived colours. Human gustatory representations are points in a four-dimensional state space, with each dimension coding for activity rates generated by gustatory stimuli in each type of taste receptor (sweet, salty, sour, bitter) and their segregated efferent pathways. When implemented in a neural network with structural and hence computational resources as vast as the human brain, the state space approach to psychosemantics generates a theory of content for a huge number of cognitive states.
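A rough numerical sketch of the colour state space may help. The triplet values below are invented for illustration (the text gives only the qualitative levels "low," "middle," and "high"); the point is that each colour is a point in the three-dimensional opponent-pathway space, and perceptual similarity shows up as geometrical proximity.

```python
# Colours as points in a 3-D activation state space, with dimensions
# (red-green, yellow-blue, black-white) opponent-pathway activity rates
# on an assumed 0-1 scale.
import math

def distance(p, q):
    """Geometrical proximity between two points in the state space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

orange        = (0.2, 0.2, 0.5)    # low, low, middle (per the text)
pink          = (0.2, 0.5, 0.9)    # low, middle, high (per the text)
nearby_orange = (0.25, 0.2, 0.45)  # a slightly different orange stimulus

# Structurally similar colours occupy nearby regions of the colour solid.
closer = distance(orange, nearby_orange) < distance(orange, pink)
```

The gustatory case works the same way, with four dimensions (sweet, salty, sour, bitter) instead of three.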

Jerry Fodor and Ernest LePore raise an important challenge to Churchland's psychosemantics. Location in a state space alone seems insufficient to fix a state's representational content. Churchland never explains why a point in a three-dimensional state space represents a colour, as opposed to any other quality, object, or event that varies along three dimensions. Churchland's account achieves its explanatory power by the interpretation imposed on the dimensions. Fodor and LePore allege that Churchland never specifies how a dimension comes to represent, e.g., degree of saltiness, as opposed to yellow-blue wavelength opposition. One obvious answer appeals to the stimuli that form the ‘external’ inputs to the neural network in question. Then, for example, the individuating conditions on neural representations of colours are that opponent processing neurons receive input from a specific class of photoreceptors. The latter in turn have electromagnetic radiation (of a specific portion of the visible spectrum) as their activating stimuli. However, this appeal to ‘external’ stimuli as the ultimate individuating conditions for representational content makes the resulting approach a version of informational semantics. Is this approach consonant with other neurobiological details?

The neurobiological paradigm for informational semantics is the feature detector: one or more neurons that are (i) maximally responsive to a particular type of stimulus, and (ii) have the function of indicating the presence of that stimulus type. Examples of such stimulus-types for visual feature detectors include high-contrast edges, motion direction, and colours. A favourite feature detector among philosophers is the alleged fly detector in the frog. Lettvin et al. (1959) identified cells in the frog retina that responded maximally to small shapes moving across the visual field. The idea that these cells' activity functioned to detect flies rested upon knowledge of the frogs' diet. Using experimental techniques ranging from single-cell recording to sophisticated functional imaging, neuroscientists have recently discovered a host of neurons that are maximally responsive to a variety of stimuli. However, establishing condition (ii) on a feature detector is much more difficult. Even some paradigm examples have been called into question. David Hubel and Torsten Wiesel's (1962) Nobel Prize winning work establishing the receptive fields of neurons in striate cortex is often interpreted as revealing cells whose function is edge detection. However, Lehky and Sejnowski (1988) have challenged this interpretation. They trained an artificial neural network to distinguish the three-dimensional shape and orientation of an object from its two-dimensional shading pattern. Their network incorporates many features of visual neurophysiology. Nodes in the trained network turned out to be maximally responsive to edge contrasts, but did not appear to have the function of edge detection.

Kathleen Akins (1996) offers a different neurophilosophical challenge to informational semantics and its affiliated feature-detection view of sensory representation. We saw in the previous section how Akins argues that the physiology of thermoreceptors violates three necessary conditions on ‘veridical’ representation. From this fact she draws doubts about looking for feature-detecting neurons to ground a psychosemantics generally, including thought contents. Human thoughts about flies, for example, are sensitive to numerical distinctions between particular flies and the particular locations they can occupy. But the ends of frog nutrition are well served without a representational system sensitive to such ontological refinements. Whether a fly seen now is numerically identical to one seen a moment ago need not, and perhaps cannot, figure into the frog's feature detection repertoire. Akins' critique casts doubt on whether details of sensory transduction will scale up to provide an adequate unified psychosemantics. It also raises new questions for human intentionality. How do we get from activity patterns in "narcissistic" sensory receptors, keyed not to "objective" environmental features but rather only to effects of the stimuli on the patch of tissue innervated, to the human ontology replete with enduring objects with stable configurations of properties and relations, types and their tokens (as the "fly-thought" example presented above reveals), and the rest? And how did the development of a stable, rich ontology confer survival advantages to human ancestors?

Consciousness has reemerged as a topic in philosophy of mind and the cognitive and brain sciences over the past three decades. Instead of ignoring it, many physicalists now seek to explain it (Dennett, 1991). Here we focus exclusively on ways that neuroscientific discoveries have impacted philosophical debates about the nature of consciousness and its relation to physical mechanisms. Thomas Nagel argues that conscious experience is subjective, and thus permanently recalcitrant to objective scientific understanding. He invites us to ponder ‘what it is like to be a bat’ and urges the intuition that no amount of physical-scientific knowledge (including neuroscientific) supplies a complete answer. Nagel's intuition pump has generated extensive philosophical discussion. At least two well-known replies make direct appeal to neurophysiology. John Biro suggests that part of the intuition pumped by Nagel, that bat experience is substantially different from human experience, presupposes systematic relations between physiology and phenomenology. Kathleen Akins (1993a) delves deeper into existing knowledge of bat physiology and reports much that is pertinent to Nagel's question. She argues that many of the questions about bat subjectivity that we still consider open hinge on questions that remain unanswered about neuroscientific details. One example of the latter is the function of various cortical activity profiles in the active bat.

More recently philosopher David Chalmers (1996) has argued that any possible brain-process account of consciousness will leave open an ‘explanatory gap’ between the brain process and properties of the conscious experience. This is because no brain-process theory can answer the "hard" question: Why should that particular brain process give rise to conscious experience? We can always imagine ("conceive of") a universe populated by creatures having those brain processes but completely lacking conscious experience. A theory of consciousness requires an explanation of how and why some brain process causes consciousness replete with all the features we commonly experience. The fact that the hard question remains unanswered shows that we will probably never get a complete explanation of consciousness at the level of neural mechanism. Paul and Patricia Churchland have recently offered the following diagnosis and reply. Chalmers offers a conceptual argument, based on our ability to imagine creatures possessing brains like ours but wholly lacking in conscious experience. But the more one learns about how the brain produces conscious experience--and a literature is beginning to emerge (e.g., Gazzaniga, 1995) - the harder it becomes to imagine a universe consisting of creatures with brain processes like ours but lacking consciousness. This is not just bare assertion. The Churchlands appeal to some neurobiological detail. For example, Paul Churchland (1995) develops a neuroscientific account of consciousness based on recurrent connections between thalamic nuclei (particularly "diffusely projecting" nuclei like the intralaminar nuclei) and cortex. Churchland argues that the thalamocortical recurrency accounts for the selective features of consciousness, for the effects of short-term memory on conscious experience, for vivid dreaming during REM (rapid-eye movement) sleep, and other "core" features of conscious experience. 
In other words, the Churchlands are claiming that when one learns about activity patterns in these recurrent circuits, one can't "imagine" or "conceive of" this activity occurring without these core features of conscious experience. (Other than just mouthing the words, "I am now imagining activity in these circuits without selective attention/the effects of short-term memory/vivid dreaming . . . ")

A second focus of skeptical arguments about a complete neuroscientific explanation of consciousness is sensory qualia: the introspectable qualitative aspects of sensory experience, the features by which subjects discern similarities and differences among their experiences. The colours of visual sensations are a philosopher's favourite example. One famous puzzle about colour qualia is the alleged conceivability of spectral inversions. Many philosophers claim that it is conceptually possible (if perhaps physically impossible) for two humans not to differ neurophysiologically, while the colour that fire engines and tomatoes appear to have to one subject is the colour that grass and frogs appear to have to the other (and vice versa). A large amount of neuroscientifically-informed philosophy has addressed this question. A related area where neurophilosophical considerations have emerged concerns the metaphysics of colours themselves (rather than colour experiences). A longstanding philosophical dispute is whether colours are objective properties existing external to perceivers or rather identifiable with, or dependent upon, minds or nervous systems. Some recent work on this problem begins with characteristics of colour experiences: for example, that colour similarity judgments produce colour orderings that align on a circle. With this resource, one can seek mappings of phenomenology onto environmental or physiological regularities. Identifying colours with particular frequencies of electromagnetic radiation does not preserve the structure of the hue circle, whereas identifying colours with activity in opponent processing neurons does. Such a tidbit is not decisive for the colour objectivist-subjectivist debate, but it does convey the type of neurophilosophical work being done on traditional metaphysical issues beyond the philosophy of mind.
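The hue-circle point can be illustrated with a small sketch. In a two-dimensional opponent-process space (a red-green channel and a blue-yellow channel), every hue gets an angle, so the similarity ordering naturally closes into a circle, whereas a one-dimensional wavelength axis has two ends that never meet. The channel activations below are illustrative stand-ins, not measured neural responses.

```python
import math

# Illustrative opponent-channel activations (red-green, blue-yellow) for four hues.
opponent = {
    "red":    (1.0, 0.0),
    "yellow": (0.0, 1.0),
    "green":  (-1.0, 0.0),
    "blue":   (0.0, -1.0),
}

def hue_angle(rg, by):
    """Map a pair of opponent-channel activations to a position (in degrees)
    on the hue circle."""
    return math.degrees(math.atan2(by, rg)) % 360

for name, (rg, by) in opponent.items():
    print(f"{name}: {hue_angle(rg, by):.0f} degrees")
```

Because the representation is an angle, 'distance' between hues wraps around, matching circular similarity orderings; a wavelength scale, by contrast, places the spectral extremes maximally far apart.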

We saw in the discussion of Hardcastle (1997) two sections above that neurophilosophers have entered disputes about the nature and methodological import of pain experiences. Two decades earlier, Dan Dennett (1978) took up the question of whether it is possible to build a computer that feels pain. He compares and notes tension between neurophysiological discoveries and common sense intuitions about pain experience. He suspects that the incommensurability between scientific and common sense views is due to incoherence in the latter. His attitude is wait-and-see. But foreshadowing Churchland's reply to Chalmers, Dennett favors scientific investigations over conceivability-based philosophical arguments.

Neurological deficits have attracted philosophical interest. For thirty years philosophers have found implications for the unity of the self in experiments with commissurotomy patients. In carefully controlled experiments, commissurotomy patients display two dissociable seats of consciousness. Patricia Churchland scouts philosophical implications of a variety of neurological deficits. One deficit is blindsight. Some patients with lesions to primary visual cortex report being unable to see items in regions of their visual fields, yet perform far better than chance in forced guess trials about stimuli in those regions. A variety of scientific and philosophical interpretations have been offered. Ned Block (1988) worries that many of these conflate distinct notions of consciousness. He labels these notions ‘phenomenal consciousness’ (‘P-consciousness’) and ‘access consciousness’ (‘A-consciousness’). The former is the ‘what it is like’-ness of experience. The latter is the availability of representational content to self-initiated action and speech. Block argues that P-consciousness is not always representational whereas A-consciousness is. Dennett and Michael Tye are skeptical of non-representational analyses of consciousness in general. They provide accounts of blindsight that do not depend on Block's distinction.

Many other topics are worth neurophilosophical pursuit. We mentioned commissurotomy and the unity of consciousness and the self, which continues to generate discussion. Qualia beyond those of colour and pain have begun to attract neurophilosophical attention, as has self-consciousness. One of the first issues to arise in the ‘philosophy of neuroscience’ (before there was a recognized area) was the localization of cognitive functions to specific neural regions. Although the ‘localization’ approach had dubious origins in the phrenology of Gall and Spurzheim, and was challenged severely by Flourens throughout the early nineteenth century, it reemerged in the study of aphasia by Bouillaud, Auburtin, Broca, and Wernicke. These neurologists made careful studies (where possible) of linguistic deficits in their aphasic patients, followed by post mortem examination of their brains. Broca's initial study of twenty-two patients in the mid-nineteenth century confirmed that damage to the left cortical hemisphere was predominant, and that damage to the second and third frontal convolutions was necessary to produce speech production deficits. Although the anatomical coordinates Broca postulated for the ‘speech production centre’ do not correlate exactly with damage producing production deficits, both this area of frontal cortex and speech production deficits still bear his name (‘Broca's area’ and ‘Broca's aphasia’). Less than two decades later Carl Wernicke published evidence for a second language centre. This area is anatomically distinct from Broca's area, and damage to it produced a very different set of aphasic symptoms. The cortical area that still bears his name (‘Wernicke's area’) is located around the first and second convolutions in temporal cortex, and the aphasia that bears his name (‘Wernicke's aphasia’) involves deficits in language comprehension.
Wernicke's method, like Broca's, was based on lesion studies: a careful evaluation of the behavioural deficits followed by post mortem examination to find the sites of tissue damage and atrophy. Lesion studies suggesting more precise localization of specific linguistic functions remain a cornerstone of aphasia research to this day.

Lesion studies have also produced evidence for the localization of other cognitive functions: for example, sensory processing and certain types of learning and memory. However, localization arguments for these other functions invariably include studies using animal models. With an animal model, one can perform careful behavioural measures in highly controlled settings, then ablate specific areas of neural tissue (or use a variety of other techniques to block or enhance activity in these areas) and remeasure performance on the same behavioural tests. But since we lack an animal model for (human) language production and comprehension, this additional evidence isn't available to the neurologist or neurolinguist. This fact makes the study of language a paradigm case for evaluating the logic of the lesion/deficit method of inferring functional localization. Philosopher Barbara Von Eckardt (1978) attempts to make explicit the steps of reasoning involved in this common and historically important method. Her analysis begins with Robert Cummins' early analysis of functional explanation, but she extends it into a notion of structurally adequate functional analysis. These analyses break down a complex capacity C into its constituent capacities c1, c2, . . . cn, where the constituent capacities are consistent with the underlying structural details of the system. For example, human speech production (complex capacity C) results from formulating a speech intention, then selecting appropriate linguistic representations to capture the content of the speech intention, then formulating the motor commands to produce the appropriate sounds, then communicating these motor commands to the appropriate motor pathways (constituent capacities c1, c2, . . ., cn). A functional-localization hypothesis has the form: brain structure S in organism (type) O has constituent capacity ci, where ci is a function of some part of O. 
An example might be: Broca's area (S) in humans (O) formulates motor commands to produce the appropriate sounds (one of the constituent capacities ci). Such hypotheses specify aspects of the structural realization of a functional-component model. They are part of the theory of the neural realization of the functional model.
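Von Eckardt's schema lends itself to a compact structural sketch. The encoding below only illustrates the form of a functional-localization hypothesis (structure S, organism O, constituent capacity ci) against a functional analysis of a complex capacity C; the capacity names are simplified paraphrases of the speech-production example, not her exact formulations.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FunctionalAnalysis:
    """A complex capacity C analysed into constituent capacities c1..cn."""
    complex_capacity: str
    constituents: List[str]

@dataclass
class LocalizationHypothesis:
    """Brain structure S in organism (type) O has constituent capacity ci."""
    structure: str
    organism: str
    constituent_capacity: str

speech = FunctionalAnalysis(
    complex_capacity="speech production",
    constituents=[
        "formulate speech intention",
        "select linguistic representations",
        "formulate motor commands for sounds",
        "relay motor commands to motor pathways",
    ],
)

broca = LocalizationHypothesis(
    structure="Broca's area",
    organism="human",
    constituent_capacity="formulate motor commands for sounds",
)

# A localization hypothesis must cite a capacity drawn from the analysis of C,
# mirroring Von Eckardt's requirement of structural adequacy.
print(broca.constituent_capacity in speech.constituents)
```

The `in` check mirrors her adequacy requirement that the hypothesized constituent capacity actually figure in a structurally-adequate analysis of the complex capacity.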

Armed with these characterizations, Von Eckardt argues that inference to a functional-localization hypothesis proceeds in two steps. First, a functional deficit in a patient is hypothesized based on the abnormal behaviour the patient exhibits. Second, localization of function in normal brains is inferred on the basis of the functional deficit hypothesis plus the evidence about the site of brain damage. The structurally-adequate functional analysis of the capacity connects the pathological behaviour to the hypothesized functional deficit. This connection suggests four adequacy conditions on a functional deficit hypothesis. First, the pathological behaviour P (e.g., the speech deficits characteristic of Broca's aphasia) must result from failing to exercise some complex capacity C (human speech production). Second, there must be a structurally-adequate functional analysis of how people exercise capacity C that involves some constituent capacity ci (formulating motor commands to produce the appropriate sounds). Third, the operation of the steps described by the structurally-adequate functional analysis minus the operation of the component performing ci (Broca's area) must result in pathological behaviour P. Fourth, there must not be a better available explanation for why the patient does P. An argument to a functional deficit hypothesis on the basis of pathological behaviour is thus an instance of argument to the best available explanation. When postulating a deficit in a normal functional component provides the best available explanation of the pathological data, we are justified in drawing the inference.

Von Eckardt applies this analysis to a neurological case study involving a controversial reinterpretation of agnosia. Her philosophical explication of this important neurological method reveals that most challenges to localization arguments either argue only against the localization of a particular type of functional capacity or against generalizing from localization of function in one individual to all normal individuals. (She presents examples of each from the neurological literature.) Such challenges do not impugn the validity of standard arguments for functional localization from deficits. It does not follow that such arguments are unproblematic. But they face difficult factual and methodological problems, not logical ones. Furthermore, the analysis of these arguments as involving a type of functional analysis and inference to the best available explanation carries an important implication for the biological study of cognitive function. Functional analyses require functional theories, and structurally adequate functional analyses require checks imposed by the lower level sciences investigating the underlying physical mechanisms. Arguments to best available explanation are often hampered by a lack of theoretical imagination: the available explanations are often severely limited. We must seek theoretical inspiration from any level of theory and explanation. Hence making explicit the ‘logic’ of this common and historically important form of neurological explanation reveals the necessity of joint participation from all scientific levels, from cognitive psychology down to molecular neuroscience. Von Eckardt anticipated what came to be heralded as the ‘co-evolutionary research methodology,’ which remains a centerpiece of neurophilosophy to the present day.

Over the last two decades, evidence for localization of cognitive function has come increasingly from a new source: the development and refinement of neuroimaging techniques. The form of localization-of-function argument appears not to have changed from that employing lesion studies (as analysed by Von Eckardt). Instead, these imaging technologies resolve some of the methodological problems that plague lesion studies. For example, researchers do not need to wait until the patient dies, and in the meantime probably acquires additional brain damage, to find the lesion sites. Two functional imaging techniques are prominent: positron emission tomography, or PET, and functional magnetic resonance imaging, or fMRI. Although these measure different biological markers of functional activity, both now have a resolution down to around 1mm. As these techniques increase spatial and temporal resolution of functional markers and continue to be used with sophisticated behavioural methodologies, the possibility of localizing specific psychological functions to increasingly specific neural regions continues to grow.

What we now know about the cellular and molecular mechanisms of neural conductance and transmission is spectacular. The same evaluation holds for all levels of explanation and theory about the mind/brain: maps, networks, systems, and behaviour. This is a natural outcome of increasing scientific specialization. We develop the technology, the experimental techniques, and the theoretical frameworks within specific disciplines to push forward our understanding. Still, a crucial aspect of the total picture gets neglected: the relationship between the levels, the ‘glue’ that binds knowledge of neuron activity to subcellular and molecular mechanisms, network activity patterns to the activity of and connectivity between single neurons, and behaviour to network activity. This problem is especially glaring when we focus on the relationship between ‘cognitivist’ psychological theories, postulating information-bearing representations and processes operating over their contents, and the activity patterns in networks of neurons. Co-evolution between explanatory levels still seems more like a distant dream rather than an operative methodology. It is here that some neuroscientists appeal to ‘computational’ methods. If we examine the way that computational models function in more developed sciences (like physics), we find the resources of dynamical systems constantly employed. Global effects (such as large-scale meteorological patterns) are explained in terms of the interaction of ‘local’ lower-level physical phenomena, but only by dynamical, nonlinear, and often chaotic sequences and combinations. Addressing the interlocking levels of theory and explanation in the mind/brain using computational resources that have worked to bridge levels in more mature sciences might yield comparable results. 
This methodology is necessarily interdisciplinary, drawing on resources and researchers from a variety of levels, including higher levels like experimental psychology, ‘program-writing’ and ‘connectionist’ artificial intelligence, and philosophy of science.

However, the use of computational methods in neuroscience is not new. Hodgkin, Huxley, and Katz incorporated values of voltage-dependent potassium conductance they had measured experimentally in the squid giant axon into an equation from physics describing the time evolution of a first-order kinetic process. This equation enabled them to calculate best-fit curves for modelled conductance versus time data that reproduced the S-shaped (sigmoidal) function suggested by their experimental data. Using equations borrowed from physics, Rall (1959) developed the cable model of dendrites. This theory provided an account of how the various inputs from across the dendritic tree interact temporally and spatially to determine the input-output properties of single neurons. It remains influential today, and has been incorporated into the GENESIS software for programming neurally realistic networks. More recently, David Sparks and his colleagues have shown that a vector-averaging model of activity in neurons of the superior colliculi correctly predicts experimental results about the amplitude and direction of saccadic eye movements. Working with a more sophisticated mathematical model, Apostolos Georgopoulos and his colleagues have predicted direction and amplitude of hand and arm movements based on averaged activity of 224 cells in motor cortex. Their predictions have been borne out under a variety of experimental tests. We mention these particular studies only because we are familiar with them. We could multiply examples of the fruitful interaction of computational and experimental methods in neuroscience easily by one-hundred-fold. Many of these extend back before ‘computational neuroscience’ was a recognized research endeavour.
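The vector-averaging idea behind these movement predictions can be sketched in a few lines. The code below simulates a hypothetical population of cosine-tuned cells and recovers the movement direction from their firing rates via a population vector; the tuning parameters and random preferred directions are illustrative assumptions, not recorded data from the studies cited.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of cosine-tuned motor-cortex cells (224, echoing the
# Georgopoulos cell count mentioned above); parameters are illustrative.
n_cells = 224
preferred = rng.uniform(0, 2 * np.pi, n_cells)  # each cell's preferred direction

def firing_rates(movement_angle, baseline=10.0, gain=8.0):
    """Cosine tuning: a cell fires most when movement matches its preference."""
    return baseline + gain * np.cos(preferred - movement_angle)

def population_vector(rates, baseline=10.0):
    """Weight each preferred-direction unit vector by the cell's rate above
    baseline and sum; the resultant's angle predicts movement direction."""
    w = rates - baseline
    x = np.sum(w * np.cos(preferred))
    y = np.sum(w * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

actual = np.deg2rad(60.0)
predicted = np.rad2deg(population_vector(firing_rates(actual)))
print(predicted)  # close to 60, up to sampling noise in the preferred directions
```

With a few hundred cells the resultant vector tracks the actual direction closely, which is why averaged population activity can predict movement amplitude and direction despite each individual cell being broadly tuned.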

We've already seen one example of an account of neural representation and computation under active development in cognitive neuroscience: the vector transformation account. Other approaches using ‘cognitivist’ resources are also being pursued. Many of these projects draw upon ‘cognitivist’ characterizations of the phenomena to be explained. Many exploit ‘cognitivist’ experimental techniques and methodologies. Some even attempt to derive ‘cognitivist’ explanations from cell-biological processes (e.g., Hawkins and Kandel 1984). As Stephen Kosslyn puts it, cognitive neuroscientists employ the ‘information processing’ view of the mind characteristic of cognitivism without trying to separate it from theories of brain mechanisms. Such an endeavour calls for an interdisciplinary community willing to communicate the relevant portions of the mountain of detail gathered in individual disciplines with interested nonspecialists: not just people willing to confer with those working at related levels, but researchers trained in the methods and factual details of a variety of levels. This is a daunting requirement, but it does offer some hope for philosophers wishing to contribute to future neuroscience. Thinkers trained in both the ‘synoptic vision’ afforded by philosophy and the factual and experimental basis of genuine graduate-level science would be ideally equipped for this task. Recognition of this potential niche has been slow among graduate programs in philosophy, but there is some hope that a few programs are taking steps to fill it.

Defeated in two wars, Germany appeared to have invaded vast territories of the world’s mind, with Nietzsche himself as no mean conqueror. For his was the vision of things to come. Much, too much, would strike him as déjà vu: Yes, he had foreseen it; and he would understand, for the ‘Modern Mind’ speaks German - not always good German, but fluent German nonetheless. It was forced to learn the idiom of Karl Marx, and was delighted to be introduced to itself in the language of Sigmund Freud; taught by Ranke and later by Max Weber, it acquired its historical and sociological self-consciousness, moved out of its tidy Newtonian universe on the instruction of Einstein, and followed a design of Oswald Spengler’s in sending, from the depth of its spiritual depression, most ingeniously engineered objects higher than the moon. Whether it discovers, with Heidegger, the true habitation of its Existenz on the frontier boundaries of Nothing, or meditates, with Sartre and Camus, le Néant or the Absurd; whether - to pass to its less serious moods - it is nihilistically young and profitably angry in London or rebelliously debauched and Buddhistic in San Francisco - it is part of a story told by Nietzsche.

As for modern German literature and thought, it is hardly an exaggeration to say that they would not be what they are if Nietzsche had never lived. Name almost any poet, man of letters, philosopher, who wrote in German during the twentieth century and attained to stature and influence - Rilke, George, Kafka, Thomas Mann, Ernst Jünger, Musil, Benn, Heidegger, or Jaspers - and you name at the same time Friedrich Nietzsche. He is to them all - whether or not they know and acknowledge it (most of them do) - what St. Thomas Aquinas was to Dante: The categorical interpreter of a world that they contemplate poetically or philosophically without ever radically upsetting its Nietzschean structure.

He was convinced that it would take at least fifty years before a few men would understand what he had accomplished. He feared that even then his teaching would be misinterpreted and misapplied. “I am terrified,” he wrote, “by the thought of the sort of people who may one day invoke my authority.” Yet is this not, he added, the anguish of every great teacher? Still, the conviction that he was a great teacher never left him after he had passed through that period of sustained inspiration in which he wrote the first part of Zarathustra. After this, all his utterances convey the disquieting self-confidence and the terror of a man who has reached the culmination of that paradox that he embodies, and which has ever since cast its dangerous spell over some of the finest and some of the coarsest minds.

Are we, then, in a better position to probe Nietzsche’s mind and to avoid, as he anticipated some might not, the misunderstanding that he was merely concerned with religious, philosophical, or political controversies fashionable in his day? If this is a misinterpretation, can we put anything more valid in its place? What is the knowledge that he claims to have, raising him in his own opinion far above the contemporary level of thought? What the discovery that serves him as a lever to unhinge the whole fabric of traditional values?

It is the knowledge that God is dead.

The death of God he calls the greatest event in modern history and the cause of extreme danger. Note well the paradox contained in these words. He never said that there was no God, but that the Eternal had been vanquished by Time and that the immortal suffered death at the hands of mortals: “God is dead.” It is like a cry mingled of despair and triumph, reducing, by comparison, the whole story of atheism and agnosticism before and after him to the level of respectable mediocrity and making it sound like a collection of announcements by bankers who regret they are unable to invest in an unsafe proposition. Nietzsche, for the nineteenth century, brings to its perverse conclusion a line of religious thought and experience linked with the names of St. Paul, St. Augustine, Pascal, Kierkegaard, and Dostoevsky, minds for whom God has his clearly defined place, but to whom He came in order to challenge their natural being, making demands that appeared absurd in the light of natural reason. These men are of the family of Jacob: Having wrestled with God for His blessing, they ever after limp through life with the framework of Nature incurably out of joint. Nietzsche too believed that he prevailed against God in that struggle, and won a new name for himself, the name of Zarathustra. However, the words he spoke on his mountain to the angel of the Lord were: I will not let thee go, except thou curse me. Or, in words that Nietzsche did in fact speak: “I have on purpose devoted my life to exploring the whole contrast to a truly religious nature. I know the Devil and all his visions of God.”

“God is dead” - this is the very core of Nietzsche’s spiritual existence, and what follows is despair and hope in a new greatness of man, visions of catastrophe and glory, the icy brilliance of analytical reason, fathoming with affected irreverence depths hitherto hidden.

Atheism, by definition, is the denial of, or lack of belief in, the existence of a god or gods. The term comes from the Greek prefix ‘a-’, meaning “without,” and the Greek word ‘theos’, meaning “deity.” The denial of gods’ existence is also known as strong, or positive, atheism, whereas the lack of belief in a god is known as negative, or weak, atheism. Although atheism is often contrasted with agnosticism - the view that we cannot know whether a deity exists or not and should therefore suspend belief - negative atheism is in fact compatible with agnosticism.

About one-third of the world’s population adheres to a form of Christianity. Latin America has the largest number of Christians, most of whom are Roman Catholics. Islam is practised by over one-fifth of the world’s population, most of whom live in parts of Asia, particularly the Middle East.

Atheism has wide-ranging implications for the human condition. In the absence of belief in a god, ethical goals must be determined by secular aims and concerns, human beings must take full responsibility for their destiny, and death marks the end of a person’s existence. As of 1994 there were an estimated 240 million atheists around the world, comprising slightly more than 4 percent of the world’s population, including those who profess atheism, skepticism, disbelief, or irreligion. The estimate of nonbelievers increases significantly, to about 21 percent of the world’s population, if negative atheists are included.

From ancient times, people have at times used atheism as a term of abuse for religious positions they opposed. The first Christians were called atheists because they denied the existence of the Roman deities. Over time, several misunderstandings of atheism have arisen: that atheists are immoral, that morality cannot be justified without belief in God, and that life has no purpose without belief in God. Yet there is no evidence that atheists are any less moral than believers. Many systems of morality have been developed that do not presuppose the existence of a supernatural being. Moreover, the purpose of human life may be based on secular goals, such as the betterment of humankind.

In Western society the term atheism has been used more narrowly to refer to the denial of theism, in particular Judeo-Christian theism, which asserts the existence of an all-powerful, all-knowing, all-good personal being. This being created the universe, took an active interest in human concerns, and guides his creatures through divine disclosure known as revelation. Positive atheists reject this theistic God and the associated beliefs in an afterlife, a cosmic destiny, a supernatural origin of the universe, an immortal soul, the revealed nature of the Bible and the Qur'an (Koran), and a religious foundation for morality.

Theism, however, is not a characteristic of all religions. Some religions reject theism but are not entirely atheistic. Although the theistic tradition is fully developed in the Bhagavad-Gita, the sacred text of Hinduism, earlier Hindu writings known as the Upanishads teach that Brahman (ultimate reality) is impersonal. Positive atheists reject even the pantheistic aspects of Hinduism that equate God with the universe. Several other Eastern religions, including Theravada Buddhism and Jainism, are commonly believed to be atheistic, but this interpretation is not strictly correct. These religions do reject a theistic God believed to have created the universe, but they accept numerous lesser gods. At most, such religions are atheistic in the narrow sense of rejecting theism.

One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885), articulated German philosopher Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that “God is dead,” or that traditional morality was no longer relevant in people’s lives. In this passage, the sage Zarathustra comes down from the mountain where he has spent the last ten years alone to preach to the people.

In the Western intellectual world, nonbelief in the existence of God is a widespread phenomenon with a long and distinguished history. Philosophers of the ancient world such as Lucretius were nonbelievers. Even in the Middle Ages (5th to 15th centuries) there were currents of thought that questioned theist assumptions, including skepticism, the doctrine that true knowledge is impossible, and naturalism, the belief that only natural forces control the world. Several leading thinkers of the Enlightenment (1700-1789) were professed atheists, including French philosopher Baron d’Holbach and French encyclopedist Denis Diderot. Expressions of nonbelief also are found in classics of Western literature, including the writings of the English poets Percy Shelley and Lord Byron, the English novelist Thomas Hardy, the French philosophers Voltaire and Jean-Paul Sartre, the Russian author Ivan Turgenev, and the American writers Mark Twain and Upton Sinclair. In the 19th century the most articulate and best-known atheists and critics of religion were the German philosophers Ludwig Feuerbach, Karl Marx, Arthur Schopenhauer, and Friedrich Nietzsche. British philosopher Bertrand Russell, Austrian psychoanalyst Sigmund Freud, and Sartre are among the 20th century’s most influential atheists.

Nineteenth-century German philosopher Friedrich Nietzsche was an influential critic of religious systems, especially Christianity, which he believed chained its adherents to a stultifying herd morality. By declaring that “God is dead,” Nietzsche signified that traditional religious belief in God no longer played a central role in human experience. Nietzsche believed we would have to find secular justifications for morality to avoid nihilism - the absence of all belief.

Atheists justify their philosophical position in several different ways. Negative atheists attempt to establish their position by refuting typical theist arguments for the existence of God, such as the argument from first cause, the argument from design, the ontological argument, and the argument from religious experience. Other negative atheists assert that any statement about God is meaningless, because attributes such as all-knowing and all-powerful cannot be comprehended by the human mind. Positive atheists, on the other hand, defend their position by arguing that the concept of God is inconsistent. They question, for example, whether a God who is all-knowing can also be all-good and how a God who lacks bodily existence can be all-knowing.

Some positive atheists have maintained that the existence of evil makes the existence of God improbable. In particular, atheists assert that theism does not provide an adequate explanation for the existence of seemingly gratuitous evil, such as the suffering of innocent children. Theists commonly defend the existence of evil by claiming that God desires that human beings have the freedom to choose between good and evil, or that the purpose of evil is to build human character, such as the ability to persevere. Positive atheists counter that justifications for evil in terms of human free will leave unexplained why, for example, children suffer because of genetic diseases or abuse from adults. Arguments that God allows pain and suffering to build human character fail, in turn, to explain why there was suffering among animals before human beings evolved and why human character could not be developed with less suffering than occurs in the world. For atheists, a better explanation for the presence of evil in the world is that God does not exist.

Atheists have also criticized historical evidence used to support belief in the major theistic religions. For example, atheists have argued that a lack of evidence casts doubt on important doctrines of Christianity, such as the virgin birth and the resurrection of Jesus Christ. Because such events are said to represent miracles, atheists assert that extremely strong evidence is necessary to support their occurrence. According to atheists, the available evidence to support these alleged miracles - from Biblical, pagan, and Jewish sources - is weak, and therefore such claims should be rejected.

Atheism is primarily a reaction to, or a rejection of, religious belief, and thus does not determine other philosophical beliefs. Atheism has sometimes been associated with the philosophical ideas of materialism, which holds that only matter exists; communism, which asserts that religion impedes human progress; and rationalism, which emphasizes analytic reasoning over other sources of knowledge. However, there is no necessary connection between atheism and these positions. Some atheists have opposed communism and some have rejected materialism. Although nearly all contemporary materialists are atheists, the ancient Greek materialist Epicurus believed the gods were made of matter in the form of atoms. Rationalists such as French philosopher René Descartes have believed in God, whereas atheists such as Sartre are not considered to be rationalists. Atheism has also been associated with systems of thought that reject authority, such as anarchism, a political theory opposed to all forms of government, and existentialism, a philosophic movement that emphasizes absolute human freedom of choice; there is, however, no necessary connection between atheism and these positions. British analytic philosopher A.J. Ayer was an atheist who opposed existentialism, while Danish philosopher Søren Kierkegaard was an existentialist who accepted God. Marx was an atheist who rejected anarchism, while Russian novelist Leo Tolstoy, a Christian, embraced anarchism. Because atheism in a strict sense is merely a negation, it does not provide a comprehensive world-view. It is therefore not possible to presume other philosophical positions to be outgrowths of atheism.

Intellectual debate over the existence of God continues to be active, especially on college campuses, in religious discussion groups, and in electronic forums on the Internet. In contemporary philosophical thought, atheism has been defended by British philosopher Antony Flew, Australian philosopher John Mackie, and American philosopher Michael Martin, among others. Leading organizations of unbelief in the United States include American Atheists and the Committee for the Scientific Study of Religion.

Friedrich Nietzsche (1844-1900), German philosopher, poet, and classical philologist, who was one of the most provocative and influential thinkers of the 19th century. Nietzsche founded his morality on what he saw as the most basic human drive, the will to power. Nietzsche criticized Christianity and other philosophers’ moral systems as “slave moralities” because, in his view, they chained all members of society with universal rules of ethics. Nietzsche offered, in contrast, a “master morality” that prized the creative influence of powerful individuals who transcended the common rules of society.

Nietzsche studied classical philology at the universities of Bonn and Leipzig and was appointed the professor of classical philology at the University of Basel at the age of 24. Ill health (he was plagued throughout his life by poor eyesight and migraine headaches) forced his retirement in 1879. Ten years later he suffered a mental breakdown from which he never recovered. He died in Weimar in 1900.

In addition to the influence of Greek culture, particularly the philosophies of Plato and Aristotle, Nietzsche was influenced by German philosopher Arthur Schopenhauer, by the theory of evolution, and by his friendship with German composer Richard Wagner.

Nietzsche’s first major work, Die Geburt der Tragödie aus dem Geiste der Musik (The Birth of Tragedy), appeared in 1872. His most prolific period as an author was the 1880s. During that decade he wrote Also sprach Zarathustra (Parts 1-3, 1883-1884; Part 4, 1885; translated as Thus Spake Zarathustra), Jenseits von Gut und Böse (1886; Beyond Good and Evil), Zur Genealogie der Moral (1887; On the Genealogy of Morals), Der Antichrist (1888; The Antichrist), and Ecce Homo (completed 1888, published 1908). Nietzsche’s last major work, Der Wille zur Macht (The Will to Power), was published in 1901.

One of Nietzsche’s fundamental contentions was that traditional values (represented primarily by Christianity) had lost their power in the lives of individuals. He expressed this in his proclamation “God is dead.” He was convinced that traditional values represented a “slave morality,” a morality created by weak and resentful individuals who encouraged such behaviour as gentleness and kindness because the behaviour served their interests. Nietzsche claimed that new values could be created to replace the traditional ones, and his discussion of that possibility led to his concept of the overman or superman.

According to Nietzsche, the masses (whom he termed the herd or mob) conform to tradition, whereas his ideal overman is secure, independent, and highly individualistic. The overman feels deeply, but his passions are rationally controlled. Concentrating on the real world, rather than on the rewards of the next world promised by religion, the overman affirms life, including the suffering and pain that accompany human existence. Nietzsche’s overman is a creator of values, a creator of a “master morality” that reflects the strength and independence of one who is liberated from all values, except those that he deems valid.

Nietzsche maintained that all human behaviour is motivated by the will to power. In its positive sense, the will to power is not simply power over others, but the power over one’s self that is necessary for creativity. Such power is manifested in the overman's independence, creativity, and originality. Although Nietzsche explicitly denied that any overmen had yet arisen, he mentions several individuals who could serve as models. Among these models he lists Jesus, Greek philosopher Socrates, Florentine thinker Leonardo da Vinci, Italian artist Michelangelo, English playwright William Shakespeare, German author Johann Wolfgang von Goethe, Roman ruler Julius Caesar, and French emperor Napoleon I.

The concept of the overman has often been interpreted as one that postulates a master-slave society and has been identified with totalitarian philosophies. Many scholars deny the connection and attribute it to misinterpretation of Nietzsche's work.

An acclaimed poet, Nietzsche exerted much influence on German literature, as well as on French literature and theology. His concepts have been discussed and elaborated upon by such individuals as the German philosophers Karl Jaspers and Martin Heidegger, the German Jewish philosopher Martin Buber, the German American theologian Paul Tillich, and the French writers Albert Camus and Jean-Paul Sartre. After World War II (1939-1945), the American theologians Thomas J.J. Altizer and Paul Van Buren seized upon Nietzsche’s proclamation “God is dead” in their attempt to make Christianity relevant to its believers in the 1960s and 1970s.

Nietzsche is openly pessimistic about the possibility of knowledge and truth: we know (or believe or imagine) just as much as may be useful in the interests of the human herd, the species; and even what is here called ‘utility’ is ultimately also a mere belief, something imaginary and perhaps precisely that most calamitous stupidity of which we shall perish some day.

This position is very radical. Nietzsche does not simply deny that knowledge, construed as the adequate representation of the world by the intellect, exists. He also refuses the pragmatist identification of knowledge and truth with usefulness: he writes that we think we know what we think is useful, and that we can be quite wrong about the latter.

Nietzsche’s view, his ‘perspectivism’, depends on his claim that there is no sensible conception of a world independent of human interpretation, a world to which interpretations would correspond if they were to constitute knowledge. He sums up this highly controversial position in The Will to Power: facts are precisely what there is not, only interpretations.

It is often claimed that perspectivism is self-undermining: if the thesis that all views are interpretations is true, then, it is argued, there is at least one view that is not an interpretation. If, on the other hand, the thesis is itself an interpretation, then there is no reason to believe that it is true, and it follows again that not every view is an interpretation.

Nevertheless, this refutation assumes that if perspectivism itself is an interpretation, then it is wrong. That is not the case: to call any view, including perspectivism, an interpretation is to say that it can be wrong, which is true of all views, and that is not a sufficient refutation. To show that perspectivism is actually false, it is necessary to produce another view superior to it on specific epistemological grounds.

Perspectivism does not deny that particular views can be true. Like some versions of contemporary anti-realism, it attributes to specific approaches truth in relation to facts specified internally by those approaches themselves. Nonetheless, it refuses to envisage a single independent set of facts to be accounted for by all theories. Thus, although Nietzsche grants the truth of specific scientific theories, he denies that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’: neither the facts science addresses nor the methods it employs are privileged. Scientific theories serve the purposes for which they have been devised, but these have no priority over the many other purposes of human life.

The existence of many purposes and needs relative to which the value of theories is established - another crucial element of perspectivism - is sometimes thought to imply a lawless relativism according to which no standards for evaluating purposes and theories can be devised. This is correct only in that Nietzsche denies the existence of a single set of standards for determining epistemic value once and for all. He holds, however, that specific views can be compared with and evaluated in relation to one another. The ability to use criteria acceptable in particular circumstances does not presuppose the existence of criteria applicable in all. Agreement is therefore not always possible, since individuals may sometimes differ over the most fundamental issues dividing them.

Nietzsche would not be troubled by this fact, which his opponents also have to confront, only, as he would argue, to suppress it by insisting on the hope that all disagreements are in principle eliminable even if our practice falls woefully short of the ideal. Nietzsche abandons that ideal. He considers irresoluble disagreement an essential part of human life.

Since scientists during the nineteenth century were preoccupied with uncovering the workings of external reality, and virtually nothing was known about the physical substrates of human consciousness, the business of examining the dynamics and structure of mind became the province of ‘social scientists’ and ‘humanists’. Adolphe Quételet proposed a ‘social physics’ that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was quite inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanistic social reality.

More formal European philosophers, such as Immanuel Kant, sought to reconcile representations of external reality in mind with the motions of matter based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Peirce, William James, and John Dewey. All these thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each was obligated to conclude that the realm of the mental exists only in the subjective reality of the individual.

The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the ‘death of God’ philosopher Friedrich Nietzsche. After declaring that God and ‘divine will’ did not exist, Nietzsche reified the ‘existence’ of consciousness in the domain of subjectivity as the ground for individual ‘will’ and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth’. The problem, claimed Nietzsche, is that earlier versions of the ‘will to truth’ disguise the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will’.

Nietzsche’s emotionally charged defence of intellectual freedom and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved terribly influential on twentieth-century thought. Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, attempts by Edmund Husserl, a philosopher trained in higher mathematics and physics, to resolve this crisis resulted in a view of the character of human consciousness that closely resembled that of Nietzsche.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism: the deconstructionists Jacques Lacan, Roland Barthes, Michel Foucault, and Jacques Derrida. This direct line of linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origins of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form.

Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Descartes’ compatriot Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that ‘Liberty, Equality, Fraternity’ were the guiding principles of this consciousness. Rousseau also deified the idea of the ‘general will’ of the people to achieve these goals and declared that those who did not conform to this will were social deviants.

The Enlightenment idea of deism, which imagined the universe as a clockwork and God as the clockmaker, provided grounds for believing in a divine agency at the moment of creation. It also implied, however, that all the creative forces of the universe were exhausted at origins, that the physical substrates of mind were subject to the same natural laws as matter, and that the only means of mediating the gap between mind and matter was pure reason. Traditional Judeo-Christian theism, which had previously been based on both reason and revelation, responded to the challenge of deism by debasing rationality as a test of faith and embracing the idea that the truth of spiritual reality can be known only through divine revelation. This engendered a conflict between reason and revelation that persists to this day. It also laid the foundation for the fierce competition between the mega-narratives of science and religion as frame tales for mediating the relationship between mind and matter and the manner in which the character of each should ultimately be defined.

The most fundamental aspect of this intellectual tradition is the assumption that there is a fundamental division between the material and the immaterial world, or between the realm of matter and the realm of pure mind and spirit. The metaphysical framework based on this assumption is known as ontological dualism. As the word dual implies, the framework is predicated on an ontology, or a conception of the nature of God or Being, that assumes reality has two distinct and separable dimensions. The concept of Being as continuous, immutable, and having a prior or separate existence from the world of change dates from the ancient Greek philosopher Parmenides. The same qualities were associated with the God of the Judeo-Christian tradition, and they were considerably amplified by the role played in theology by Platonic and Neoplatonic philosophy.

The role of seventeenth-century metaphysics is also apparent in metaphysical presuppositions about matter embedded in the classical equations of motion. These presuppositions can be briefly defined as follows: (1) the physical world is made up of inert and changeless matter, and this matter changes only in terms of location in space; (2) the behaviour of matter mirrors physical theory and is inherently mathematical; (3) matter as the unchanging unit of physical reality can be exhaustively understood by mechanics, or by the applied mathematics of motion; and (4) the mind of the observer is separate from the observed system of matter, and the ontological bridge between the two is physical law and theory.

Once again, these presuppositions have a metaphysical basis because they require the assumption that the full and certain truths about the physical world are revealed in a mathematical structure governed by physical laws, which have a prior or separate existence from this world. While Copernicus, Galileo, Kepler, Descartes, and Newton assumed that the metaphysical or ontological foundation for these laws was the perfect mind of God, that idea came to be regarded, even in the eighteenth century, as somewhat unnecessary. What would endure in an increasingly disguised form was the assumption of ontological dualism. This assumption, which remains alive and well in debates about scientific epistemology, allowed the truths of mathematical physics to be regarded as having a separate and immutable existence outside the world of change.

As this view of the truths of nature was extended in the nineteenth century to mathematical descriptions of phenomena like heat, light, electricity, and magnetism, Laplace’s assumptions about the actual character of scientific truths seemed quite correct. This progress suggested that if we could remove all thoughts about the ‘nature of’ or the ‘source of’ phenomena, the pursuit of strictly quantitative concepts would bring us to a complete description of all aspects of physical reality. Subsequently, figures like Mach, Kirchhoff, Hertz, and Poincaré developed a program for the study of nature that was quite different from that of the original creators of classical physics.

The seventeenth-century view of physics as a philosophy of nature, or natural philosophy, was displaced by the view of physics as an autonomous science that was ‘the science of nature’. This view, which was premised on the doctrine of positivism, promised to subsume all of nature under a mathematical analysis of entities in motion and claimed that the true understanding of nature was revealed only in the mathematical description. Since the doctrine of positivism assumes that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallows the prospect that the vision of physical reality revealed in physical theory can have any other meaning. The irony in the history of science is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.

Kant was to argue that the earlier assumption - that our knowledge of the world, as represented in mathematical physics, is wholly determined by the behaviour of physical reality - could well be false. Perhaps, he said, the reverse was true: the objects of nature conform to our knowledge of nature. The relevance of the Kantian position was later affirmed by the leader of the Berlin school of mathematics, Karl Weierstrass, who came to a conclusion that would also be adopted by Einstein - that mathematics is a pure creation of the human mind.

A complete history of the debate over the epistemological foundations of mathematical physics should probably begin with the discovery of irrational numbers by the followers of Pythagoras and the paradoxes of Zeno and Gottfried Leibniz. But since we are more concerned with the epistemological crisis of the late nineteenth century, let us begin with the set theory developed by the German mathematician and logician Georg Cantor. From 1878 to 1897, Cantor created a theory of abstract sets of entities that eventually became a mathematical discipline. A set, as he defined it, is a collection of definite and distinguishable objects in thought or perception conceived as a whole.

Cantor attempted to prove that the process of counting and the definition of the integers could be placed on a solid mathematical foundation. His method was to place the elements of one set into one-to-one correspondence with those of another. In the case of the integers, Cantor showed that each integer (1, 2, 3, . . . n) could be paired with an even integer (2, 4, 6, . . . 2n), and, therefore, that the set of all integers is equal in size to the set of all even integers.
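Cantor's pairing can be sketched concretely in a few lines of code. The following Python snippet is an illustrative aside (not part of the original text): it pairs each positive integer n with the even integer 2n over a finite prefix and checks that the correspondence is one-to-one, with no even number in the range left unmatched - which is what it means for two sets to have the same size.

```python
# Cantor's correspondence: pair each positive integer n with the even integer 2n.
def pair(n):
    return 2 * n

integers = list(range(1, 11))          # a finite prefix of 1, 2, 3, ...
evens = [pair(n) for n in integers]    # the matched even integers

# The pairing is one-to-one: distinct integers map to distinct even integers...
assert len(set(evens)) == len(integers)
# ...and every even number up to 2*10 is reached, so none is "left over".
assert evens == list(range(2, 21, 2))

print(list(zip(integers, evens)))
```

The same idea extends, as Cantor argued, to the infinite case: since the rule n ↦ 2n never runs out, the whole set of integers is matched exhaustively with the whole set of even integers.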

More formidably, Cantor discovered that some infinite sets were larger than others and that infinite sets formed a hierarchy of ever greater infinities. After this discovery, attempts to save the classical view of the logical foundations and internal consistency of mathematical systems failed, and it soon became obvious that a major crack had appeared in the seemingly solid foundations of number and mathematics. Meanwhile, an impressive number of mathematicians began to see that everything from functional analysis to the theory of real numbers depended on the problematic character of number itself.

In 1886, Nietzsche was delighted to learn that the classical view of mathematics as a logically consistent and self-contained system might be undermined. His immediate and unwarranted conclusion was that all of logic and the whole of mathematics were nothing more than fictions perpetuated by those who exercised their will to power. With his characteristic sense of certainty, Nietzsche proclaimed: “Without accepting the fictions of logic, without measuring reality against the purely invented world of the unconditional and self-identical, without a constant falsification of the world by means of numbers, man could not live.”

Many writers, along with a few well-known new-age gurus, have played fast and loose with such ideas, grounding the mental in some vague sense of cosmic consciousness. When books of this kind are erroneously placed in the new-age section of a commercial bookstore and purchased by readers interested in new-age literature, those readers will be quite disappointed.

Research in neuroscience has shown that language processing is a staggeringly complex phenomenon that places incredible demands on memory and learning. Language functions extend, for example, into all major lobes of the neocortex: auditory input is associated with the temporal area; tactile input is associated with the parietal area; and attention, working memory, and planning are associated with the frontal cortex of the left, or dominant, hemisphere. The left prefrontal region is associated with verb and noun production tasks and with the retrieval of words representing action. Broca’s area, next to the mouth-tongue region of the motor cortex, is associated with vocalization in word formation, and Wernicke’s area, by the auditory cortex, is associated with sound analysis in the sequencing of words.

Lower brain regions, like the cerebellum, have also evolved in our species to help in language processing. Until recently, the cerebellum was thought to be exclusively involved with automatic or preprogrammed movements, such as throwing a ball, jumping over a high hurdle, or playing a musical instrument. Imaging studies in neuroscience suggest, however, that the cerebellum is also activated during speech, most strongly when the subject is making difficult word associations. This suggests that the cerebellum assists in such associations by providing access to automatic word sequences and by augmenting rapid shifts in attention.

Critically important to the evolution of enhanced language skills is that behavioural adaptations preceded and situated the biological changes. This represents a reversal of the usual course of evolution, where biological change precedes behavioural adaptation. When the first hominids began to use stone tools, they probably did so in a very haphazard fashion, drawing on their flexible ape-like learning abilities. Still, the use of this technology over time opened a new ecological niche where selective pressures occasioned new adaptations. As tool use became more indispensable for obtaining food and organizing social behaviours, mutations that enhanced the use of tools probably functioned as a principal source of selection for both bodies and brains.

The first stone choppers appear in the fossil record about 2.5 million years ago, and they appear to have been fabricated with a few sharp blows of stone on stone. These primitive tools, which were hand-held and probably used to cut flesh and to chip bone to expose the marrow, were likely created by Homo habilis - the first large-brained hominid. Stone tool making is obviously a skill passed on from one generation to the next by learning, as opposed to a physical trait passed on genetically. After these tools became critical to survival, this introduced selection for learning abilities that did not exist for other species. Although the early tool makers may have had brains roughly comparable to those of modern apes, they were already beginning the process of becoming adapted for symbol learning.

The first symbolic representations were probably associated with social adaptations that were quite fragile, and any support that could reinforce these adaptations in the interest of survival would have been favoured by evolution. The expansion of the forebrain in Homo habilis, particularly the prefrontal cortex, was one of the core adaptations. This adaptation was enhanced over time by increased connectivity to brain regions involved in language processing.

Imagining why incremental improvements in symbolic representations provided a selective advantage is easy. Symbolic communication probably enhanced cooperation in the relationship of mothers to infants, allowed foraging techniques to be more easily learned, served as the basis for better coordinating scavenging and hunting activities, and generally improved the prospect of attracting a mate. As the list of domains in which symbolic communication was introduced became longer over time, this probably resulted in new selective pressures that served to make this communication more elaborate. As more functions became dependent on this communication, those who failed at symbol learning or could use symbols only awkwardly were less likely to pass on their genes to subsequent generations.

The crude language of the earliest symbol users must have consisted largely of gestures and nonsymbolic vocalizations. Their spoken language probably became gradually more independent of gesture and developed into an increasingly elaborate system.

The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. The subject is thus faced with the idea of a perceivable, objective spatial world that causes his perceptions, a world he comes to represent subjectively. His perceptions change as he changes position within the world, against the background of the more or less stable way the world is. The idea that there is an objective world goes together with the idea that the subject is somewhere within it, and where he is is given by what he can perceive.

Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. And it is now clear that language processing is not accomplished by stand-alone or unitary modules, as if separate modules had evolved one by one and were eventually wired together on some neural circuit board.

While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. And Darwinian evolution can also explain why selective pressures in this new ecological niche favoured preadaptive changes required for symbolic communication. All the same, this communication resulted in an increasingly complex and densely structured social behaviour. Social evolution began to take precedence over physical evolution in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.

This communication was based on symbolic vocalizations that required the evolution of neural mechanisms and processes that did not evolve in any other species. It marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.

If the emergent reality in this mental realm cannot be reduced to, or entirely explained in terms of, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete understanding of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. And no scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actual experience of that thought or feeling as an emergent aspect of global brain function.

If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. And while neither mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.

If we also consider the evolution of biological reality, the movement toward a more complex order is likewise associated with the emergence of new wholes that are greater than the sum of their parts. Even the entire biosphere is a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complicated and complex systems, marked by the appearance of a new and profound complementarity in the relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. But it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.

The scientific implications of the relationship between parts (quanta) and the indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When we factor this understanding of the relationship between parts and wholes in physics and biology into our world-view, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.

All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of a human observer or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear fairly self-evident in logical and philosophical terms. It is also not necessary to attribute any extra-scientific properties to the whole to understand and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. We are careful, in this regard, to distinguish between what can be “proven” in scientific terms and what can be reasonably “inferred” in philosophical terms based on the scientific evidence.

Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, have normally had expertise on only one side of a two-culture divide. Perhaps more important, many of the potential threats to the human future - such as environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation - can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason: the implications of the confusing new fact of nature called non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent is not to suggest that what is most important about this background can be understood in its absence. Those who do not wish to struggle with this relatively small amount of background material should feel free to skip over it. But the material is not so challenging as it may seem, and the hope is that it will provide a common ground for understanding - a ground on which we may meet in an effort to close the circle and glimpse the profound mystery within which the universe holds together.

Based on what we now know about the evolution of human language abilities, however, it seems clear that our real or actualized self is not imprisoned in our minds. It is implicitly a part of the larger whole of biological life; it observes its existence from embedded relations to this whole, and constructs its reality based on evolved mechanisms that exist in all human brains. This suggests that any sense of the “otherness” of self and world is an illusion, one that disguises the actual relations between the part and the whole of which it is a characterization. The self is related, as part to whole, to the temporal unfolding of biological reality. A proper definition of this whole must include the unbroken evolution of all life in the cosmos, from the first self-replicating molecule that was the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality from which self-regulating properties emerge, properties of the whole that sustain the existence of the parts.

Some developments in the description of physical reality, founded on increasingly complicated coordinate systems extended out of ordinary language, cannot be separated from metaphysical concerns. In the history of mathematical physics, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed to the first scientific revolution of the seventeenth century. This history allows us to better understand how the classical paradigm in physics resulted in the stark Cartesian division between mind and world that became one of the most characteristic features of Western thought. This is not, however, another strident and ill-mannered diatribe against our misunderstandings; it draws instead on the relationship between undivided wholeness and the epistemological foundations of physical theory.

Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometry and numerical relationships. We imagine that the seeds of the scientific imagination were planted in ancient Greece, rather than in Chinese or Babylonian culture, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge and made that knowledge more culturally accessible. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigations. But it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.

The Greek philosophers we now recognize as the originators of scientific thought were mystics who probably perceived their world as replete with spiritual agencies and forces. The Greek religious heritage made it possible for these thinkers to attempt to coordinate diverse physical events within a framework of immaterial and unifying ideas. The fundamental assumption that there is a pervasive, underlying substance out of which everything emerges and into which everything returns is attributed to Thales of Miletos. Thales apparently came to this conclusion out of the belief that the world was full of gods, and his unifying substance, water, was similarly charged with spiritual presence. Religion in this instance served the interests of science because it allowed the Greek philosophers to view “essences” underlying and unifying physical reality as if they were “substances.”

The history of science grandly testifies to the manner in which scientific objectivity results in physical theories that must be assimilated into “customary points of view and forms of perception.” The framers of classical physics derived, like the rest of us, their “customary points of view and forms of perception” from macro-level visualizable experience. Thus, the descriptive apparatus of visualizable experience became reflected in the classical descriptive categories.

A major discontinuity appears, however, as we move from a descriptive apparatus dominated by the character of our visualizable experience to a complete description of physical reality in relativistic and quantum physics. The actual character of physical reality in modern physics lies largely outside the range of visualizable experience. Einstein was acutely aware of this discontinuity: “We have forgotten what features of the world of experience caused us to frame pre-scientific concepts, and we have great difficulty in representing the world of experience to ourselves without the spectacles of the old-established conceptual interpretation. There is the further difficulty that our language is compelled to work with words that are inseparably connected with those primitive concepts.”

It is time for the religious imagination and the religious experience to engage the complementary truths of science, filling what is silence with meaning. This does not mean, however, that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require a commitment to any ontology, and is in no way diminished by the lack of one. And one is free to recognize a basis for an exchange between science and religion, just as one is free to deny that this basis exists - there is nothing in our current scientific world-view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conceptions of the nature of God or Being. The question of belief in ontology remains what it has always been - a question - and the physical universe on the most basic level remains what it has always been - a riddle. And the ultimate answer to the question and the ultimate meaning of the riddle are, and will probably always be, a matter of personal choice and conviction; no conclusive evidence exists that could answer the question or resolve the riddle once and for all.
And if the answer were ever found, we might realize that it belongs to no man, for once the riddle is solved what remains speaks only of an overflowing Nothingness - and Nothingness here means more than nothingness: an unchanging endlessness that gives only of itself.

Our frame of reference is, for the most part, the classical conception of the relationship between mind and world, with its defining features and fundamental preoccupations. There is certainly nothing new in the suggestion that the contemporary scientific world-view legitimates an alternate conception of that relationship. The essential point is that consciousness remains at the centre of our study.

But at the end of this sometimes laborious journey lies a conclusion that should make the trip very worthwhile: there is no basis in contemporary physics or biology for believing in the stark Cartesian division between mind and world that some have rather aptly described as “the disease of the Western mind.”

Descartes's fundamental explorations included questions about knowledge and intuitive certainty, and his aim was to provide a unified framework for understanding the universe. He believed that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent analytic geometry.

A scientific understanding of these ideas could be derived, declared Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. In classical physics, external reality consisted of inert and inanimate matter moving according to wholly deterministic natural laws, and collections of discrete atomized parts made up wholes. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. But in one very important respect, it also made the first scientific revolution possible. Copernicus, Galileo, Kepler, and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God and that doing physics was a form of communion with these ideas.

The tragedy of the Western mind is a direct consequence of the stark Cartesian division between mind and world. This is the tragedy of the modern mind which “solved the riddle of the universe,” but only to replace it by another riddle: the riddle of itself. We discover the “certain principles of physical reality,” said Descartes, “not by the prejudices of the senses, but by rational analysis, which thus possess so great evidence that we cannot doubt of their truth.” Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.

Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith - God constructed the world, said Descartes, according to the mathematical ideas that our minds could uncover in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally “revealed” truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what is termed the “hidden ontology of classical epistemology.” Descartes lingers in the widespread conviction that science does not provide a “place for man” or for all that we know as distinctly human in subjective reality.

The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, James, and, indeed, most of the major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant's work.

A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, indeed is not made out of matter at all). It starts like this: When I consider the mind, that is to say, myself insofar as I am only a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.

Here is another, more elaborate argument based on unified consciousness. The conclusion will be that any system of components acting in concert could never achieve unified consciousness. William James' well-known version of the argument starts as follows: Take a sentence of a dozen words, take twelve men, and tell to each a word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.

James generalizes this observation to all conscious states. To get dualism out of this, we need to add a premise: that if the mind were made out of matter, conscious states would have to be distributed over some group of components in some relevant way. The thought experiment is meant to show that conscious states cannot be so distributed. Therefore, the conscious mind is not made out of matter. Call the argument that James is using the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first, this argument goes all the way back to Descartes. Versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).

Kant did not think that we could uncover anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material do not work, and he set out to do just that in the chapter of the Critique of Pure Reason (1781) on the Paralogisms of Pure Reason (paralogisms are faulty inferences about the nature of the mind). The Unity Argument is the target of a major part of that chapter; if Kant was going to show that we cannot know what the mind is like, he had to dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant's argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, though no philosopher has ever met this challenge of Kant's and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be a bit difficult for us to appreciate this now, but the idea that no system of material components acting in concert could achieve unified consciousness had a strong intuitive appeal for a long time.

The unity of consciousness was in addition central to one of Kant's own famous arguments, his ‘transcendental deduction of the categories’. In this argument, boiled down to its essentials, Kant claims that to tie various objects of experience together into a single unified conscious representation of the world, something that he simply assumed that we could do, we must be able to apply certain concepts to the items in question. In particular, we have to apply concepts from each of four fundamental categories of concept: quantitative, qualitative, relational, and what he called ‘modal’ concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence status is represented in an experience.

It was relational conceptual representation that most interested Kant and of relational concepts, he thought the concept of cause-and-effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause and effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on "the secure path of a science.” The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.

Although the unity of consciousness had been at the centre of pre-20th century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to the notion. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being viewed as either a myth or something that we cannot and do not need to study. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: Consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states - in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of ‘phenomenology’ into a respectable theory.

The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.

To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists speak of access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness as distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception, . . . and on and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.

It is very natural to think of self-consciousness as a cognitive state or, more accurately, as a set of cognitive states. Self-knowledge is an example of such a cognitive state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know of many properties and much of what is happening to me, at both physical and mental levels. I also know things about my past, things I have done and people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future - to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of individual that I ought to be? This is my ability to think about myself. Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects.

When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.

And if there is a straightforward explanation of what makes self-conscious thoughts immune to error through misidentification, then it seems fair to say that the problem of self-consciousness has been dissolved, at least as much as solved.

This proposed account would be on a par with other noted examples, such as the redundancy theory of truth. That is to say, the redundancy theory, or the deflationary view of truth, claims that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that ‘it is true that p’ says no more nor less than ‘p’ (hence, “redundancy”), and (2) that in less direct contexts, such as ‘everything he said was true’ or ‘all logical consequences of true propositions are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the latter translates as ‘(∀p, q)((p & (p → q)) → q)’, where there is no use of a notion of truth.

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’ or ‘truth is a norm governing discourse’. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be the case that whenever science holds that ‘p’, then ‘p’; discourse is to be regulated by the principle that it is wrong to assert ‘p’ when not-p.
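The deflationary translations just described can be set out schematically. The following is only a sketch: quantification into sentence position is assumed, and the predicate ‘Said’ and the constant ‘a’ (for the speaker) are illustrative devices, not part of any standard notation.

```latex
% Equivalence schema: 'it is true that p' says no more nor less than 'p'
T(\ulcorner p \urcorner) \leftrightarrow p

% 'Everything he said was true', with no truth predicate remaining
\forall p \, (\mathrm{Said}(a, p) \rightarrow p)

% 'All logical consequences of true propositions are true'
\forall p \, \forall q \, ((p \land (p \rightarrow q)) \rightarrow q)
```

On this reading, the truth predicate earns its keep only as a generalizing device: each schema eliminates ‘true’ in favour of quantification over the sentences themselves.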

Confronted with the range of putatively self-conscious cognitive states, one might assume that there is a single ability that is presupposed by them all. This is my ability to think about myself: I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding. These are all ways of thinking about myself.

Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency. This involves concepts and descriptions that can apply equally to me and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects. This is precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain “I”-thoughts.

What is an “I”-thought? Obviously, an “I”-thought is a thought that involves self-reference: I can think an “I”-thought only by thinking about myself. Equally obviously, though, this cannot be all that there is to say on the subject, for I can think thoughts that involve self-reference but are not “I”-thoughts. Suppose I think that the next person to get a parking ticket in the centre of Toronto deserves everything he gets. Unbeknownst to me, the very next recipient of a parking ticket will be me. This makes my thought self-referring, but it does not make it an “I”-thought. Why not? The answer is simply that I do not know that I will be the next person to get a parking ticket in downtown Toronto. If “A” is that unfortunate person, then there is a true identity statement of the form I = A; but since I do not know that this identity holds, I cannot be ascribed the thought that I deserve everything I get. That is to say, I am not thinking a genuine “I”-thought, because one cannot think a genuine “I”-thought while ignorant that one is thinking about oneself. So it is natural to conclude that “I”-thoughts involve a distinctive type of self-reference. This is the sort of self-reference whose natural linguistic expression is the first-person pronoun “I,” because one cannot use the first-person pronoun without knowing that one is thinking about oneself.

This is still not quite right, however, because thought contents can be specified in two ways: directly or indirectly. The claim under consideration is that all the cognitive states in question presuppose the ability to think about oneself. This is not merely something they all have in common; it is also what underlies them all. We can see in more detail what this suggestion amounts to. The claim is that what makes all those cognitive states modes of self-consciousness is the fact that they all have contents that can be specified directly by means of the first-person pronoun “I” or indirectly by means of the indirect reflexive pronoun “he,” such that they are first-person contents.

The class of first-person contents is not a homogeneous class. There is an important distinction to be drawn between two different types of first-person contents, corresponding to two different modes in which the first person can be employed. The existence of this distinction was first noted by Wittgenstein in an important passage from The Blue Book: there are two different cases in the use of the word “I” (or “my”), which he calls “the use as object” and “the use as subject.” Examples of the first kind of use are these: “My arm is broken,” “I have grown six inches,” “I have a bump on my forehead,” “The wind blows my hair about.” Examples of the second kind are: “I see so-and-so,” “I try to lift my arm,” “I think it will rain,” “I have a toothache.”

The explanation given of the distinction hinges on whether or not the judgements involve identification. One can point to the difference between the two categories by saying that the cases of the first category involve the recognition of a particular person, and there is in these cases the possibility of an error: “The possibility of an error has been provided for . . . It is possible that, say in an accident, I should feel a pain in my arm, see a broken arm at my side, and think it is mine when really it is my neighbour’s. And I could, looking into a mirror, mistake a bump on his forehead for one on mine. On the other hand, there is no question of recognizing a person when I say I have toothache. To ask ‘are you sure that it’s you who have pains?’ would be nonsensical.”

Wittgenstein is drawing a distinction between two types of first-person contents. The first type, which he describes as involving the use of “I” as object, can be analysed in terms of more basic propositions. If the thought “I am B” involves such a use of “I,” then we can understand it as a conjunction of the following two thoughts: “a is B” and “I am a.” We can term the former the predication component and the latter the identification component. The reason for breaking the original thought down into these two components is precisely the possibility of error that Wittgenstein stresses in the second passage quoted: one can be quite correct in predicating that someone is B, even though mistaken in identifying oneself as that person.

To say that a statement “a is B” is subject to error through misidentification relative to the term “a” means that the following is possible: The speaker knows some particular thing to be “B,” but makes the mistake of asserting “a is B” because, and only because, he mistakenly thinks that the thing he knows to be “B” is what “a” refers to.

What this criterion captures, then, is that one cannot be mistaken about who is being thought about. In one sense, Shoemaker’s criterion of immunity to error through misidentification relative to the first-person pronoun (hereafter simply “immunity to error through misidentification”) is too restrictive. Beliefs with first-person contents that are immune to error through misidentification tend to be acquired on grounds that usually result in knowledge, but they do not have to be. The definition of immunity to error through misidentification needs to be adjusted to accommodate them by formulating it in terms of justification rather than knowledge.

The connection to be captured is between the sources and grounds from which a belief is derived and the justification there is for that belief. Beliefs and judgements are immune to error through misidentification in virtue of the grounds on which they are based. The category of first-person contents being picked out is not defined by its subject matter or by any points of grammar. What demarcates the class of judgements and beliefs that are immune to error through misidentification is the evidence base from which they are derived, or the information on which they are based. For example, my thought that I have a toothache is immune to error through misidentification because it is based on my feeling a pain in my teeth. Similarly, the fact that I am consciously perceiving you makes my belief that I am seeing you immune to error through misidentification.

A revised definition, then, is this: to say that a statement “a is B” is subject to error through misidentification relative to the term “a” means that the following is possible: the speaker is warranted in believing that some particular thing is B, because his belief is based on an appropriate evidence base, but he makes the mistake of asserting “a is B” because, and only because, he mistakenly thinks that the thing he justifiably believes to be B is what “a” refers to.

First-person contents that are immune to error through misidentification can be mistaken, but they do have a basic warrant in virtue of the evidence on which they are based, because the fact that they are derived from such an evidence base is closely linked to the fact that they are immune to error through misidentification. Of course, there is room for considerable debate about what types of evidence base are correlated with this class of first-person contents. It seems, then, that the distinction between different types of first-person content can be characterized in two different ways. We can distinguish between those first-person contents that are immune to error through misidentification and those that are subject to such error. Alternatively, we can discriminate between first-person contents with an identification component and those without such a component. For present purposes, these different formulations pick out the same classes of first-person contents, although in interestingly different ways.

Any first-person content subject to error through misidentification contains an identification component of the form “I am a,” which itself employs the first-person pronoun. Of that identification component we can ask: does it or does it not itself have an identification component? On pain of an infinite regress, at some stage we must arrive at an employment of the first-person pronoun that does not presuppose an identification component. The conclusion, then, is that any first-person content subject to error through misidentification will ultimately be anchored in a first-person content that is immune to error through misidentification.

It is also important to stress how any theory of self-consciousness that accords a serious role to mastery of the semantics of the first-person pronoun is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language. The principle has been defended most vigorously by Michael Dummett.

Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. Dummett goes on to draw the clear methodological implications of this view of the nature of thought: we communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.

Many philosophers would want to dissent from the strong claim that the philosophical analysis of thought through the philosophical analysis of language is the fundamental task of philosophy. But there is a weaker principle that is very widely held, which may be called the Thought-Language Principle.

As it stands, there is a distinction between two different roles that the pronoun “he” can play in such opaque clauses. On the one hand, “he” can be employed in a clause that the antecedent of the pronoun (i.e., the person named just before the clause in question) would have expressed using the first-person pronoun. In such a situation “he” is functioning as a quasi-indicator; others have described this use as the indirect reflexive pronoun. When “he” is functioning as an ordinary indicator, it picks out an individual in such a way that the person named just before the opaque clause need not realize the identity of himself with that person. Clearly, then, the class of first-person contents is not a homogeneous class.

A subject has distinguishing self-awareness to the extent that he is able to distinguish himself from the environment and its contents. He has distinguishing psychological self-awareness to the extent that he is able to distinguish himself as a psychological subject within a contrast space of other psychological subjects. What does this require? The notion of a non-conceptual point of view brings together the capacity to register one’s distinctness from the physical environment and various navigational capacities that manifest a degree of understanding of the spatial nature of the physical environment. One very basic reason for thinking that these two elements must be considered together emerges from the point that the richness of the self-awareness that accompanies the capacity to distinguish the self from the environment is directly proportional to the richness of the awareness of the environment from which the self is being distinguished. No creature can understand its own distinctness from the physical environment without having an independent understanding of the nature of the physical environment, and since the physical environment is essentially spatial, this requires an understanding of the spatial nature of the physical environment. But this cannot be the whole story. It leaves unexplained why an understanding should be required of this particular essential feature of the physical environment. After all, it is also an essential feature of the physical environment that it is composed of objects that have both primary and secondary qualities, but there is no reflection of this in the notion of a non-conceptual point of view. More is needed to understand the significance of spatiality.

The very idea of a perceivable, objective, spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere cannot be separated, and where he is is given by what he can perceive.

But the main thrust of his work is very much that the dependence holds equally in the opposite direction.

It seems that this general idea can be extrapolated and brought to bear on the notion of a non-conceptual point of view. What binds together the two apparently discrete components of a non-conceptual point of view is precisely the fact that a creature’s self-awareness must be awareness of itself as a spatial being that acts upon and is acted upon by the spatial world. Evans’s own gloss on how a subject’s self-awareness is awareness of himself as a spatial being involves the subject’s mastery of a simple theory explaining how the world makes his perceptions as they are, with principles like “I perceive such and such; such and such holds at P; so (probably) I am at P” and “I am at P; such and such does not hold at P; so I cannot really be perceiving such and such, even though it appears that I am” (Evans 1982). This is not very satisfactory, though. If the claim is that the subject must explicitly hold these principles, then it is clearly false. If, on the other hand, the claim is that these are the principles of a theory that a self-conscious subject must tacitly know, then the claim seems very uninformative in the absence of any specification of the precise forms of behaviour that can only be explained by the ascription of such a body of tacit knowledge. We need an account of what it is for a subject to be correctly described as possessing such a simple theory of perception. The point, however, is simply that the notion of a non-conceptual point of view, as presented, can be viewed as capturing, at a more primitive level, precisely the same phenomenon that Evans is trying to capture with his notion of a simple theory of perception.

Moreover, stressing the importance of action and movement indicates how the notion of a non-conceptual point of view might be grounded in the self-specifying information for action to be found in visual perception. I am thinking particularly of the concept of an affordance, so central to Gibsonian theories of perception. One important type of self-specifying information in the visual field is information about the possibilities for action and reaction that the environment affords the perceiver; such affordances are non-conceptual first-person contents. The development of a non-conceptual point of view clearly involves certain forms of reasoning, and clearly we will not have a full understanding of the notion of a non-conceptual point of view until we have an explanation of how this reasoning can take place. The spatial reasoning involved in developing a non-conceptual point of view upon the world is largely a matter of calibrating different affordances into an integrated representation of the world.

In short, any learned cognitive ability is constructible out of more primitive abilities already in existence. There are good reasons to think that the perception of affordances is innate. And so, if the perception of affordances is the key to the acquisition of an integrated spatial representation of the environment via the recognition of affordance symmetries, affordance transitivities, and affordance identities, then it is perfectly conceivable that the capacities implicated in an integrated representation of the world could emerge non-mysteriously from innate abilities.

Nonetheless, there are many philosophers who would be prepared to countenance the possibility of non-conceptual content without accepting that the theory of non-conceptual content can be used to solve the paradox of self-consciousness. This is a more substantial task. The methodology adopted rests on the first of the marks of content, namely that content-bearing states serve to explain behaviour in situations where the connections between sensory input and behavioural output cannot be plotted in a law-like manner (the functionalist theory of self-reference). This is not to say that every instance of intentional behaviour where there are no such law-like connections between sensory input and behavioural output needs to be explained by attributing to the creature in question representational states with first-person contents. Even so, many such instances of intentional behaviour do need to be explained in this way, and this offers a way of establishing the legitimacy of non-conceptual first-person contents. What would satisfactorily demonstrate their legitimacy would be the existence of forms of behaviour in prelinguistic or nonlinguistic creatures for which inference to the best explanation (which in this context includes inference to the most parsimonious explanation) demands the ascription of states with non-conceptual first-person contents.

Non-conceptual first-person contents and the pick-up of self-specifying information in the structure of exteroceptive perception provide very primitive forms of non-conceptual self-consciousness, even if forms that can plausibly be viewed as in place at birth or shortly afterward. The dimension along which forms of self-consciousness must be compared is the richness of the conception of the self that they provide. A crucial element in any form of self-consciousness is how it enables the self-conscious subject to distinguish between self and environment - what many developmental psychologists term self-world dualism. In this sense, self-consciousness is essentially a contrastive notion. One implication of this is that a proper understanding of the richness of a given form of self-consciousness requires that we take into account the richness of the conception of the environment with which it is associated. In the case of both somatic proprioception and the pick-up of self-specifying information in exteroceptive perception, there is a relatively impoverished conception of the environment. One prominent limitation is that both are synchronic rather than diachronic. The distinction between self and environment that they offer is a distinction that is effective at a time but not over time. The contrast between propriospecific and exterospecific invariants in visual perception, for example, provides a way for a creature to distinguish between itself and the world at any given moment, but this is not the same as a conception of oneself as an enduring thing distinguishable over time from an environment that also endures over time.

The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is himself a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world.

One possible reaction to the paradox of self-consciousness is that it arises only because unrealistic and ultimately unwarranted requirements are being placed on what is to count as genuinely self-referring first-person thought. Support for such an objection will be found in those theories that attempt to explain first-person thoughts in a way that does not presuppose any form of internal representation of the self or any form of self-knowledge. The paradox arises because mastery of the semantics of the first-person pronoun is available only to creatures capable of thinking first-person thoughts whose contents involve reflexive self-reference and thus seem to presuppose mastery of the first-person pronoun. If, though, it can be established that the capacity to think genuinely first-person thoughts does not depend on any linguistic or conceptual abilities, then arguably the problem of circularity will no longer have purchase.

There is an account of self-reference and genuinely first-person thought that can be read in a way that poses just such a direct challenge to the account of self-reference underpinning the paradox. This is the functionalist account. As noted before, on the functionalist view reflexive self-reference is a completely unmysterious phenomenon susceptible of functional analysis. Reflexive self-reference is not dependent upon any antecedent conceptual or linguistic skills. Nonetheless, the functionalist account of reflexive self-reference is deemed to be sufficiently rich to provide the foundation for an account of the semantics of the first-person pronoun. If this is right, then the circularity at the heart of the paradox can be avoided.

The circularity problem at the root of the paradox arises because mastery of the semantics of the first-person pronoun requires the capacity to think first-person thoughts whose natural expression is by means of the first-person pronoun. It seems clear that the circle will be broken if there are forms of first-person thought more primitive than these - forms that do not require linguistic mastery of the first-person pronoun. What creates the problem of capacity circularity is the thought that we need to appeal to first-person contents in explaining mastery of the first-person pronoun, in combination with the thought that any creature capable of entertaining first-person contents will have mastered the first-person pronoun. So if we want to retain the thought that mastery of the first-person pronoun can only be explained in terms of first-person contents, capacity circularity can only be avoided if there are first-person contents that do not presuppose mastery of the first-person pronoun.

On the other hand, however, it seems to follow from everything said earlier about “I”-thoughts that first-person thought in the absence of linguistic mastery of the first-person pronoun is a contradiction in terms. First-person thoughts have first-person contents, where first-person contents can only be specified in terms of either the first-person pronoun or the indirect reflexive pronoun. So how could such thoughts be entertained by a thinker incapable of reflexive self-reference? How can a thinker who has not mastered the first-person pronoun plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are genuine first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.

The best-developed functionalist theory of self-reference has been provided by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a “subjective belief,” that is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. The explanation of subjective belief that he offers makes such beliefs independent of both linguistic abilities and conscious beliefs. From this basic account he constructs an account of conscious subjective beliefs and of the reference of the first-person pronoun “I.” These putatively more sophisticated cognitive states are causally derivable from basic subjective beliefs.

Historically, Heidegger's theory of spatiality distinguishes three different types of space: (1) world-space, (2) regions (Gegend), and (3) Dasein's spatiality. What Heidegger calls "world-space" is space conceived as an “arena” or “container” for objects. It captures both our ordinary conception of space and theoretical space - in particular, absolute space. Chairs, desks, and buildings exist “in” space, but world-space is independent of such objects, much like the absolute space “in which” things exist. However, Heidegger thinks that such a conception of space is an abstraction from the spatial conduct of our everyday activities. The things that we deal with are near or far relative to us; according to Heidegger, this nearness or farness of things is how we first become familiar with that which we (later) represent to ourselves as "space." It is this familiarity that renders the understanding of space (on the "container" model or in any other way) possible. It is because we act spatially, going to places and reaching for things to use, that we can develop a conception of abstract space at all. What we normally think of as space - world-space - turns out not to be what space fundamentally is; world-space is, in Heidegger's terminology, space conceived as vorhanden. It is an objectified space founded on a more basic space-of-action.

Since Heidegger thinks that space-of-action is the condition for world-space, he must explain the former without appealing to the latter. Heidegger's task then is to describe the space-of-action without presupposing such world-space and the derived concept of a system of spatial coordinates. However, this is difficult because all our usual linguistic expressions for describing spatial relations presuppose world-space. For example, how can one talk about the "distance between you and me" without presupposing some sort of metric, i.e., without presupposing an objective access to the relation? Our spatial notions such as "distance," "location," etc. must now be re-described from a standpoint within the spatial relation of self (Dasein) to the things dealt with. This problem is what motivates Heidegger to invent his own terminology and makes his discussion of space awkward. In what follows I will try to use ordinary language whenever possible to explain his principal ideas.

The space-of-action has two aspects: regions (space as Zuhandenheit) and Dasein's spatiality (space as Existentiale). The sort of space we deal with in our daily activity is "functional" or zuhanden, and Heidegger's term for it is "region." The places where we work and live - the office, the park, the kitchen, etc. - all have different regions that organize our activities and our “equipment.” My desk area, as my work region, has a computer, printer, telephone, books, etc., in their appropriate “places,” according to the spatiality of the way in which I work. Regions differ from space viewed as a "container"; the latter notion lacks a "referential" organization with respect to our context of activities. Heidegger wants to claim that referential functionality is an inherent feature of space itself, and not just a "human" characteristic added to a container-like space.

In our activity, how do we specifically stand with respect to functional space? We are not "in" space as things are, but we do exist in some spatially salient manner. What Heidegger is trying to capture is the difference between the nominal expression "we exist in space" and the adverbial expression "we exist spatially." He wants to describe spatiality as a mode of our existence rather than conceiving space as an independent entity. Heidegger identifies two features of Dasein's spatiality - "de-severance" (Ent-fernung) and "directionality" (Ausrichtung).

De-severance describes the way we exist as a process of spatial self-determination by “making things available” to ourselves. In Heidegger's language, in making things available we "take in space" by "making the farness vanish" and by "bringing things close."

We are not simply contemplative beings, but we exist through concretely acting in the world - by reaching for things and going to places. When I walk from my desk area into the kitchen, I am not simply changing locations from point A to B in an arena-like space, but I am “taking in space” as I move, continuously making the “farness” of the kitchen “vanish,” as shifting spatial perspectives open up as I go along.

This process is also inherently "directional." Every de-severing is aimed toward something or in a certain direction that is determined by our concern and by specific regions. I must always face and move in a certain direction that is dictated by a specific region. If I want to get a glass of ice tea, instead of going out into the yard, I face toward the kitchen and move in that direction, following the region of the hallway and the kitchen. Regions determine where things belong, and our actions are coordinated in directional ways accordingly.

De-severance, directionality, and regionality are three ways of describing the spatiality of a unified Being-in-the-world. As aspects of Being-in-the-world, these spatial modes of being are equiprimordial.9, 10 Regions "refer" to our activities, since they are established by our ways of being and our activities. Our activities, in turn, are defined in terms of regions. Only through the region can our de-severance and directionality be established. Our object of concern always appears in a certain context and place, in a certain direction. It is because things appear in a certain direction and in their places “there” that we have our “here.” We orient ourselves and organize our activities, always within regions that must already be given to us.

Heidegger's analysis of space does not refer to temporal aspects of Being-in-the-world, even though they are presupposed. In the second half of Being and Time he explicitly turns to the analysis of time and temporality in a discussion that is significantly more complex than the earlier account of spatiality. Heidegger makes the following five distinctions between types of time and temporality: (1) The ordinary or "vulgar" conception of time; this is time conceived as Vorhandenheit. (2) World-time; this is time as Zuhandenheit. Dasein's temporality is divided into three types: (3) Dasein's inauthentic (uneigentlich) temporality, (4) Dasein's authentic (eigentlich) temporality, and (5) originary temporality or “temporality as such.” The analyses of the vorhanden and zuhanden modes of time are interesting, but it is Dasein's temporality that is relevant to our discussion, since it is this form of time that is said to be founding for space. Unfortunately, Heidegger is not clear about which temporality plays this founding role.

We can begin by excluding Dasein's inauthentic temporality. This mode of time refers to our unengaged, "average" way of regarding time. It is the “past we forget” and the “future we expect,” all without decisiveness and resolute understanding. Heidegger seems to consider this mode of temporality to be the temporal dimension of de-severance and directionality, since de-severance and directionality deal only with everyday actions. As such, inauthentic temporality is itself founded and must be grounded in a more basic temporality of some sort. The two remaining candidates for the foundation are Dasein's authentic temporality and originary temporality.

Dasein's authentic temporality is the "resolute" mode of temporal existence. Authentic temporality is realized when Dasein becomes aware of its own finite existence. This temporality has to do with one's grasp of one's own life as a whole from one's own unique perspective. Life gains meaning as one's own life-project, bounded by the realization that one is not immortal. This mode of time appears to have a normative function within Heidegger's theory. In the second half of BT he often refers to the inauthentic or "everyday" mode of time as lacking some primordial quality which authentic temporality possesses.

In contrast, originary temporality is the formal structure of Dasein's temporality itself. In addition to its spatial Being-in-the-world, Dasein also exists essentially as "projection." Projection is oriented toward the future, and this futural orientation regulates our concern by constantly realizing various possibilities. Temporality is characterized formally as this dynamic structure of "a future that makes present in the process of having been." Heidegger calls the three moments of temporality - the future, the present, and the past - the three ecstasies of temporality. This mode of time is not normative but rather formal or neutral; as Blattner argues, the temporal features that constitute Dasein's temporality describe both inauthentic and authentic temporalities.

There are some passages indicating that authentic temporality is the primary manifestation of temporality, because of its essential orientation toward the future. For instance, Heidegger states that "temporality first showed itself in anticipatory resoluteness." Elsewhere, he argues that "the ‘time’ which is accessible to Dasein's common sense is not primordial, but arises rather from authentic temporality." In these formulations, authentic temporality is said to found the other, inauthentic modes. According to Blattner, this is "by far the most common" interpretation of the status of authentic time.

However, as Blattner and Haar argue, there are far more passages where Heidegger considers originary temporality as distinct from authentic temporality, and as founding both it and Being-in-the-world as well. Here are some examples: "Temporality has different possibilities and different ways of temporalizing itself. The basic possibilities of existence, the authenticity and inauthenticity of Dasein, are grounded ontologically on possible temporalizations of temporality." "Time is primordial as the temporalizing of temporality, and as such it makes possible the Constitution of the structure of care."

Heidegger's conception seems to be that it is because we are fundamentally temporal - having the formal structure of ecstatic-horizonal unity - that we can project, authentically or inauthentically, our concernful dealings in the world and exist as Being-in-the-world. It is on this account that temporality is said to found spatiality.

Since Heidegger uses the term "temporality" rather than "authentic temporality" whenever the founding relation between space and time is discussed, I will begin the following analysis by assuming that it is originary temporality that founds Dasein's spatiality. On this assumption two interpretations of the argument are possible, but both are unsuccessful given his phenomenological framework.

The principal argument appears in the section entitled "The Temporality of the Spatiality that is Characteristic of Dasein." Heidegger begins the section with the following remark: Though the expression ‘temporality’ does not signify what one understands by "time" when one talks about ‘space and time’, nevertheless spatiality seems to make up another basic attribute of Dasein corresponding to temporality. Thus with Dasein's spatiality, existential-temporal analysis seems to come to a limit, so that this entity that we call "Dasein," must be considered as ‘temporal’ ‘and’ as spatial coordinately.

Accordingly, Heidegger asks, "Has our existential-temporal analysis of Dasein thus been brought to a halt . . . by the spatiality that is characteristic of Dasein . . . and Being-in-the-world?" His answer is no. He argues that since "Dasein's constitution and its ways to be are possible ontologically only on the basis of temporality," and since the "spatiality that is characteristic of Dasein . . . belongs to Being-in-the-world," it follows that "Dasein's specific spatiality must be grounded in temporality."

Heidegger's claim is that the totality of regions-de-severance-directionality can be organized and re-organized "because Dasein as temporality is ecstatic-horizonal in its Being." Because Dasein exists futurally as "for-the-sake-of-which," it can discover regions. Thus, Heidegger remarks: "Only on the basis of its ecstatic-horizonal temporality is it possible for Dasein to break into space."

However, in order to establish that temporality founds spatiality, Heidegger would have to show that spatiality and temporality are distinguished in such a way that temporality not only shares a content with spatiality but has additional content as well. In other words, they must be truly distinct and not just analytically distinguishable. But what is the content of "the ecstatic-horizonal constitution of temporality"? Does it have a content above and beyond Being-in-the-world? Nicholson poses the same question as follows: Is it human care that accounts for the characteristic features of human temporality? Or is it, as Heidegger says, human temporality that accounts for the characteristic features of human care and serves as their foundation? The first alternative, according to Nicholson, is to reduce temporality to care: "the specific attributes of the temporality of Dasein . . . would be in their roots not aspects of temporality but reflections of Dasein's care." The second alternative is to treat temporality as having some content above and beyond care: "the three-fold constitution of care stems from the three-fold constitution of temporality."

Nicholson argues that the second alternative is the correct reading.18 Dasein lives in the world by making choices, but "the ecstasies of temporality lie well prior to any choice . . . so our study of care introduces us to a matter whose scope outreaches care: the ecstasies of temporality itself." Accordingly, what Heidegger "was able to make clear is that the reign of temporal ecstasies over the choices we make accords with the place we occupy as finite beings in the world."

But if Nicholson's interpretation is right, what would be the content of "the ecstasies of temporality itself," if not some sort of purely formal entity or condition such as Kant's "pure intuition"? This would imply that Heidegger has left phenomenology behind and is now engaged in establishing a transcendental framework outside the analysis of Being-in-the-world, such that this formal structure founds Being-in-the-world. This is inconsistent with his initial claim that Being-in-the-world is itself foundational.

Nicholson's first alternative offers a more consistent reading. The structure of temporality should be treated as an abstraction from Dasein's Being-in-the-world, specifically from care. In this case, the content of temporality is just the past, present, and future ways of Being-in-the-world. Heidegger's own words support this reading: "as Dasein temporalizes itself, a world is too," and "the world is neither present-at-hand nor ready-to-hand, but temporalizes itself in temporality." He also states that zuhanden "world-time, in the rigorous sense of the existential-temporal conception of the world," belongs to temporality itself. In this reading, "temporality temporalizing itself," "Dasein's projection," and "the temporal projections of the world" are three different ways of describing the same "happening" of Being-in-the-world, which Heidegger calls "self-directive."

However, if this is the case, then temporality does not found spatiality, except perhaps in the trivial sense that spatiality is built into the notion of care that is identified with temporality. The fulfilling content of “temporality temporalizing itself” simply is the various openings of regions, i.e., Dasein's "breaking into space." Certainly, as Stroeker points out, it is true that "nearness and remoteness are spatially-transient phenomena and cannot be conceived without a temporal moment." But this necessity does not constitute a foundation. Rather, they are equiprimordial. The addition of temporal dimensions does indeed complete the discussion of spatiality, which had abstracted from time. But this completion, while it better articulates the whole of Being-in-the-world, does not show that temporality is more fundamental.

If temporality and spatiality are equiprimordial, then all of the supposedly founding relations between temporality and spatiality could just as well be reversed and still hold true. Heidegger's view is that "because Dasein as temporality is ecstatic-horizonal in its Being, it can take along with it a space for which it has made room, and it can do so factically and constantly." But if Dasein is essentially a factical projection, then the reverse should also be true. Heidegger appears to have assumed the priority of temporality over spatiality, perhaps under the influence of Kant, Husserl, or Dilthey, and then based his analyses on that assumption.

However, there may still be a way to save Heidegger's foundational project in terms of authentic temporality. Although Heidegger never specifically assigns this founding role to authentic temporality, since he suggests earlier that the primary manifestation of temporality is authentic temporality, such a reading may perhaps be justified. This reading would treat authentic temporality as founding the whole spatio-temporal structure of Being-in-the-world. The resoluteness of authentic temporality, arising out of Dasein's own "Being-towards-death," would supply a content to temporality above and beyond everyday involvements.

On this reading, spatiality has its foundation in resoluteness: Dasein determines its own Situation through anticipatory resoluteness, which includes particular locations and involvements, i.e., the spatiality of Being-in-the-world. The same set of circumstances could be transformed into a new Situation with different significance, if Dasein resolutely chooses to bring that about. Authentic temporality in this case can be said to found spatiality, since Dasein's spatiality is determined by resoluteness. This reading moreover enables Heidegger to construct a hierarchical relation between temporality and spatiality within Being-in-the-world rather than going outside of it to a formal transcendental principle, since the choice of spatiality is grasped phenomenologically in terms of the concrete experience of decision.

Moreover, one might argue that according to Heidegger one's own grasp of "death" is a uniquely temporal mode of existence, whereas there is no such weighty conception involving spatiality. Death is what makes Dasein "stand before itself in its ownmost potentiality-for-Being." Authentic Being-towards-death is a "Being towards a possibility - indeed, towards a distinctive possibility of Dasein itself." One could argue that notions such as "potentiality" and "possibility" are distinctively temporal, nonspatial notions. So "Being-towards-death," as temporal, appears to be much more ontologically "fundamental" than spatiality.

However, Heidegger is not yet out of the woods. I believe that labelling the notions of anticipatory resoluteness, Being-towards-death, potentiality, and possibility specifically as temporal modes of being (to the exclusion of spatiality) begs the question. Given Heidegger's phenomenological framework, why assume that these notions are only temporal (without spatial dimensions)? If Being-towards-death, potentiality-for-Being, and possibilities were "purely" temporal notions, what phenomenological sense could we make of such abstract conceptions, given that these are manifestly our modes of existence as bodily beings? Heidegger cannot have in mind such an abstract notion of time if he wants to maintain that temporality is the meaning of care. It would seem more consistent with his theoretical framework to say that Being-towards-death is a rich spatio-temporal mode of being, given that Dasein is Being-in-the-world.

Furthermore, the interpretation that defines resoluteness as uniquely temporal suggests too much of a voluntaristic or subjectivistic notion of the self that controls its own Being-in-the-world from the standpoint of its future. This would drive a wedge between the self and its Being-in-the-world, thereby creating a temporal "inner self" which can decide its own spatiality. However, if Dasein is Being-in-the-world as Heidegger claims, then all of Dasein's decisions should be viewed as concretely grounded in Being-in-the-world. If so, spatiality must be an essential constitutive element.

Hence, authentic temporality, if construed narrowly as a purely temporal mode, at first appears able to found spatiality, but it commits Heidegger either to an account of time that is too abstract, or to a notion of the self far more like Sartre's than his own. What is lacking in Heidegger's theory, and what generates this sort of difficulty, is a developed conception of Dasein as a lived body - a notion more fully developed by Merleau-Ponty.

The elements of a more consistent interpretation of authentic temporality are present in Being and Time. This interpretation incorporates a view of "authentic spatiality" into the notion of authentic temporality. This would be Dasein's resolutely grasping its own spatio-temporal finitude with respect to its place and its world. Dasein is born at a particular place, lives in a particular place, and dies in a particular place, all of which it can relate to in an authentic way. The place Dasein lives in is not a place of anonymous involvements. The place of Dasein must be the "there" where its own potentiality-for-Being is realized. Dasein's place is thus a determination of its existence. Had Heidegger developed such a conception more fully, he would have seen that temporality is equiprimordial with thoroughly spatial and contextual Being-in-the-world. They are distinguishable but equally fundamental ways of emphasizing our finitude.

The internal tensions within his theory eventually led Heidegger to reconsider his own positions. In his later period, he explicitly develops what may be viewed as a conception of authentic spatiality. For instance, in "Building Dwelling Thinking," Heidegger states that Dasein's relations to locations and to spaces inhere in dwelling, and dwelling is the basic character of our Being. The notion of dwelling expresses an affirmation of spatial finitude. Through this affirmation one acquires a proper relation to one's environment.

But the idea of dwelling is already present in Being and Time. Discussing the term "Being-in-the-world," Heidegger explains that the word "in" is derived from "innan" - "to reside," "habitare," "to dwell." The emphasis on "dwelling" highlights the essentially "worldly" character of the self.

Thus from the beginning Heidegger had a conception of spatial finitude, but this fundamental insight was undeveloped because of his ambition to carry out the foundational project that favoured time. From the 1930s on, as Heidegger abandons the foundational project focussing on temporality, the conception of authentic spatiality comes to the fore. For example, in Discourse on Thinking Heidegger considers the spatial character of Being as "that-which-regions (die Gegnet)." This peculiar expression is a re-conceptualization of the notion of "region" as it appeared in Being and Time. Region is given an active character and defined as the "openness that surrounds us" which "comes to meet us." By giving it an active character, Heidegger wants to emphasize that region is not brought into being by us, but rather exists in its own right, as that which expresses our spatial existence. Heidegger states that "one needs to understand ‘resolve’ (Entschlossenheit) as it is understood in Being and Time: as the opening of man [Dasein] particularly undertaken by him for openness, . . . which we think of as that-which-regions." Here Heidegger is asserting an authentic conception of spatiality. The finitude expressed in the notion of Being-in-the-world is thus transformed into an authentic recognition of our finite worldly existence in the later writings.

Meanwhile, it seems natural to combine these observations into an account of self-consciousness as the capacity to think “I”-thoughts that are immune to error through misidentification. This would be a redundancy account of self-consciousness: once we have an account of what it is to be capable of thinking “I”-thoughts, we will have explained everything distinctive about self-consciousness. It stems from the thought that what is distinctive about “I”-thoughts is that they are either themselves immune to error through misidentification or rest on further “I”-thoughts that are immune in that way.

Once we have an account of what it is to be capable of thinking thoughts that are immune to error through misidentification, we will have explained everything about the capacity to think “I”-thoughts. This claim derives from the thought that immunity to error through misidentification depends on the semantics of the “self.”

Once again, when we have an account of that semantics, we will have explained everything distinctive about the capacity to think thoughts that are immune to error through misidentification.

The suggestion is that the semantics of the “self” will explain what is distinctive about the capacity to think thoughts immune to error through misidentification. Of course, semantics alone cannot be expected to explain the capacity for thinking thoughts. The point, rather, is that all there is to the capacity to think thoughts that are immune to error is the capacity to think the sort of thought whose natural linguistic expression involves the “self,” where this capacity is given by mastery of the semantics of the “self.” What remains, then, is to explain what it is to master that semantics, and especially what it is to think thoughts immune to error through misidentification.

On this view, mastery of the semantics of the “self” may be construed as the single most important element in a theory of self-consciousness.

A quick objection that might be put to a defender of the redundancy or deflationary theory is how mastery of the semantics of the “self” can make sense of the distinction between “self” contents that are immune to error through misidentification and “self” contents that lack such immunity. This is only an apparent difficulty, however, when one remembers why some “self” contents lack immunity: those employing “I” as object can be broken down into their component elements - an identification component and a predication component. It is the identification components of such composite judgements that mastery of the semantics of the “self” must be called upon to explain, and identification components are, of course, immune to error through misidentification.

It is also important to stress how the redundancy and deflationary theories of self-consciousness - and any theory of self-consciousness that accords a serious role to mastery of the semantics of the “self” - are motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the analysis of thought can only proceed through the philosophical analysis of language. We communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language. It is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, which endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.

Still, at the core of the notion of broad self-consciousness is the recognition of what developmental psychologists call “self-world dualism.” Any subject properly described as self-conscious must be able to register the distinction between himself and the world; of course, this is a distinction that can be registered in a variety of ways. The capacity for self-ascription of thoughts and experiences, in combination with the capacity to understand the world as a spatially and causally structured system of mind-independent objects, is a high-level way of registering this distinction.

Consciousness of objects is closely related to sentience and to being awake. It is (at least) a matter of being in a distinctive informational and behavioural state, responsive to one's immediate environmental surroundings. It is the ability, for example, to process and act responsively to information about food, friends, foes, and other items of relevance. One finds consciousness of objects in creatures much less complex than human beings. It is what we (at any rate first and primarily) have in mind when we say of some person or animal coming out of general anaesthesia, ‘It is regaining consciousness.’ Consciousness of objects is not just any form of informational access to the world; it is knowing about, being conscious of, things in the world.

We are conscious of our representations when we are conscious, not (just) of some object, but of our representations: ‘I am seeing [as opposed to touching, smelling, tasting] and seeing clearly [as opposed to dimly].’ Consciousness of our own representations is the ability to process and act responsively to information about oneself, but it is not just any form of such informational access. It is knowing about, being conscious of, one's own psychological states. In Nagel's famous phrase (1974), when we are conscious of our representations, it is ‘like something’ to have them. If, as seems likely, there are forms of consciousness that do not involve consciousness of objects, they might consist in consciousness of representations, though some theorists would insist that this kind of consciousness is not of representations either (via representations, perhaps, but not of them).

The distinction just drawn between consciousness of objects and consciousness of our representations of objects may seem similar to Block's (1995) well-known distinction between P- [phenomenal] and A- [access] consciousness. Here is his definition of ‘A-consciousness’: "A state is A-conscious if it is poised for direct control of thought and action." He tells us that he cannot define ‘P-consciousness’ in any "remotely non-circular way" but will use it to refer to what he calls "experiential properties," what it is like to have certain states. Our consciousness of objects may appear to be like A-consciousness. It is not, however; it is a form of P-consciousness. Consciousness of an object is - how else can we put it? - consciousness of the object. Even if consciousness is just informational access of a certain kind (something that Block would deny), it is not just any form of informational access; we are talking about conscious access here. Recall the idea that it is like something to have a conscious state. Other closely related ideas are that in a conscious state something appears to one, and that conscious states have a ‘felt quality.’ A term for all this is phenomenology: conscious states have a phenomenology. (Thus some philosophers speak of phenomenal consciousness here.) We could now state the point we are trying to make this way: if I am conscious of an object, then it is like something to have that object as the content of a representation.

Historically, Heidegger' theory of spatiality distinguishes three different types of space: (1) world-space, (2) regions (Gegend), and (3) Dasein's spatiality. What Heidegger calls "world-space" is space conceived as an “arena” or “container” for objects. It captures both our ordinary conception of space and theoretical space - in particular absolute space. Chairs, desks, and buildings exist “in” space, but world-space is independent of such objects, much like absolute space “in which” things exist. However, Heidegger thinks that such a conception of space is an abstraction from the spatializing conduct of our everyday activities. The things that we deal with are near or far relative to us; according to Heidegger, this nearness or farness of things is how we first become familiar with that which we (later) represent to ourselves as "space." This familiarity is what renders the understanding of space (in a "container" metaphor or in any other way) possible. It is because we act spatially, going to places and reaching for things to use, that we can even develop a conception of abstract space at all. What we normally think of as space - world-space - turns out not to be what space fundamentally is; world-space is, in Heidegger's terminology, space conceived as vorhanden. It is an objectified space founded on a more basic space-of-action.

Since Heidegger thinks that space-of-action is the condition for world-space, he must explain the former without appealing to the latter. Heidegger's task then is to describe the space-of-action without presupposing such world-space and the derived concept of a system of spatial coordinates. However, this is difficult because all our usual linguistic expressions for describing spatial relations presuppose world-space. For example, how can one talk about the "distance between you and me" without presupposing some sort of metric, i.e., without presupposing an objective access to the relation? Our spatial notions such as "distance," "location," etc. must now be re-described from a standpoint within the spatial relation of self (Dasein) to the things dealt with. This problem is what motivates Heidegger to invent his own terminology and makes his discussion of space awkward. In what follows I will try to use ordinary language whenever possible to explain his principal ideas.

The space-of-action has two aspects: regions (space as Zuhandenheit) and Dasein's spatiality (space as Existentiale). The sort of space we deal with in our daily activity is "functional" or zuhanden, and Heidegger's term for it is "region." The places we work and live-the office, the park, the kitchen, etc.-all have different regions that organize our activities and contexualize “equipment.” My desk area as my work region has a computer, printer, telephone, books, etc., in their appropriate “places,” according to the spatiality of the way in which I work. Regions differ from space viewed as a "container"; the latter notion lacks a "referential" organization with respect to our context of activities. Heidegger wants to claim that referential functionality is an inherent feature of space itself, and not just a "human" characteristic added to a container-like space.

In our activity, how do we specifically stand with respect to functional space? We are not "in" space as things are, but we do exist in some spatially salient manner. What Heidegger is trying to capture is the difference between the nominal expression "we exist in space" and the adverbial expression "we exist spatially." He wants to describe spatiality as a mode of our existence rather than conceiving space as an independent entity. Heidegger identifies two features of Dasein's spatiality - "de-severance" (Ent-fernung) and "directionality" (Ausrichtung).

De-severance describes the way we exist as a process of spatial self-determination by “making things available” to ourselves. In Heidegger's language, in making things available we "take in space" by "making the farness vanish" and by "bringing things close."

We are not simply contemplative beings, but we exist through concretely acting in the world - by reaching for things and going to places. When I walk from my desk area into the kitchen, I am not simply changing locations from point A to B in an arena-like space, but I am “taking in space” as I move, continuously making the “farness” of the kitchen “vanish,” as the shifting spatial perspectives are opened as I go along.

This process is also inherently "directional." Every de-severing is aimed toward something or in a certain direction that is determined by our concern and by specific regions. I must always face and move in a certain direction that is dictated by a specific region. If I want to get a glass of iced tea, instead of going out into the yard, I face toward the kitchen and move in that direction, following the region of the hallway and the kitchen. Regions determine where things belong, and our actions are coordinated in directional ways accordingly.

De-severance, directionality, and regionality are three ways of describing the spatiality of a unified Being-in-the-world. As aspects of Being-in-the-world, these spatial modes of being are equiprimordial. Regions "refer" to our activities, since they are established by our ways of being and our activities. Our activities, in turn, are defined in terms of regions. Only through the region can our de-severance and directionality be established. Our object of concern always appears in a certain context and place, in a certain direction. It is because things appear in a certain direction and in their places “there” that we have our “here.” We orient ourselves and organize our activities, always within regions that must already be given to us.

Heidegger's analysis of space does not refer to temporal aspects of Being-in-the-world, even though they are presupposed. In the second half of Being and Time he explicitly turns to the analysis of time and temporality in a discussion that is significantly more complex than the earlier account of spatiality. Heidegger makes the following five distinctions between types of time and temporality: (1) the ordinary or "vulgar" conception of time; this is time conceived as Vorhandenheit. (2) world-time; this is time as Zuhandenheit. Dasein's temporality is divided into three types: (3) Dasein's inauthentic (uneigentlich) temporality, (4) Dasein's authentic (eigentlich) temporality, and (5) originary temporality or “temporality as such.” The analyses of the vorhanden and zuhanden modes of time are interesting, but it is Dasein's temporality that is relevant to our discussion, since it is this form of time that is said to be founding for space. Unfortunately, Heidegger is not clear about which temporality plays this founding role.

We can begin by excluding Dasein's inauthentic temporality. This mode of time refers to the unengaged, "average" way in which we regard time. It is the “past we forget” and the “future we expect,” all without decisiveness and resolute understanding. Heidegger seems to consider this mode of temporality the temporal dimension of de-severance and directionality, since de-severance and directionality deal only with everyday actions. As such, inauthentic temporality must itself be founded in an authentic basis of some sort. The two remaining candidates for the foundation are Dasein's authentic temporality and originary temporality.

Dasein's authentic temporality is the "resolute" mode of temporal existence. Authentic temporality is realized when Dasein becomes aware of its own finite existence. It has to do with one's grasp of one's own life as a whole from one's own unique perspective. Life gains meaning as one's own life-project, bounded by one's realization that one is not immortal. This mode of time appears to have a normative function within Heidegger's theory. In the second half of BT he often refers to the inauthentic or "everyday" mode of time as lacking some primordial quality which authentic temporality possesses.

In contrast, originary temporality is the formal structure of Dasein's temporality itself. Dasein exists essentially as "projection." Projection is oriented toward the future, and this future orientation regulates our concern by constantly realizing various possibilities. Temporality is characterized formally as this dynamic structure of "a future that makes present in the process of having been." Heidegger calls the three moments of temporality - the future, the present, and the past - the three ecstasies of temporality. This mode of time is not normative but rather formal or neutral; as Blattner argues, the temporal features that constitute Dasein's temporality describe both inauthentic and authentic temporality.

There are some passages that indicate that authentic temporality is the primary manifestation of temporality, because of its essential orientation toward the future. For instance, Heidegger states that "temporality first showed itself in anticipatory resoluteness." Elsewhere, he argues that "the ‘time’ which is accessible to Dasein's common sense is not primordial, but arises rather from authentic temporality." In these formulations, authentic temporality is said to found the other, inauthentic modes. According to Blattner, this is "by far the most common" interpretation of the status of authentic time.

However, as Blattner and Haar argue, there are far more passages where Heidegger treats originary temporality as distinct from authentic temporality, and as founding both it and Being-in-the-world as well. Here are some examples: "Temporality has different possibilities and different ways of temporalizing itself. The basic possibilities of existence, the authenticity and inauthenticity of Dasein, are grounded ontologically on possible temporalizations of temporality." "Time is primordial as the temporalizing of temporality, and as such it makes possible the Constitution of the structure of care."

Heidegger's conception seems to be that it is because we are fundamentally temporal - having the formal structure of ecstatic-horizontal unity - that we can project, authentically or inauthentically, our concernful dealings in the world and exist as Being-in-the-world. It is on this account that temporality is said to found spatiality. Nicholson's first alternative offers a more consistent reading. The structure of temporality should be treated as an abstraction from Dasein's Being-in-the-world, specifically from care. In this case, the content of temporality is just the past and the present and the future ways of Being-in-the-world. Heidegger's own words support this reading: "as Dasein temporalizes itself, a world is too," and "the world is neither present-at-hand nor ready-to-hand, but temporalizes itself in temporality." He also states that the zuhanden "world-time, in the rigorous sense of the existential-temporal conception of the world, belongs to temporality itself." In this reading, "temporality temporalizing itself," "Dasein's projection," and "the temporal projection of the world" are three different ways of describing the same "happening" of Being-in-the-world, which Heidegger calls "self-directive."

However, if this is the case, then temporality does not found spatiality, except perhaps in the trivial sense that spatiality is built into the notion of care that is identified with temporality. The sustaining of "temporality temporalizing itself" simply is the various openings of regions, i.e., Dasein's "breaking into space." Certainly, as Stroeker points out, it is true that "nearness and remoteness are spatio-temporal phenomena and cannot be conceived without a temporal moment." But this necessity does not constitute a foundation. Rather, they are equiprimordial. The addition of temporal dimensions does indeed complete the discussion of spatiality, which abstracted from time. But this completion, while it better articulates the whole of Being-in-the-world, does not show that temporality is more fundamental.

If temporality and spatiality are equiprimordial, then all of the supposedly founding relations between temporality and spatiality could just as well be reversed and still hold true. Heidegger's view is that "because Dasein as temporality is ecstatic-horizontal in its Being, it can take along with it a space for which it has made room, and it can do so factically and constantly." But if Dasein is essentially a factical projection, then the reverse should also be true. Heidegger appears to have assumed the priority of temporality over spatiality, perhaps under the influence of Kant, Husserl, or Dilthey, and then based his analyses on that assumption.

However, there may still be a way to save Heidegger's foundational project in terms of authentic temporality. Heidegger never specifically says so, but since he suggests earlier that the primary manifestation of temporality is authentic temporality, such a reading may perhaps be justified. This reading would treat the whole spatio-temporal structure of Being-in-the-world as founded in authentic temporality. The resoluteness of authentic temporality, arising out of Dasein's own "Being-towards-death," would supply a content to temporality above and beyond everyday involvements.

If temporality has its foundation in resoluteness, then Dasein determines its own Situation through anticipatory resoluteness, which includes particular locations and involvements, i.e., the spatiality of Being-in-the-world. The same set of circumstances could be transformed into a new Situation with different significance, if Dasein chooses resolutely to bring that about. Authentic temporality in this case can be said to found spatiality, since Dasein's spatiality is determined by resoluteness. This reading moreover enables Heidegger to construct a hierarchical relation between temporality and spatiality within Being-in-the-world rather than going outside of it to a formal transcendental principle, since the choice of spatiality is grasped phenomenologically in terms of the concrete experience of decision.

Moreover, one might argue that according to Heidegger one's own grasp of "death" is a uniquely temporal mode of existence, whereas there is no comparably weighty conception involving spatiality. Death is what compels Dasein to "stand before itself in its ownmost potentiality-for-Being." Authentic Being-towards-death is a "Being towards a possibility - indeed, towards a distinctive possibility of Dasein itself." One could argue that notions such as "potentiality" and "possibility" are distinctively temporal, nonspatial notions. So "Being-towards-death," as temporal, appears to be much more ontologically "fundamental" than spatiality.

However, Heidegger is not yet out of the woods. I believe that labelling the notions of anticipatory resoluteness, Being-towards-death, potentiality, and possibility specifically as temporal modes of being (to the exclusion of spatiality) begs the question. Given Heidegger's phenomenological framework, why assume that these notions are only temporal (without spatial dimensions)? If Being-towards-death, potentiality-for-Being, and possibility were "purely" temporal notions, what phenomenological sense can we make of such abstract conceptions, given that these are manifestly our modes of existence as bodily beings? Heidegger cannot have in mind such an abstract notion of time, if he wants to treat authentic temporality as the meaning of care. It would seem more consistent with his theoretical framework to say that Being-towards-death is a rich spatio-temporal mode of being, given that Dasein is Being-in-the-world.

Furthermore, the interpretation that defines resoluteness as uniquely temporal suggests too much of a voluntaristic or subjectivistic notion of the self that controls its own Being-in-the-world from the standpoint of its future. This would drive a wedge between the self and its Being-in-the-world, thereby creating a temporal "inner self" which can decide its own spatiality. However, if Dasein is Being-in-the-world as Heidegger claims, then all of Dasein's decisions should be viewed as concretely grounded in Being-in-the-world. If so, spatiality must be an essential constitutive element.

Hence, authentic temporality, if construed narrowly as a mode of temporality, at first appears able to found spatiality, but it commits Heidegger either to an account of time that is too abstract, or to a notion of the self far more like Sartre's than his own. What is lacking in Heidegger's theory, and what generates this sort of difficulty, is a developed conception of Dasein as a lived body - a notion more fully developed by Merleau-Ponty.

The elements of a more consistent interpretation of authentic temporality are present in Being and Time. This interpretation incorporates a view of "authentic spatiality" into the notion of authentic temporality. This would be Dasein's resolutely grasping its own spatio-temporal finitude with respect to its place and its world. Dasein is born in a particular place, lives in a particular place, and dies in a particular place, and it can authentically affirm its relation to these places as its own. The place Dasein lives is not a place of anonymous involvements. The place of Dasein must be there where its own potentiality-for-Being is realized. Dasein's place is thus a determination of its existence. Had Heidegger developed such a conception more fully, he would have seen that temporality is equiprimordial with thoroughly spatial and contextual Being-in-the-world. They are distinguishable but equally fundamental ways of emphasizing our finitude.

The internal tensions within his theory lead Heidegger to reconsider his own positions. In his later period, he explicitly develops what may be viewed as a conception of authentic spatiality. For instance, in "Building Dwelling Thinking," Heidegger states that Dasein's relations to locations and to spaces inhere in dwelling, and dwelling is the basic character of our Being. The notion of dwelling expresses an affirmation of spatial finitude. Through this affirmation one acquires a proper relation to one's environment.

But the idea of dwelling is in fact already present in Being and Time. Regarding the term "Being-in-the-world," Heidegger explains that the word "in" is derived from "innan" - "to reside," "habitare," "to dwell." The emphasis on "dwelling" highlights the essentially "worldly" character of the self.

Thus from the beginning Heidegger had a conception of spatial finitude, but this fundamental insight remained undeveloped because of his ambition to carry out the foundational project that favoured time. From the 1930s on, as Heidegger abandons the foundational project focussed on temporality, the conception of authentic spatiality comes to the fore. For example, in Discourse on Thinking Heidegger considers the spatial character of Being as "that-which-regions (die Gegnet)." This peculiar expression is a re-conceptualization of the notion of "region" as it appeared in Being and Time. Region is given an active character and defined as the "openness that surrounds us" which "comes to meet us." By giving it an active character, Heidegger wants to emphasize that region is not brought into being by us, but rather exists in its own right, as that which expresses our spatial existence. Heidegger states that "one needs to understand ‘resolve’ (Entschlossenheit) as it is understood in Being and Time: as the opening of man [Dasein] particularly undertaken by him for openness, . . . which we think of as that-which-regions." Here Heidegger is asserting an authentic conception of spatiality. The finitude expressed in the notion of Being-in-the-world is thus transformed, in the later writings, into an authentic recognition of our finite worldly existence.

The return to the conception of spatial finitude in the later period shows that Heidegger never abandoned the original insight behind his conception of Being-in-the-world. But once committed to this idea, it is hard to justify singling out an aspect of the self -temporality - as the foundation for the rest of the structure. All of the Existentiale and zuhanden modes, which constitute the whole of Being-in-the-world, are equiprimordial, each mode articulating different aspects of a unified whole. The preference for temporality as the privileged meaning of existence reflects the Kantian residue in Heidegger's early doctrine that he later rejected as still excessively subjectivistic.

Meanwhile, it seems natural to build on this close connection by proposing a deflationary, or "redundancy," account of self-consciousness: self-consciousness is simply the capacity to think “I”-thoughts, and once we have an account of what it is to be capable of thinking “I”-thoughts, we will have explained everything distinctive about self-consciousness. The account stems from the thought that what is distinctive about “I”-thoughts is that they are either themselves immune to error through misidentification or rest on further “I”-thoughts that are immune in that way.

Once we have an account of what it is to be capable of thinking thoughts that are immune to error through misidentification, we will have explained everything about the capacity to think “I”-thoughts. This claim derives from the thought that immunity to error through misidentification depends on the semantics of the “self.”

Once again, when we have an account of that semantics, we will have explained everything distinctive about the capacity to think thoughts that are immune to error through misidentification.

The suggestion is that the semantics of “self-ness” will explain what is distinctive about the capacity to think thoughts immune to error through misidentification. Semantics alone cannot be expected to explain the capacity for thinking thoughts. The point, rather, is that all there is to the capacity to think thoughts that are immune to error is the capacity to think the sort of thought whose natural linguistic expression involves the “self,” where this capacity is given by mastery of the semantics of “self-ness.” To explain what it is to master the semantics of “self-ness” is thereby to explain what it is to think thoughts immune to error through misidentification.

On this view, mastery of the semantics of “self-ness” may be construed as the single most important element in a theory of self-consciousness.

An obvious question to put to a defender of the redundancy or deflationary theory is how mastery of the semantics of “self-ness” can make sense of the distinction between “self” contents that are immune to error through misidentification and “self” contents that lack such immunity. This is only an apparent difficulty, however, once one remembers that the “self” contents lacking such immunity - those employing “I” as object - can be broken down into their component elements: an identification component and a predication component. It is the identification components of such judgements that mastery of the semantics of “self” content must be called upon to explain, and identification components are, of course, immune to error through misidentification.

It is also important to stress how the redundancy and deflationary theories of self-consciousness - indeed any theory that accords a serious role in self-consciousness to mastery of the semantics of the “self” - are motivated by an important principle that has governed much of the development of analytical philosophy. The principle is that the analysis of thought can only proceed through the philosophical analysis of language. We communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing its use; it is these principles, which relate to what is open to view in the use of language, that endow our sentences with the senses they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.

Still, at the core of the notion of broad self-consciousness is the recognition of what developmental psychologists call “self-world dualism.” Any subject properly described as self-conscious must be able to register the distinction between himself and the world; of course, this is a distinction that can be registered in a variety of ways. The capacity for self-ascription of thoughts and experiences, in combination with the capacity to understand the world as a spatially and causally structured system of mind-independent objects, is a high-level way of registering this distinction.

Consciousness of objects is closely related to sentience and to being awake. It is (at least) a state of being informationally and behaviourally responsive to one's immediate environmental surroundings. It is the ability, for example, to process and act responsively to information about food, friends, foes, and other items of relevance. One finds consciousness of objects in creatures much less complex than human beings. It is what we (at any rate first and primarily) have in mind when we say of some person or animal coming out of general anaesthesia, ‘It is regaining consciousness.’ Consciousness of objects is not just any form of informational access to the world, but the knowing about, and being conscious of, things in the world.

We are conscious of our representations when we are conscious, not (just) of some object, but of our representations of it: ‘I am seeing [as opposed to touching, smelling, tasting], and seeing clearly [as opposed to dimly].’ Consciousness of our own representations is the ability to process and act responsively to information about oneself, but it is not just any form of such informational access. It is knowing about, being conscious of, one's own psychological states. In Nagel's famous phrase (1974), when we are conscious of our representations, it is ‘like something’ to have them. If, as seems likely, there are forms of consciousness that do not involve consciousness of objects, they might consist in consciousness of representations, though some theorists would insist that this kind of consciousness is not of representations either (via representations, perhaps, but not of them).

The distinction just drawn between consciousness of objects and consciousness of our representations of objects may seem similar to Block's (1995) well-known distinction between P- [phenomenal] and A- [access] consciousness. Here is his definition of ‘A-consciousness’: "A state is A-conscious if it is poised for direct control of thought and action." He tells us that he cannot define ‘P-consciousness’ in any "remotely non-circular way" but will use it to refer to what he calls "experiential properties,” what it is like to have certain states. Our consciousness of objects may appear to be like A-consciousness. It is not, however; it is a form of P-consciousness. Consciousness of an object is - how else can we put it? - consciousness of the object. Even if consciousness is just informational access of a certain kind (something that Block would deny), it is not just any form of informational access, and we are talking about conscious access here. Recall the idea that it is like something to have a conscious state. Other closely related ideas are that in a conscious state, something appears to one, and that conscious states have a ‘felt quality’. A term for all this is phenomenology: conscious states have a phenomenology. (Thus some philosophers speak of phenomenal consciousness here.) We could now state the point we are trying to make this way: if I am conscious of an object, then it is like something to have that object as the content of a representation.

Some theorists would insist that this last statement be qualified. While such a representation of an object may provide everything that a representation has to have for its contents to be like something to me, they would urge, something more is needed. Different theorists would add different elements. For some, I would have to be aware, not just of the object, but of my representation of it. For others, I would have to attend to the object or the representation in a certain way. We cannot go into this controversy here. We are merely making the point that consciousness of objects is more than Block's A-consciousness.

Consciousness of self involves, not just consciousness of states that it is like something to have, but consciousness of the thing that has them, i.e., of oneself. It is the ability to process and act responsively to information about oneself, but again it is more than that. It is knowing about, being conscious of, oneself, indeed of oneself as oneself. Consciousness of oneself in this way is often called consciousness of self as the subject of experience. It seems to require an indexical ability, and a special indexical ability at that: not just an ability to pick something out, but an ability to pick something out as oneself. Human beings have such a self-referential indexical ability. Whether any other creatures have it is controversial. The leading nonhuman candidates would be chimpanzees and other primates that have been taught enough language to use first-person pronouns.

The literature on consciousness sometimes fails to distinguish consciousness of objects, consciousness of one's own representations, and consciousness of self, or treats one of the three, usually consciousness of one's own representations, as the whole of consciousness. (Some conscious states do not have objects, yet are not consciousness of a representation either. We cannot pursue that complication here.) The term ‘conscious’ and its cognates are ambiguous in everyday English. We speak of someone regaining consciousness - where we mean simple consciousness of the world. Yet we also say things like, ‘She was not conscious of what motivated her to say that’ - where we do not mean that she lacked either consciousness of the world or consciousness of self, but rather that she was not conscious of certain things about herself, specifically, certain of her own representational states. To understand the unity of consciousness, making these distinctions is important. The reason is this: the unity of consciousness takes a different form in consciousness of self than it takes in either consciousness of one's own representations or consciousness of objects.

So what is unified consciousness? As we said, the predominant form of the unity of consciousness is being aware of several things at the same time. Intuitively, this is the notion of several representations being aspects of a single encompassing conscious state. A more informative idea can be gleaned from the way philosophers have written about unified consciousness. Judging from what they have said, the central feature of unified consciousness is taken to be something like this: a group of representations related to one another such that to be conscious of any of them is to be conscious of others of them, and of the group of them as a single group.

In order for science to be rigorous, Husserl claimed, mind must ‘intend’ itself as subject and also all its ‘meanings.’ The task of philosophy, then, is to substantiate that science is in fact rigorous by clearly distinguishing, naming, and taxonomizing phenomena. What William James termed the stream of consciousness, Husserl dubbed the system of experience. Recognizing, as James did, that consciousness is continuous, Husserl eventually concluded that any single mental phenomenon is a moving horizon receding in all directions at once toward all other phenomena.

Interestingly enough, this created an epistemological dilemma that became pervasive in the history of postmodern philosophy. The dilemma is this: if mind ‘intends’ itself as subject, and objects within this mind are moving in all directions toward all other objects, how can any two minds objectively agree that they are referring to the same object? The followers of Husserl concluded that this was not possible; therefore, the prospect that two minds can objectively or inter-subjectively know the same truth is annihilated.

It is ironic that Husserl’s attempt to establish a rigorous basis for science in human consciousness served to reinforce Nietzsche’s claim that truths are evolving fictions that exist only in the subjective reality of single individuals. It also massively reinforced the stark Cartesian division between mind and world by seeming to legitimate the view that logic and mathematical systems reside only in human subjectivity and, therefore, that there is no real or necessary correspondence of physical theories with physical reality. These views would later be taken up by Ludwig Wittgenstein and Jean-Paul Sartre.

One of Nietzsche’s fundamental contentions was that traditional values (represented primarily by Christianity) had lost their power in the lives of individuals. He expressed this in his proclamation “God is dead.” He was convinced that traditional values represented a “slave morality,” a morality created by weak and resentful individuals, who encouraged such behaviour as gentleness and kindness because the behaviour served their interests.

By way of introducing Nietzsche’s writings, a few salient points should be made about what empowers Nietzsche as the “great critic” of that tradition, and about why his critique is potentially so powerful and so provocative.

Although we can identify in Nietzsche a decisive challenge to the past, from one point of view there is nothing remarkably new about what he is doing, even if his style of doing it is intriguing and distinctive. The methods of analysis and criticism he undertakes should feel quite familiar to us, for such forms of criticism had been faced before. The new possibility for our lives that he encourages has strong and obvious roots in certain forms of Romanticism. And even as he attacks the burden of tradition, he remains deeply connected to the very considerations that created that tradition.

The German philosopher Immanuel Kant tried to solve the crisis generated by Locke and brought to a climax by Hume; his proposed solution combined elements of rationalism with elements of empiricism. He agreed with the rationalists that one can have exact and certain knowledge, but he followed the empiricists in holding that such knowledge is more informative about the structure of thought than about the world outside thought. He distinguished three kinds of knowledge: analytic a priori, which is exact and certain but uninformative, because it makes clear only what is contained in definitions; synthetic a posteriori, which conveys information about the world learned from experience, but is subject to the errors of the senses; and synthetic a priori, which is discovered by pure intuition and is both exact and certain, for it expresses the necessary conditions that the mind imposes on all objects of experience. Mathematics and philosophy, according to Kant, provide this last. Since the time of Kant, one of the most frequently argued questions in philosophy has been whether or not synthetic a priori knowledge really exists.

Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to nearly all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.

Most philosophers since Plato have held that the highest ethical good is the same for everyone: insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, the first writer to call himself existential, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, “I must find a truth that is true for me . . . the idea for which I can live or die.” Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.

All existentialists have followed Kierkegaard in stressing the importance of passionate individual action in deciding questions of both morality and truth. They have insisted, accordingly, that personal experience and acting on one's own convictions are essential in arriving at the truth. Thus, the understanding of a situation by someone involved in that situation is superior to that of a detached, objective observer. This emphasis on the perspective of the individual agent has also made existentialists suspicious of systematic reasoning. Kierkegaard and other existentialist writers have been deliberately unsystematic in the exposition of their philosophies, preferring to express themselves in aphorisms, dialogues, parables, and other literary forms. Despite their antirationalist position, however, most existentialists cannot be said to be irrationalists in the sense of denying all validity to rational thought. They have held that rational clarity is desirable wherever possible, but that the most important questions in life are not accessible to reason or to the structures of scientific understanding; indeed, they have argued that even science is not as rational as is commonly supposed. Nietzsche, for instance, asserted that the scientific assumption of an orderly universe is for the most part a worthwhile rationalization.

Perhaps the most prominent theme in existentialist writing is that of choice. Humanity's primary distinction, in the view of most existentialists, is the freedom to choose. Existentialists have held that human beings do not have a fixed nature, or essence, as other animals and plants do: each human being makes choices that create his or her own nature. In the existentialist formula, existence precedes essence. Choice is therefore central to human existence, and it is inescapable; even the refusal to choose is a choice. Freedom of choice entails commitment and responsibility. Because individuals are free to choose their own path, existentialists have argued, they must accept the risk and responsibility of following their commitment wherever it leads.

Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God's way of calling each individual to commit to a personally valid way of life. The word anxiety (German Angst) has a similarly crucial role in the work of the 20th-century German philosopher Martin Heidegger: anxiety leads to the individual's confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual's recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.

Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries. However, elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many pre-modern philosophers and writers.

The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: the human self, which combines mind and body, is itself a paradox and contradiction.

Nineteenth-century Danish philosopher Søren Kierkegaard played a major role in the development of existentialist thought. Kierkegaard criticized the popular systematic method of rational philosophy advocated by the German philosopher Georg Wilhelm Friedrich Hegel. He emphasized the absurdity inherent in human life and questioned how any systematic philosophy could apply to the ambiguous human condition. In his deliberately unsystematic works, Kierkegaard urged that each individual should attempt an intense examination of his or her own existence.

Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual's response to this situation must be to live a totally committed life, and this commitment can be understood only by the individual who has made it. The individual must therefore always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a “leap of faith” into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.

Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as the German philosopher G.W.F. Hegel. Instead, Kierkegaard focused on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; trans. 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham prove his faith by sacrificing his son.

One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values, and lived according to his own morality. Nietzsche also advanced here his idea that “God is dead,” that is, that traditional morality was no longer relevant in people’s lives. In the book, the sage Zarathustra comes down from the mountain where he has spent the last ten years alone to preach to the people.

Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the “death of God” and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.

The “will,” in philosophy and psychology, is the capacity to choose among alternative courses of action and to act on the choice made, particularly when the action is directed toward a specific goal or is governed by definite ideals and principles of conduct. Willed behaviour contrasts with behaviour stemming from instinct, impulse, reflex, or habit, none of which involves conscious choice among alternatives. It also contrasts with the vacillations manifested by alternating choices among conflicting alternatives.

Until the 20th century most philosophers conceived of the will as a separate faculty with which every person is born. They differed, however, about the role of this faculty in the makeup of the personality. For one school of philosophers, most notably represented by the German philosopher Arthur Schopenhauer, universal will is the primary reality, and the individual's will forms part of it. In this view, the will dominates every other aspect of an individual's personality: knowledge, feelings, and direction in life. A contemporary form of Schopenhauer's theory is implicit in some forms of existentialism, such as the view expressed by the French philosopher Jean-Paul Sartre, which regards personality as expressed in action, and actions as manifestations of the will that gives meaning to the universe.

Most other philosophers have regarded the will as coequal with, or secondary to, other aspects of personality. Plato believed that the psyche is divided into three parts: reason, will, and desire. For rationalist philosophers, such as Aristotle, Thomas Aquinas, and René Descartes, the will is the agent of the rational soul in governing purely animal appetites and passions. Some empiricist philosophers, such as David Hume, discount the importance of rational influences upon the will; they think of the will as ruled mainly by emotion. Evolutionary philosophers, such as Herbert Spencer, and pragmatist philosophers, such as John Dewey, conceive of the will not as an innate faculty but as a product of experience, evolving gradually as the mind and personality of the individual develop in social interaction.

Modern psychologists tend to accept the pragmatic theory of the will. They regard the will as an aspect or quality of behaviour, rather than as a separate faculty. It is the whole person who wills. This act of willing is manifested by (1) the fixing of attention on distant goals and abstract standards and principles of conduct; (2) the weighing of alternative courses of action and the taking of deliberate action that seems best calculated to serve specific goals and principles; (3) the inhibition of impulses and habits that might distract attention from, or otherwise conflict with, a goal or principle; and (4) perseverance against deterrents and obstructions in the pursuit of goals or the adherence to principles.

The modern philosophy movements of phenomenology and existentialism have been greatly influenced by the thought of German philosopher Martin Heidegger. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of Being (Heidegger's term for that which underlies all existence).

Heidegger, like Pascal and Kierkegaard, reacted against any attempt to put philosophy on a conclusive rationalistic basis, as did Max Scheler (1874-1928), the German social and religious philosopher whose work reflected the influence of the phenomenology of his countryman Edmund Husserl. Born in Munich, Scheler taught at the universities of Jena, Munich, and Cologne. In The Nature of Sympathy (1913; trans. 1970), he applied Husserl's method of detailed phenomenological description to the social emotions that relate human beings to one another, especially love and hate. This book was followed by his most famous work, Formalism in Ethics and Non-Formal Ethics of Values (1913; trans. 1973), a two-volume study of ethics in which he criticized the formal ethical approach of the German philosopher Immanuel Kant and substituted for it a study of specific values as they directly present themselves to consciousness. Scheler converted to Roman Catholicism in 1920 and wrote On the Eternal in Man (1921; trans. 1960) to justify his conversion; there followed an important study of the sociology of knowledge, Die Wissensformen und die Gesellschaft (Forms of Knowledge and Society, 1926). Later he rejected Roman Catholicism and developed a philosophy, based on science, in which all abstract knowledge and religious values are considered sublimations of basic human drives. This position is presented in his last book, The Place of Man in the Universe (1928; trans. 1961).

Heidegger's thought developed out of the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible and indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on Being and ontology, as well as on language.

The subjects treated in Aristotle's Metaphysics (substance, causality, the nature of being, and the existence of God) fixed the content of metaphysical speculation for centuries. Among the medieval Scholastic philosophers, metaphysics was known as the “transphysical science,” on the assumption that, by means of it, the scholar could make the transition philosophically from the physical world to a world beyond sense perception. The 13th-century Scholastic philosopher and theologian St. Thomas Aquinas declared that the cognition of God, through a causal study of finite sensible beings, was the aim of metaphysics. With the rise of scientific study in the 16th century, the reconciliation of science and faith in God became an increasingly important problem.

The Irish-born philosopher and clergyman George Berkeley (1685-1753) argued that everything a human being can conceive of exists as an idea in a mind, a philosophical focus known as idealism. Berkeley reasoned that because one cannot control one’s thoughts, they must come directly from a larger mind: that of God. In A Treatise Concerning the Principles of Human Knowledge (1710), Berkeley explained why he believed that it is “impossible . . . that there should be any such thing as an outward object.”

Before the time of the German philosopher Immanuel Kant, metaphysics was characterized by a tendency to construct theories based on a priori knowledge, that is, knowledge derived from reason alone, in contradistinction to a posteriori knowledge, which is gained by reference to the facts of experience. From a priori knowledge were deduced general propositions held to be true of all things. The method of inquiry based on a priori principles is known as rationalistic. This method may be subdivided into monism, which holds that the universe is made up of a single fundamental substance; dualism, the belief in two such substances; and pluralism, which proposes the existence of many fundamental substances.

In the 5th and 4th centuries BC, Plato postulated the existence of a realm of Ideas that the varied objects of common experience imperfectly reflect. He maintained that these ideal Forms are not only more clearly intelligible but also more real than the transient and essentially illusory objects themselves.

George Berkeley is considered the founder of idealism, the philosophical view that all physical objects are dependent on the mind for their existence. According to Berkeley's early 18th-century writing, an object such as a table exists only if a mind is perceiving it. Therefore, objects are ideas.

Berkeley speculated that all aspects of everything of which one is conscious are reducible to the ideas present in the mind. The observer does not conjure external objects into existence, however; rather, the ideas of them are caused in the human mind directly by God. Eighteenth-century German philosopher Immanuel Kant greatly refined idealism through his critical inquiry into what he believed to be the limits of possible knowledge. Kant held that all that can be known of things is the way in which they appear in experience; there is no way of knowing what they are in themselves. He also held, however, that the fundamental principles of all science are essentially grounded in the constitution of the mind rather than derived from the external world.

George Berkeley argued, in short, that everything the human being conceives of exists as an idea within the mind, the philosophical focus known as idealism.

Trying to develop an all-encompassing philosophical system, German philosopher Georg Wilhelm Friedrich Hegel wrote on topics ranging from logic and history to art and literature. He considered art to be one of the supreme developments of spiritual and absolute knowledge, surpassed only by religion and philosophy. In an excerpt from his Introductory Lectures on Aesthetics, based on lectures that Hegel delivered between 1820 and 1829, he discussed the relationship of poetry to other arts, particularly music, and explained that poetry was one mode of expressing the “Idea of beauty” that he believed resided in all art forms. For Hegel, poetry was “the universal realization of the art of the mind.”

Nineteenth-century German philosopher Georg Wilhelm Friedrich Hegel disagreed with Kant's theory concerning the inescapable human ignorance of what things are in themselves, arguing instead for the ultimate intelligibility of all existence. Hegel also maintained that the highest achievements of the human spirit (culture, science, religion, and the state) are not the result of naturally determined processes in the mind, but are conceived and sustained by the dialectical activity of free, reflective intellect.

Hegel applied the term dialectic to his philosophic system. Hegel believed that the evolution of ideas occurs through a dialectical process: a concept gives rise to its opposite, and out of this conflict a third view, the synthesis, arises. The synthesis is at a higher level of truth than the first two views. Hegel's work is based on the idealistic concept of a universal mind that, through evolution, seeks to arrive at the highest level of self-awareness and freedom.

German political philosopher Karl Marx applied the concept of dialectic to social and economic processes. Marx's so-called dialectical materialism is frequently considered a revision of the Hegelian dialectic of free, reflective intellect. Additional strains of idealistic thought can be found in the works of 19th-century Germans Johann Gottlieb Fichte and F.W.J. Schelling, 19th-century Englishman F.H. Bradley, 19th-century Americans Charles Sanders Peirce and Josiah Royce, and 20th-century Italian Benedetto Croce.

The monists, agreeing that only one basic substance exists, differ in their descriptions of its principal characteristic. Thus, in idealistic monism the substance is believed to be purely mental; in materialistic monism it is held to be purely physical, and in neutral monism it is considered neither exclusively mental nor solely physical. The idealistic position was held by the Irish philosopher George Berkeley, the materialistic by the English philosopher Thomas Hobbes, and the neutral by the Dutch philosopher Baruch Spinoza. The latter expounded a pantheistic view of reality in which the universe is identical with God and everything contains God's substance.

George Berkeley set out to challenge what he saw as the atheism and skepticism inherent in the prevailing philosophy of the early 18th century. His initial publications, which asserted that no objects or matter existed outside the human mind, were met with disdain by the London intelligentsia of the day. Berkeley aimed to explain his “Immaterialist” theory, part of the school of thought known as idealism, to a more general audience in Three Dialogues between Hylas and Philonous (1713).

The most famous exponent of dualism was the French philosopher René Descartes, who maintained that body and mind are radically different entities and that they are the only fundamental substances in the universe. Dualism, however, does not show how these basic entities are connected.

In the work of the German philosopher Gottfried Wilhelm Leibniz, the universe is held to consist of many distinct substances, or monads. This view is pluralistic in the sense that it proposes the existence of many separate entities, and it is monistic in its assertion that each monad reflects within itself the entire universe.

Other philosophers have held that knowledge of reality is not derived from a priori principles, but is obtained only from experience. This type of metaphysics is called empiricism. Still another school of philosophy has maintained that, although an ultimate reality does exist, it is altogether inaccessible to human knowledge, which is necessarily subjective because it is confined to states of mind. Knowledge is therefore not a representation of external reality, but merely a reflection of human perceptions. This view is known as skepticism or, with respect to the soul and the reality of God, agnosticism.

Immanuel Kant published his Critique of Pure Reason in 1781. Three years later he expanded on his study of the modes of thinking with an essay entitled “What Is Enlightenment?” In this 1784 essay, Kant challenged readers to “dare to know,” arguing that it was not only a civic but also a moral duty to exercise the fundamental freedoms of thought and expression.

Several major viewpoints were combined in the work of Kant, who developed a distinctive critical philosophy called transcendentalism. His philosophy is agnostic in that it denies the possibility of a strict knowledge of ultimate reality; it is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the a priori character of the structural principles of this empirical knowledge.

These principles are held to be necessary and universal in their application to experience, for in Kant's view the mind furnishes the archetypal forms and categories (space, time, causality, substance, and relation) to its sensations, and these categories are logically anterior to experience, although manifested only in experience. It is their logical anteriority to experience that makes these categories or structural principles transcendental; they transcend all experience, both actual and possible. Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only insofar as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate beyond experience to the realm of ultimate reality constitutes the critical feature of his philosophy, giving the keyword to the titles of his three leading treatises, Critique of Pure Reason, Critique of Practical Reason, and Critique of Judgment. In the system propounded in these works, Kant sought also to reconcile science and religion in a world of two levels, comprising noumena, objects conceived by reason although not perceived by the senses, and phenomena, things as they appear to the senses and are accessible to material study. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge.
With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.

Some of Kant's most distinguished followers, notably Johann Gottlieb Fichte, Friedrich Schelling, Georg Wilhelm Friedrich Hegel, and Friedrich Schleiermacher, negated Kant's criticism in their elaborations of his transcendental metaphysics by denying the Kantian conception of the thing-in-itself. They thus developed an absolute idealism opposing Kant's critical transcendentalism.

Since the formation of the hypothesis of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant's contention that he had fixed definitely the limits of philosophical speculation. Notable among these later metaphysical theories is radical empiricism, or pragmatism, a native American form of metaphysics expounded by Charles Sanders Peirce, developed by William James, and adapted as instrumentalism by John Dewey; voluntarism, the foremost exponents of which are the German philosopher Arthur Schopenhauer and the American philosopher Josiah Royce; phenomenalism, as it is exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent evolution, or creative evolution, originated by the French philosopher Henri Bergson; and the philosophy of the organism, elaborated by the British mathematician and philosopher Alfred North Whitehead. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief; according to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. In voluntarism, the will is postulated as the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analysed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analysed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined.
The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and creativity.

In the 20th century the validity of metaphysical thinking has been disputed by the logical positivists and by the so-called dialectical materialism of the Marxists. The basic principle maintained by the logical positivists is the verifiability theory of meaning. According to this theory, a sentence has factual meaning only if it meets the test of observation. Logical positivists argue that metaphysical expressions such as “Nothing exists except material particles” and “Everything is part of one all-encompassing spirit” cannot be tested empirically. Therefore, according to the verifiability theory of meaning, these expressions have no factual cognitive meaning, although they can have an emotive meaning about human hopes and feelings.

The dialectical materialists assert that the mind is conditioned by and reflects material reality. Therefore, speculations that conceive of constructs of the mind as having any other than material reality are themselves unreal and can result only in delusion. To these assertions metaphysicians reply by denying the adequacy of the verifiability theory of meaning and of material perception as the standard of reality. Both logical positivism and dialectical materialism, they argue, conceal metaphysical assumptions, for example, that everything is observable, or at least connected with something observable, and that the mind has no distinctive life of its own. In the philosophical movement known as existentialism, thinkers have contended that the questions of the nature of being and of the individual's relationship to it are extremely important and meaningful in terms of human life. The investigation of these questions is therefore considered valid regardless of whether or not its results can be verified objectively.

Since the 1950s the problems of systematic analytical metaphysics have been studied in Britain by Stuart Newton Hampshire and Peter Frederick Strawson, the former concerned, in the manner of Spinoza, with the relationship between thought and action, and the latter, in the manner of Kant, with describing the major categories of experience as they are embedded in language. In the United States, metaphysics has been pursued much in the spirit of positivism by Wilfrid Stalker Sellars and Willard Van Orman Quine. Sellars has sought to express metaphysical questions in linguistic terms, and Quine has attempted to determine whether the structure of language commits the philosopher to asserting the existence of any entities whatever and, if so, what kind. In these new formulations the issues of metaphysics and ontology remain vital.

Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre’s work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that “man is condemned to be free,” Sartre reminds us of the responsibility that accompanies human decisions.

Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a “futile passion.” Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history. For Heidegger, by contrast, because one is what one does in the world, a phenomenological reduction to one's own private experience is impossible; and because human action consists of a direct grasp of objects, it is not necessary to posit a special mental entity called a meaning to account for intentionality. For Heidegger, being thrown into the world among things in the act of realizing projects is a more fundamental kind of intentionality than that revealed in merely staring at or thinking about objects, and it is this more fundamental intentionality that makes possible the directedness analysed by Husserl.

In the mid-1900s, French existentialist Jean-Paul Sartre attempted to adapt Heidegger's phenomenology to the philosophy of consciousness, in effect returning to the approach of Husserl. Sartre agreed with Husserl that consciousness is always directed at objects but criticized his claim that such directedness is possible only by means of special mental entities called meanings. The French philosopher Maurice Merleau-Ponty rejected Sartre's view that phenomenological description reveals human beings to be pure, isolated, and free consciousnesses. He stressed the role of the active, involved body in all human knowledge, thus generalizing Heidegger's insights to include the analysis of perception. Like Heidegger and Sartre, Merleau-Ponty is an existential phenomenologist, in that he denies the possibility of bracketing existence.

In the treatise Being and Nothingness, French writer Jean-Paul Sartre presents his existential philosophical framework. He reasons that the essential nothingness of human existence leaves individuals solely responsible for their own actions. Shunning the morality and constraints of society, individuals must embrace personal responsibility to craft a world for themselves. Along with stressing the importance of exercising individual responsibility, Sartre holds that the recognition of freedom of choice is the only means of authenticating human existence. A novelist and playwright as well as a philosopher, Sartre became a leader of the modern existentialist movement.

Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially the conviction that a personal sense of authenticity and commitment is essential to religious faith.

Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels that probed the motivations and moral justifications for his characters’ actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky’s best work, interlaces religious exploration with the story of a family’s violent quarrels over a woman and a disputed inheritance.

Twentieth-century writer and philosopher Albert Camus examined what he considered the tragic inability of human beings to understand and transcend their intolerable conditions. In his work Camus presented an absurd and seemingly unreasonable world in which some people futilely struggle to find meaning and rationality while others simply refuse to care. For example, the main character of The Stranger (1942) kills a man on a beach for no reason and accepts his arrest and punishment with dispassion. In contrast, in The Plague (1947), Camus introduces characters who act with courage in the face of absurdity.

Several existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), “We must love life more than the meaning of it.”

The opening lines of Russian novelist Fyodor Dostoyevsky’s Notes from Underground (1864), “I am a sick man . . . I am a spiteful man,” are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground marks Dostoyevsky’s rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader’s sense of morality as well as the foundations of rational thinking.

In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; translated 1937) and The Castle (1926; translated 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theatre of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse; traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.

Nietzsche’s concept has often been interpreted as postulating a master-slave society and has been identified with totalitarian philosophies. Many scholars deny the connection and attribute it to misinterpretation of Nietzsche’s work.

Nietzsche undertook to characterize a method of analysis and criticism that should feel quite familiar, since the forms of familiarity it draws on are basic to the contexts in which questions of representation had previously been faced. He encourages, as a new possibility for our lives, a program that has strong and obvious roots in certain forms of Romanticism. Nietzsche thus bears the burden of tradition even in resisting it, for he remains deeply connected to the categorical priorities and considerations that tradition makes available.

Kant tried to resolve the crisis generated by Locke and brought to a climax by Hume; his proposed solution combined elements of rationalism with elements of empiricism. He agreed with the rationalists that one can have exact and certain knowledge, but he followed the empiricists in holding that such knowledge is more informative about the structure of thought than about the world outside thought.

During the 19th century, the German philosopher Georg Wilhelm Friedrich Hegel revived the rationalist claim that absolutely certain knowledge of reality can be obtained by equating the processes of thought, of nature, and of history. Hegel inspired an interest in history and a historical approach to knowledge that was emphasized by Herbert Spencer in Britain and by the German school of historicism. Spencer and the French philosopher Auguste Comte brought attention to the importance of sociology as a branch of knowledge, and both extended the principles of empiricism to the study of society.

The American school of pragmatism, founded by the philosophers Charles Sanders Peirce, William James, and John Dewey at the turn of the 20th century, carried empiricism further by maintaining that knowledge is an instrument of action and that all beliefs should be judged by their usefulness as rules for predicting experiences.

In the early 20th century, epistemological problems were discussed thoroughly, and subtle shades of difference grew into rival schools of thought. Special attention was given to the relation between the act of perceiving something, the object directly perceived, and the thing that can be said to be known as a result of the perception. The phenomenalists contended that the objects of knowledge are the same as the objects perceived. The neorealists argued that one has direct perceptions of physical objects or parts of physical objects, rather than of one's own mental states. The critical realists took a middle position, holding that although one perceives only sensory data such as colours and sounds, these stand for physical objects and provide knowledge of them.

A method for dealing with the problem of clarifying the relation between the act of knowing and the object known was developed by the German philosopher Edmund Husserl. He outlined an elaborate procedure that he called phenomenology, by which one is said to be able to distinguish the way things appear to be from the way one thinks they really are, thus gaining a more precise understanding of the conceptual foundations of knowledge.

During the second quarter of the 20th century, two schools of thought emerged, each indebted to the Austrian philosopher Ludwig Wittgenstein. The first of these schools, logical empiricism, or logical positivism, had its origins in Vienna, Austria, but it soon spread to England and the United States. The logical empiricists insisted that there is only one kind of knowledge, scientific knowledge; that any valid knowledge claim must be verifiable in experience; and hence that much that had passed for philosophy was neither true nor false but literally meaningless. Finally, following Hume and Kant, they insisted that a clear distinction be maintained between analytic and synthetic statements. The so-called verifiability criterion of meaning has undergone changes as a result of discussions among the logical empiricists themselves and their critics, but it has not been discarded. More recently, the sharp distinction between the analytic and the synthetic has been attacked by a number of philosophers, chiefly the American philosopher W. V. O. Quine, whose overall approach is in the pragmatic tradition.

The second of these schools of thought, generally called linguistic analysis, or ordinary-language philosophy, seems to break with traditional epistemology. The linguistic analysts undertake to examine the actual way key epistemological terms are used - terms such as knowledge, perception, and probability - and to formulate definitive rules for their use in order to avoid verbal confusion.

John Austin (1911-1960) was a British philosopher and a prominent figure in 20th-century analytic and linguistic philosophy. Born in Lancaster, England, he was educated at the University of Oxford. After serving in British intelligence during World War II (1939-1945), he returned to Oxford and taught philosophy until his death.

Austin viewed the fundamental philosophical task to be that of analysing and clarifying ordinary language. He considered attention to the distinctions drawn in ordinary language the most fruitful starting point for philosophical inquiry. Austin's linguistic work led to many influential concepts, such as speech-act theory. This arose from his observation that many utterances do not merely describe reality but also affect reality; they are the performance of some act rather than a report of its performance. Austin came to believe that all language is made up of speech acts and is, in this sense, performative. Seven of his essays were published during his lifetime. Posthumously published works include Philosophical Papers (1961), Sense and Sensibilia (1962), and How to Do Things with Words (1962).

Thomas Hill Green (1836-1882) was a British philosopher and educator who led the revolt against empiricism, the dominant philosophy in Britain during the latter part of the 19th century. He was born in Birkin, Yorkshire, England, and educated at Rugby and the University of Oxford. He taught at Oxford from 1860 until his death, initially as a fellow and after 1878 as Whyte Professor of Moral Philosophy.

A disciple of the German philosopher Georg Wilhelm Friedrich Hegel, Green insisted that consciousness provides the necessary basis for both knowledge and morality. He argued that a person's highest good is self-realization and that the individual can achieve self-realization only in society. Society has an obligation, in turn, to provide for the good of all its members. The political implications of his philosophy laid the basis for sweeping social-reform legislation in Britain. Besides being the most influential British philosopher of his time, Green was a vigorous champion of popular education, temperance, and political liberalism. His writings include Prolegomena to Ethics (1883) and Lectures on the Principles of Political Obligation (1895), both published posthumously.

The outcome of this crisis in economic and social thinking was the development of positive liberalism. As noted, certain modern liberals, like the Austrian-born economist Friedrich August von Hayek, consider this positive conception a betrayal of liberal ideals. Others, such as the British philosophers Thomas Hill Green and Bernard Bosanquet, known as the “Oxford Idealists,” devised a so-called organic liberalism designed to “hinder hindrances” to the good life. Green and Bosanquet advocated positive state action to promote self-fulfilment, that is, to prevent economic monopoly, abolish poverty, and secure people against the disabilities of sickness, unemployment, and old age. This positive liberalism developed alongside the extension of democracy.

Most of the philosophical discussions of consciousness arose from the mind-body issues posed by René Descartes in the 17th century. Descartes asked: Is the mind, or consciousness, independent of matter? Is consciousness extended (physical) or unextended (nonphysical)? Is consciousness determinative, or is it determined? English philosophers such as John Locke equated consciousness with physical sensations and the information they provide, whereas European philosophers such as Gottfried Wilhelm Leibniz and Immanuel Kant gave a more central and active role to consciousness.

The philosopher who most directly influenced subsequent exploration of the subject of consciousness was the 19th-century German educator Johann Friedrich Herbart, who wrote that ideas had quality and intensity and that they may inhibit or facilitate one another. Thus, ideas may pass from “states of reality” (consciousness) to “states of tendency” (unconsciousness), with the dividing line between the two states being described as the threshold of consciousness. This formulation of Herbart clearly presages the development, by the German psychologist and physiologist Gustav Theodor Fechner, of the psychophysical measurement of sensation thresholds, and the later development by Sigmund Freud of the concept of the unconscious.

No simple, agreed-upon definition of consciousness exists. Attempted definitions tend to be tautological (for example, consciousness defined as awareness) or merely descriptive (for example, consciousness described as sensations, thoughts, or feelings). Despite this problem of definition, the subject of consciousness has had a remarkable history. At one time the primary subject matter of psychology, consciousness as an area of study suffered an almost total demise, later reemerging to become a topic of current interest.

The experimental analysis of consciousness dates from 1879, when the German psychologist Wilhelm Max Wundt started his research laboratory. For Wundt, the task of psychology was the study of the structure of consciousness, which extended well beyond sensations and included feelings, images, memory, attention, duration, and movement. Because early interest focussed on the content and dynamics of consciousness, it is not surprising that the central methodology of such studies was introspection; that is, subjects reported on the mental contents of their own consciousness. This introspective approach was developed most fully by the American psychologist Edward Bradford Titchener at Cornell University. Setting his task as that of describing the structure of the mind, Titchener attempted to detail, from introspective reports, the dimensions of the elements of consciousness. For example, taste was “dimensionalized” into four basic categories: sweet, sour, salt, and bitter. This approach was known as structuralism.

By the 1920s, however, a remarkable revolution had occurred in psychology that was essentially to remove considerations of consciousness from psychological research for some fifty years: behaviourism captured the field of psychology. The main initiator of this movement was the American psychologist John Broadus Watson. In a 1913 article, Watson stated, “I believe that we can write a psychology and never use the terms consciousness, mental states, mind . . . imagery and the like.” Psychologists then turned almost exclusively to behaviour, described in terms of stimulus and response, and consciousness was totally bypassed as a subject. A survey of eight leading introductory psychology texts published between 1930 and the 1950s found no mention of the topic of consciousness in five texts, and in two it was treated as a historical curiosity.

In the 1950s, however, interest in the subject of consciousness returned, specifically interest in those subjects and techniques relating to altered states of consciousness, such as sleep and dreams, meditation, biofeedback, hypnosis, and drug-induced states. An increase in sleep and dream research was directly fuelled by a discovery relevant to the nature of consciousness: a physiological indicator of the dream state was found. At roughly 90-minute intervals, the eyes of sleepers were observed to move rapidly, while the sleepers' brain waves showed a pattern resembling the waking state. When people were awakened during these periods of rapid eye movement, they usually reported dreams, whereas if awakened at other times they did not. This and other research clearly suggested that sleep, once considered a passive state, is instead an active state of consciousness.

During the 1960s, an increased search for “higher levels” of consciousness through meditation resulted in a growing interest in the practices of Zen Buddhism and Yoga from Eastern cultures. A full flowering of this movement in the United States was seen in the development of training programs, such as Transcendental Meditation, that used directed procedures of physical relaxation and focussed attention. Biofeedback techniques also were developed to bring body systems involving factors such as blood pressure or temperature under voluntary control by providing feedback from the body, so that subjects could learn to control their responses. For example, researchers found that persons could control their brain-wave patterns to some extent, particularly the so-called alpha rhythms generally associated with a relaxed, meditative state. This finding was of particular interest to those interested in consciousness and meditation, and several “alpha training” programs emerged.

Another subject that led to increased interest in altered states of consciousness was hypnosis, which involves a transfer of conscious control from one person to another. Hypnotism has had a long and intricate history in medicine and folklore and has been intensively studied by psychologists. Much has become known about the hypnotic state, as related to individual suggestibility and personality traits; the subject has now been largely demythologized, and the limitations of the hypnotic state are well known. Despite the increasing use of hypnosis, however, much remains to be learned about this unusual state of focussed attention.

Many people in the 1960s experimented with the psychoactive drugs known as hallucinogens, which produce distortions of conscious awareness. The most prominent of these drugs are lysergic acid diethylamide (LSD), mescaline, and psilocybin; the latter two have long been associated with religious ceremonies in various cultures. LSD, because of its radical thought-modifying properties, was initially explored for its so-called mind-expanding potential and for its psychotomimetic effects (imitating psychoses). Little positive use, however, has been found for these drugs.

In recent decades, as the assumption of an orderly but simple linkage between environment and behaviour has become unsatisfactory, interest in altered states of consciousness may be taken as a visible sign of renewed interest in the topic of consciousness itself. That persons are active and intervening participants in their behaviour has become increasingly clear. Environments, rewards, and punishments are not simply defined by their physical character. Memories are organized, not simply stored. An entirely new field called cognitive psychology has emerged that centres on these concerns. In the study of children, increased attention is being paid to how they understand, or perceive, the world at different ages. In the field of animal behaviour, researchers increasingly emphasize the inherent characteristics resulting from the way a species has been shaped to respond adaptively to the environment. Humanistic psychologists, with a concern for self-actualization and growth, have emerged after a long period of silence. Throughout the development of clinical and industrial psychology, the conscious states of persons, their current feelings and thoughts, have been important. The role of consciousness, however, was often de-emphasised in favour of unconscious needs and motivations. Trends can now be seen toward a new emphasis on the nature of states of consciousness.

Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century the American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.

Thirteenth-century Italian philosopher and theologian Saint Thomas Aquinas attempted to synthesize Christian belief with a broad range of human knowledge, embracing diverse sources such as the Greek philosopher Aristotle and Islamic and Jewish scholars. His thought exerted lasting influence on the development of Christian theology and Western philosophy. Author Anthony Kenny examines the complexities of Aquinas’s concepts of substance and accident.

In the 5th century BC, the Greek Sophists questioned the possibility of reliable and objective knowledge. Thus, a leading Sophist, Gorgias, argued that nothing really exists, that if anything did exist it could not be known, and that if knowledge were possible, it could not be communicated. Another prominent Sophist, Protagoras, maintained that no person's opinions can be said to be more correct than another's, because each person is the sole judge of his or her own experience. Plato, following his illustrious teacher Socrates, tried to answer the Sophists by postulating the existence of a world of unchanging and invisible forms, or ideas, about which exact and certain knowledge is possible. The things one sees and touches, he maintained, are imperfect copies of the pure forms studied in mathematics and philosophy. Accordingly, only the abstract reasoning of these disciplines yields genuine knowledge, whereas reliance on sense perception produces vague and inconsistent opinions. Plato concluded that philosophical contemplation of the unseen world of forms is the highest goal of human life.

Aristotle followed Plato in regarding abstract knowledge as superior to any other, but disagreed with him as to the proper method of achieving it. Aristotle maintained that most knowledge is derived from experience. Knowledge is gained either directly, by abstracting the defining traits of a species, or indirectly, by deducing new facts from those already known, according to the rules of logic. Careful observation and strict adherence to the rules of logic, which were first set down in systematic form by Aristotle, would help guard against the pitfalls the Sophists had exposed. The Stoic and Epicurean schools agreed with Aristotle that knowledge originates in sense perception, but against both Aristotle and Plato they maintained that philosophy is to be valued as a practical guide to life, rather than as an end in itself.

After many centuries of declining interest in rational and scientific knowledge, the Scholastic philosopher Saint Thomas Aquinas and other philosophers of the Middle Ages helped to restore confidence in reason and experience, blending rational methods with faith into a unified system of beliefs. Aquinas followed Aristotle in regarding perception as the starting point and logic as the intellectual procedure for arriving at reliable knowledge of nature, but he considered faith in scriptural authority as the main source of religious belief.

From the 17th to the late 19th century, the main issue in epistemology was reasoning versus sense perception in acquiring knowledge. For the rationalists, of whom the French philosopher René Descartes, the Dutch philosopher Baruch Spinoza, and the German philosopher Gottfried Wilhelm Leibniz were the leaders, the main source and final test of knowledge was deductive reasoning based on evident principles, or axioms. For the empiricists, beginning with the English philosophers Francis Bacon and John Locke, the main source and final test of knowledge was sense perception.

French thinker René Descartes applied rigorous scientific methods of deduction to his exploration of philosophical questions. Descartes is probably best known for his pioneering work in philosophical skepticism. Author Tom Sorell examines the concepts behind Descartes’s work Meditationes de Prima Philosophia (1641, Meditations on First Philosophy), focussing on its distinctive use of logic and the reactions it aroused.

Bacon inaugurated the new era of modern science by criticizing the medieval reliance on tradition and authority and by setting down new rules of scientific method, including the first set of rules of inductive logic ever formulated. Locke attacked the rationalist belief that the principles of knowledge are intuitively evident, arguing that all knowledge is derived from experience, either from experience of the external world, which stamps sensations on the mind, or from internal experience, in which the mind reflects on its own activities. Human knowledge of external physical objects, he claimed, is always subject to the errors of the senses, and he concluded that one cannot have absolutely certain knowledge of the physical world.

George Berkeley agreed with Locke that knowledge comes through ideas. Locke had retained the belief that some of our ideas (those of primary qualities) give us an adequate representation of the world around us, and he was concerned above all with the various sources of knowledge and with the limits and capacities of our minds; it was through this concern that Locke connected his epistemology with the defence of religious toleration. Berkeley, however, denied Locke's belief that a distinction can be made between ideas and objects. The British philosopher David Hume continued the empiricist tradition, but he did not accept Berkeley's conclusion that knowledge was of ideas only. He divided all knowledge into two kinds: knowledge of relations of ideas, that is, the knowledge found in mathematics and logic, which is exact and certain but provides no information about the world; and knowledge of matters of fact, that is, the knowledge derived from sense perception. Hume argued that most knowledge of matters of fact depends upon cause and effect, and since no logical connection exists between any given cause and its effect, one cannot hope to know any future matter of fact with certainty. Thus, the most reliable laws of science might not remain true, a conclusion that had a revolutionary impact on philosophy.

Berkeley set out to challenge what he saw as the atheism and skepticism inherent in the prevailing philosophy of the early 18th century. His initial publications, which asserted that no objects or matter existed outside the human mind, were met with disdain by the London intelligentsia of the day. Berkeley aimed to explain his “immaterialist” theory, part of the school of thought known as idealism.

Immanuel Kant tried to resolve the crisis precipitated by Locke and brought to a climax by Hume; his proposed solution combined elements of rationalism with elements of empiricism. He agreed with the rationalists that one can have exact and certain knowledge, but he followed the empiricists in holding that such knowledge is more informative about the structure of thought than about the world outside thought. Since the time of Kant, one of the most frequently argued questions in philosophy has been whether or not synthetic a priori knowledge really exists.

During the 19th century, the German philosopher Georg Wilhelm Friedrich Hegel revived the rationalist claim that absolutely certain knowledge of reality can be obtained by equating the processes of thought, of nature, and of history. Hegel inspired an interest in history and a historical approach to knowledge, emphasized by Herbert Spencer in Britain and by the German school of historicism. Spencer and the French philosopher Auguste Comte brought attention to the importance of sociology as a branch of knowledge, and both extended the principles of empiricism to the study of society.

In the early 20th century, epistemological problems were discussed thoroughly, and subtle shades of difference grew into rival schools of thought. Special attention was given to the relation between the act of perceiving something, the object directly perceived, and the thing that can be said to be known because of the perception. The phenomenalists contended that the objects of knowledge are the same as the objects perceived. The neorealists argued that one has direct perceptions of physical objects or parts of physical objects, rather than merely of one's own mental states. The critical realists took a middle position, holding that although one perceives only sensory data such as colours and sounds, these stand for physical objects and provide knowledge of them.

During the second quarter of the 20th century, two schools of thought emerged, each indebted to the Austrian philosopher Ludwig Wittgenstein. The first of these schools, logical empiricism, or logical positivism, had its origins in Vienna, Austria, but it soon spread to England and the United States. The logical empiricists insisted that there is only one kind of knowledge - scientific knowledge; that any valid knowledge claim must be verifiable in experience; and, consequently, that much that had passed for philosophy was neither true nor false but literally meaningless. Finally, following Hume and Kant, they insisted that a clear distinction be maintained between analytic and synthetic statements.

The second of these schools, generally called linguistic analysis, or ordinary language philosophy, seems to break with traditional epistemology. The linguistic analysts undertake to examine the actual way major epistemological terms are used - terms such as knowledge, perception, and probability - and to formulate definitive rules for their use to avoid verbal confusion. British philosopher John Langshaw Austin argued, for example, that to say a statement is true adds nothing to the statement except a promise by the speaker or writer; Austin did not consider truth a quality or property attaching to statements or utterances.

Positivism is a system of philosophy based on experience and empirical knowledge of natural phenomena, in which metaphysics and theology are regarded as inadequate and imperfect systems of knowledge.

The doctrine was first called positivism by the 19th-century French mathematician and philosopher Auguste Comte, but some positivist concepts may be traced to the British philosopher David Hume, the French social reformer Claude Henri de Saint-Simon, and Immanuel Kant.

The keystone of Kant's philosophy, sometimes called critical philosophy, is contained in his Critique of Pure Reason (1781), in which he examined the bases of human knowledge and created an individual epistemology. Like earlier philosophers, Kant differentiated modes of thinking into analytic and synthetic propositions. An analytic proposition is one in which the predicate is contained in the subject, as in the statement “Black houses are houses.” The truth of this type of proposition is evident, because to state the reverse would be to make the proposition self-contradictory. Such propositions are called analytic because truth is discovered by the analysis of the concept itself. Synthetic propositions, on the other hand, are those that cannot be arrived at by pure analysis, as in the statement “The house is black.” All the common propositions that result from experience of the world are synthetic.

Propositions, according to Kant, can also be divided into two other types, a posteriori and a priori. An a posteriori proposition depends entirely on sense perception, but an a priori proposition has a fundamental validity and is not based on such perception. The difference between these two types of propositions may be illustrated by the a posteriori proposition “The house is black” and the a priori proposition “Two plus two makes four.” Kant's thesis in the Critique is that synthetic a priori judgments are possible. This philosophical position is usually known as transcendentalism. In describing how this type of judgment is possible, Kant regarded the objects of the material world as fundamentally unknowable; from the point of view of reason, they serve merely as the raw material from which sensations are formed. Objects in themselves have no existence, and space and time exist only as part of the mind, as “intuitions” by which perceptions are measured and judged.

Besides these intuitions, Kant stated that several a priori concepts, which he called categories, also exist. He divided the categories into four groups: those concerning quantity, which are unity, plurality, and totality; those concerning quality, which are reality, negation, and limitation; those concerning relation, which are substance-and-accident, cause-and-effect, and reciprocity; and those concerning modality, which are possibility, existence, and necessity. The intuitions and the categories can be applied to make judgments about experiences and perceptions, but cannot, according to Kant, be applied to abstract ideas such as freedom and existence without leading to inconsistencies in the form of pairs of contradictory propositions, or “antinomies,” in which both members of each pair can be proved true.

In the Metaphysics of Ethics (1797) Kant described his ethical system, which is based on a belief that reason is the final authority for morality. Actions of any sort, he believed, must be undertaken from a sense of duty dictated by reason, and no action performed for expediency or solely in obedience to law or custom can be regarded as moral. Kant described two types of commands given by reason: the hypothetical imperative, which dictates a given course of action to reach a specific end, and the categorical imperative, which dictates a course of action that must be followed because of its rightness and necessity. The categorical imperative is the basis of morality and was stated by Kant in these words: “Act as if the maxim of your action were to become through your will a general natural law.”

Kant's ethical ideas are a logical outcome of his belief in the fundamental freedom of the individual as stated in his Critique of Practical Reason (1788). This freedom he did not regard as the lawless freedom of anarchy, but as the freedom of self-government, the freedom to obey consciously the laws of the universe as revealed by reason. He believed that the welfare of each individual should properly be regarded as an end in itself and that the world was progressing toward an ideal society in which reason would “bind every law giver to make his laws in such a manner that they could have sprung from the united will of an entire people, and to regard every subject, in so far as he wishes to be a citizen, on the basis of whether he has conformed to that will.” In his treatise Perpetual Peace (1795) Kant advocated the establishment of a world federation of republican states.

Kant had a greater influence than any other philosopher of modern times. Kantian philosophy, particularly as developed by the German philosopher Georg Wilhelm Friedrich Hegel, was the basis on which the structure of Marxism was built; Hegel's dialectical method, which was used by Karl Marx, was an outgrowth of the method of reasoning by “antinomies” that Kant used. The German philosopher Johann Fichte, Kant's pupil, rejected his teacher's division of the world into objective and subjective parts and developed an idealistic philosophy that also had great influence on 19th-century socialists. One of Kant's successors at the University of Königsberg, J.F. Herbart, incorporated some of Kant's ideas in his system of pedagogy.

Besides works on philosophy, Kant wrote many treatises on various scientific subjects, many in the field of physical geography. His most important scientific work was General Natural History and Theory of the Heavens (1755), in which he advanced the hypothesis of the formation of the universe from a spinning nebula, a hypothesis that was later developed independently by Pierre de Laplace.

Among Kant's other writings are Prolegomena to Any Future Metaphysics (1783), Metaphysical Foundations of Natural Science (1786), Critique of Judgment (1790), and Religion Within the Limits of Reason Alone (1793).

Metaphysics is the branch of philosophy that is concerned with the nature of ultimate reality. Metaphysics is customarily divided into ontology, which deals with the question of how many fundamentally distinct sorts of entities compose the universe, and metaphysics proper, which is concerned with describing the most general traits of reality. These general traits together define reality and would presumably characterize any universe whatever. Because these traits are not peculiar to this universe, but are common to all possible universes, metaphysics may be conducted at the highest level of abstraction. Ontology, by contrast, because it investigates the ultimate divisions within this universe, is more closely related to the physical world of human experience.

The term metaphysics is believed to have originated in Rome about 70 BC with the Greek Peripatetic philosopher Andronicus of Rhodes (flourished 1st century BC) in his edition of the works of Aristotle. In the arrangement of Aristotle's works by Andronicus, the treatise originally called First Philosophy, or Theology, followed the treatise Physics. Hence, the First Philosophy became known as meta (ta) physica, or “following (the) Physics,” later shortened to Metaphysics. The word took on the connotation, in popular usage, of matters transcending material reality. In the philosophic sense, however, particularly as opposed to the use of the word by occultists, metaphysics applies to all reality and is distinguished from other forms of inquiry by its generality.

The subjects treated in Aristotle's Metaphysics (substance, causality, the nature of being, and the existence of God) fixed the content of metaphysical speculation for centuries. Among the medieval Scholastic philosophers, metaphysics was known as the “transphysical science” on the assumption that, by means of it, the scholar philosophically could make the transition from the physical world to a world beyond sense perception. The 13th-century Scholastic philosopher and theologian St. Thomas Aquinas declared that the cognition of God, through a causal study of finite sensible beings, was the aim of metaphysics. With the rise of scientific study in the 16th century the reconciliation of science and faith in God became an increasingly important problem.

Pre-Kantian metaphysics was characterized by a tendency to construct theories based on a priori knowledge, that is, knowledge derived from reason alone, in contradistinction to a posteriori knowledge, which is gained by reference to the facts of experience. From a priori knowledge were deduced general propositions held to be true of all things. The method of inquiry based on a priori principles is known as rationalistic. This method may be subdivided into monism, which holds that the universe is made up of a single fundamental substance; dualism, the belief in two such substances; and pluralism, which proposes the existence of several fundamental substances.

The monists, agreeing that only one basic substance exists, differ in their descriptions of its principal characteristics. Thus, in idealistic monism the substance is believed to be purely mental; in materialistic monism it is held to be purely physical; and in neutral monism it is considered neither exclusively mental nor solely physical. The idealistic position was held by the Irish philosopher George Berkeley, the materialistic by the English philosopher Thomas Hobbes, and the neutral by the Dutch philosopher Baruch Spinoza. The latter expounded a pantheistic view of reality in which the universe is identical with God and everything contains God's substance.

Berkeley explained his “Immaterialist” theory, part of the school of thought known as idealism, to a more general audience in Three Dialogues between Hylas and Philonous (1713).

The most famous exponent of dualism was the French philosopher René Descartes, who maintained that body and mind are radically different entities and that they are the only fundamental substances in the universe. Dualism, however, does not show how these basic entities are connected.

In the work of Gottfried Wilhelm Leibniz, the universe is held to consist of many distinct substances, or monads. This view is pluralistic in the sense that it proposes the existence of many separate entities, and it is monistic in its assertion that each monad reflects within itself the entire universe.

Other philosophers have held that knowledge of reality is not derived from theoretical principles, but is obtained only from experience. This type of metaphysics is called empiricism. Still another school of philosophy has maintained that, although an ultimate reality does exist, it is altogether inaccessible to human knowledge, which is necessarily subjective because it is confined to states of mind. Knowledge is therefore not a representation of external reality, but merely a reflection of human perceptions. This view is known as skepticism or agnosticism in respect to the soul and the reality of God.

Kant's critical philosophy combines both approaches. It is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the a priori character of the structural principles of this empirical knowledge.

These principles are held to be necessary and universal in their application to experience, for in Kant's view the mind furnishes the archetypal forms and categories to its sensations, and these categories are logically anterior to experience, although manifested only in experience. Their logical anteriority to experience makes these categories or structural principles transcendental; they transcend all experience, both actual and possible. Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only insofar as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate beyond experience to the realm of ultimate reality constitutes the critical feature of his philosophy, giving the key word to the titles of his three leading treatises, Critique of Pure Reason, Critique of Practical Reason, and Critique of Judgment. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge. With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.

Since the formulation of the hypothesis of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant's contention that he had fixed definitely the limits of philosophical speculation. Notable among these later metaphysical theories are radical empiricism, or pragmatism, a native American form of metaphysics expounded by Charles Sanders Peirce, developed by William James, and adapted as instrumentalism by John Dewey; voluntarism, the foremost exponents of which are the German philosopher Arthur Schopenhauer and the American philosopher Josiah Royce; phenomenalism, exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent evolution, or creative evolution, originated by the French philosopher Henri Bergson; and the philosophy of the organism, elaborated by the British mathematician and philosopher Alfred North Whitehead. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief; according to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. In the theory of voluntarism the will is postulated as the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analysed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analysed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined.
The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and creativity.

Mysticism is an immediate, direct, intuitive knowledge of God or of ultimate reality attained through personal religious experience. Wide variations are found in both the form and the intensity of mystical experience. The authenticity of any such experience, however, is not dependent on the form, but solely on the quality of life that follows the experience. The mystical life is characterized by enhanced vitality, productivity, serenity, and joy as the inner and outward aspects harmonize in union with God.

Daoism (Taoism) emphasizes the importance of unity with nature and of yielding to the natural flow of the universe. This contrasts greatly with Confucianism, another Chinese philosophy, which focuses on society and ethics. The fundamental text of Daoism is traditionally attributed to Laozi, a legendary Chinese philosopher who supposedly lived in the 500s BC.

Elaborate philosophical theories have been developed in an attempt to explain the phenomena of mysticism. Thus, in Hindu philosophy, and particularly in the metaphysical system known as the Vedanta, the self or atman in man is identified with the supreme self, or Brahman, of the universe. The apparent separateness and individuality of beings and events are held to be an illusion (Sanskrit maya), or convention of thought and feeling. This illusion can be dispelled through the realization of the essential oneness of atman and Brahman. When the religious initiate has overcome the beginningless ignorance (Sanskrit avidya) upon which the apparent separability of subject and object, of self and nonself, depends, a mystical state of liberation, or moksha, is attained. The Hindu philosophy of Yoga incorporates perhaps the most comprehensive and rigorous discipline ever designed to transcend the sense of personal identity and to clear the way for an experience of union with the divine self. In China, Confucianism is formalistic and antimystical, but Daoism, as expounded by its traditional founder, the Chinese philosopher Laozi (Lao-tzu), has a strong mystical emphasis.

The philosophical ideas of the ancient Greeks were predominantly naturalistic and rationalistic, but an element of mysticism found expression in the Orphic and other sacred mysteries. A late Greek movement, Neoplatonism, was based on the philosophy of Plato and shows the influence of the mystery religions. The Muslim Sufi sect embraces a form of theistic mysticism closely resembling that of the Vedanta. The doctrines of Sufism found their most memorable expression in the symbolic works of the Persian poets Mohammed Shams od-Din, better known as Hafiz, and Jalal al-Din Rumi, and in the writings of the Persian al-Ghazali. Mysticism of the pre-Christian period is evidenced in the writings of the Jewish-Hellenistic philosopher Philo Judaeus.

The Imitation of Christ, the major devotional work of the medieval German monk Thomas à Kempis, was written more than 500 years ago to aid fellow members of religious orders. The book, simple in language and style, has become one of the most influential works in Christian literature. It is a thoughtful yet practical treatise that guides the reader toward a spiritual union with God through the teachings of Jesus Christ and the monastic qualities of poverty, chastity, and obedience. In it, Kempis urges Christians to live each day as if it might be their last.

Saint Paul was the first great Christian mystic. The New Testament writings best known for their deeply mystical emphasis are Paul's letters and the Gospel of John. Christian mysticism as a system, however, arose from Neoplatonism through the writings of Dionysius the Areopagite, or Pseudo-Dionysius. The 9th-century Scholastic philosopher John Scotus Erigena translated the works of Pseudo-Dionysius from Greek into Latin and thus introduced the mystical theology of Eastern Christianity into Western Europe, where it was combined with the mysticism of the early Christian prelate and theologian Saint Augustine.

In the Middle Ages mysticism was often associated with monasticism. Many celebrated mystics are found among the monks of both the Eastern church and the Western church, particularly the 14th-century Hesychasts of Mount Athos in the former, and Saints Bernard of Clairvaux, Francis of Assisi, and John of the Cross in the latter. The French monastery of Saint Victor, near Paris, was an important centre of mystical thought in the 12th century. The renowned mystic and Scholastic philosopher Saint Bonaventure was a disciple of the monks of St. Victor. St. Francis, who derived his mysticism directly from the New Testament, without reference to Neoplatonism, remains a dominant figure in modern mysticism. Among the mystics of Holland were Jan van Ruysbroeck and Gerhard Groote, the latter a religious reformer and founder of the monastic order known as the Brothers of the Common Life. Johannes Eckhart, called Meister Eckhart, was the foremost mystic of Germany.

Written by an anonymous English monk in the late 14th century, ‘The Cloud of Unknowing’ has been deeply influential in Christian mysticism. The author stressed the need for contemplation to understand and know God, with the goal of experiencing the spiritual touch of God, and perhaps even achieving a type of spiritual union with God here on earth. He encouraged the faithful to meditate as a way of prayer, putting everything but God out of their minds, even if, at first, all they are aware of is a cloud of unknowing.

Other important German mystics were Johannes Tauler and Heinrich Suso, followers of Eckhart and members of a group called the Friends of God. One of this group wrote the German Theology that influenced Martin Luther. Prominent later figures include Thomas à Kempis, generally regarded as the author of The Imitation of Christ. English mystics of the 14th and 15th centuries include Margery Kempe, Richard Rolle, Walter Hilton, Julian of Norwich, and the anonymous author of The Cloud of Unknowing, an influential treatise on mystic prayer.

Several distinguished Christian mystics have been women, notably Hildegard of Bingen, Saint Catherine of Siena, and Saint Teresa of Ávila. The 17th-century French mystic Jeanne Marie Bouvier de la Motte Guyon popularized the mystical doctrine of quietism in France.

Sixteenth-century Spanish mystic and religious reformer Saint Teresa of Ávila's books on prayer and contemplation frequently dealt with her intense visions of God. Her autobiography, The Life of Saint Teresa of Ávila, written in the 1560s, is frank and unsophisticated in style, and its vocabulary and theology are accessible to the everyday reader. In it, Teresa described the physical and spiritual sensations that accompanied her religious raptures.

By its pursuit of spiritual freedom, sometimes at the expense of theological formulas and ecclesiastical discipline, mysticism may have contributed to the origin of the Reformation, although it inevitably disagreed with Protestant, as it had with Roman Catholic, religious authorities. The Counter Reformation inspired the Spiritual Exercises of Saint Ignatius of Loyola. The Practice of the Presence of God by Brother Lawrence was a classic French work of a later date. The most notable German Protestant mystics were Jakob Boehme, author of Mysterium Magnum (The Great Mystery), and Kaspar Schwenkfeld. Mysticism finds expression in the theology of many Protestant denominations and is a salient characteristic of such sects as the Anabaptists and the Quakers.

The New England Congregational divine Jonathan Edwards exhibited a strong mystical tendency, and the religious revivals that began in his time and spread throughout the United States during the 19th century derived much of their peculiar power from the assumption of mystical principles, great emphasis being placed on heightened feeling as a direct intuition of the will of God. Mysticism manifested itself in England in the works of the 17th-century Cambridge Platonists, in those of the devotional writer William Law, author of A Serious Call to a Devout and Holy Life, and in the art and poetry of William Blake.

The term religious revival has been widely used among Protestants since the early 18th century to denote periods of marked religious interest. Evangelistic preaching and prayer meetings, frequently accompanied by intense emotionalism, are characteristic of such periods, which are intended to renew the faith of church members and to bring others to profess their faith openly for the first time. By an extension of its meaning, the term is sometimes applied to various important religious movements of the past. Instances are recorded in the Scriptures as occurring both in the history of the Jews and in the early history of the Christian church. In the Middle Ages revivals took place in connection with the Crusades and under the influence of the monastic orders, sometimes with strange adjuncts, as with the Flagellants and the dancing mania. The Reformation of the 16th century was also accompanied by revivals of religion.

It is more accurate, however, to limit the application of the term revival to the history of modern Protestantism, especially in Britain and the United States where such movements have flourished with unusual vigour. The Methodist churches originated from a widespread evangelical movement in the first half of the 18th century. This was later called the Wesleyan movement or Wesleyan revival. The Great Awakening was the common designation for the revival of 1740-42 that took place in New England and other parts of North America under the Congregational clergyman Joseph Bellamy, and three Presbyterian clergymen, Gilbert Tennent, William Tennent, and their father, the educator William Tennent. Both Princeton University and Dartmouth College had their origin in this movement. Toward the end of the 18th century a fresh series of revivals began in America, lasting intermittently from 1797 to 1859. In New England the beginning of this long period was called the evangelical reawakening.

Churches soon came to depend upon revivals for their growth and even for their existence, and, as time went on, the work was also taken up by itinerant preachers, also called circuit riders. The early years of the 19th century were marked by great missionary zeal, extending even to foreign lands. In Tennessee and Kentucky, camp meetings, great open-air assemblies, began about 1800 to play an important part in the evangelical work of the Methodist Church, now the United Methodist Church. One of the most notable products of the camp meeting idea was the late 19th-century Chautauqua Assembly, a highly successful educational endeavour. An outstanding religious revival of the 19th century was the Oxford movement (1833-45) in the Church of England, which resulted in the modern English High Church movement. Distinctly a revival, it was of a type different from those of the two preceding centuries. The great American revival of 1859-61 began in New England, particularly in Connecticut and Massachusetts, and extended to New York and other states. It is believed that in a single year half a million converts were received into the churches. Another remarkable revival, in 1874-75, originated in the labours of the American evangelists Dwight L. Moody and Ira D. Sankey. Organized evangelistic campaigns have sometimes had great success under the leadership of professional evangelists, among them Billy Sunday, Aimee Semple McPherson, and Billy Graham. The Salvation Army carries on its work largely by revivalistic methods.

American religious writer and poet Thomas Merton joined a monastery in 1941 and was later ordained as a Roman Catholic priest. He is known for his autobiography, The Seven Storey Mountain, which was published in 1948.

The 20th century has experienced a revival of interest in both Christian and non-Christian mysticism. Early commentators of note were Austrian Roman Catholic Baron Friedrich von Hügel, British poet and writer Evelyn Underhill, American Quaker Rufus Jones, the Anglican prelate William Inge, and German theologian Rudolf Otto. A prominent nonclerical commentator was American psychologist and philosopher William James in The Varieties of Religious Experience (1902).

At the turn of the century, American psychologist and philosopher William James gave a series of lectures on religion at Scotland's University of Edinburgh. In the twenty lectures he delivered between 1901 and 1902, published together as The Varieties of Religious Experience (1902), James discussed such topics as the existence of God, religious conversions, and immortality. In his lectures on mysticism, James defined the characteristics of a mystical experience - a state of consciousness in which God is directly experienced. He also quoted accounts of mystical experiences as given by important religious figures from many different religious traditions.

In non-Christian traditions, the leading commentator on Zen Buddhism was Japanese scholar Daisetz Suzuki; on Hinduism, Indian philosopher Sarvepalli Radhakrishnan; and on Islam, British scholar R. A. Nicholson. The last half of the 20th century saw increased interest in Eastern mysticism. The mystical strain in Judaism, which received particular emphasis in the writings of the Kabbalists of the Middle Ages and in the Hasidic movement of the 18th century, was again pointed up by the modern Austrian philosopher and scholar Martin Buber. Mid-20th-century mystics of note included French social philosopher Simone Weil, French philosopher Pierre Teilhard de Chardin, and American Trappist monk Thomas Merton.

Comte chose the word positivism on the ground that it suggested the “reality” and “constructive tendency” that he claimed for the theoretical aspect of the doctrine. He was, in the main, interested in a reorganization of social life for the good of humanity through scientific knowledge, and thus control of natural forces. The two primary components of positivism, the philosophy and the polity (or a program of individual and social conduct), were later welded by Comte into a whole under the conception of a religion, in which humanity was the object of worship. Many of Comte's disciples refused, however, to accept this religious development of his philosophy, because it seemed to contradict the original positivist philosophy. Many of Comte's doctrines were later adapted and developed by the British social philosophers John Stuart Mill and Herbert Spencer and by the Austrian philosopher and physicist Ernst Mach.

In the early 20th century British mathematician and philosopher Bertrand Russell, along with British mathematician and philosopher Alfred North Whitehead, attempted to prove that mathematics and numbers can be understood as groups of concepts, or set classifications. Russell and Whitehead tried to show that mathematics is closely related to logic and, in turn, that ordinary sentences can be logically analysed using mathematical symbols for words and phrases. This idea resulted in a new symbolic language, used by Russell in a field he termed philosophical logic, in which philosophical propositions were reformulated and examined according to his symbolic logic.

During the early 20th century a group of philosophers who were concerned with developments in modern science rejected the traditional positivist ideas that held personal experience to be the basis of true knowledge and emphasized the importance of scientific verification. This group became known as the logical positivists, and it included the Austrian Ludwig Wittgenstein and the British philosophers Bertrand Russell and G.E. Moore. It was Wittgenstein's Tractatus Logico-philosophicus (1921; German-English parallel text, 1922) that proved to be of decisive influence in the rejection of metaphysical doctrines for their meaninglessness and the acceptance of empiricism as a matter of logical necessity.

Philosophy, for Moore, was basically a two-fold activity. The first part involves analysis, that is, the attempt to clarify puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Moore was perplexed, for example, by the claim of some philosophers that time is unreal. In analysing this assertion, he maintained that the proposition “time is unreal” was logically equivalent to “there are no temporal facts.” (“I read the article yesterday” is an example of a temporal fact.) Once the meaning of an assertion containing the problematic concept is clarified, the second task is to determine whether justifying reasons exist for believing the assertion. Moore's diligent attention to conceptual analysis for achieving clarity established him as one of the founders of the contemporary analytic and linguistic emphasis in philosophy.

Moore's most famous work, Principia Ethica (1903), contains his claim that the concept of good refers to a simple, unanalyzable, indefinable quality of things and situations. It is a nonnatural quality, for it is apprehended not by sense experience but by a kind of moral intuition. The quality goodness is evident, argued Moore, in such experiences as friendship and aesthetic enjoyment. The moral concepts of right and duty are then analysed in terms of producing whatever possesses goodness.

Several of Moore's essays, including “The Refutation of Idealism” (1903), contributed to developments in modern philosophical realism. An empiricist in his approach to knowledge, he did not identify experience with sense experience, and he avoided the skepticism that often accompanies empiricism. He came to the defence of the common-sense point of view that experience results in knowledge of an external world independent of the mind.

Moore also wrote Ethics (1912), Philosophical Studies (1922), and Philosophical Papers (1959) and edited (1921-47) Mind, a leading British philosophical journal.

Language, Wittgenstein argued in the Tractatus, is composed of complex propositions that can be analysed into less complex propositions until one arrives at simple, or elementary, propositions. Correspondingly, the world is composed of complex facts that can be analysed into less complex facts until one arrives at simple, or atomic, facts. The world is the totality of these facts. According to Wittgenstein’s picture theory of meaning, it is the nature of elementary propositions logically to picture atomic facts, or ‘states of affairs’. He claimed that the nature of language required elementary propositions, and his theory of meaning required that there be atomic facts pictured by the elementary propositions. On this analysis, only propositions that picture facts - the propositions of science - are cognitively meaningful. Metaphysical and ethical statements are not meaningful assertions. The logical positivists associated with the Vienna Circle were greatly influenced by this conclusion.

Wittgenstein came to believe, however, that the narrow view of language reflected in the Tractatus was mistaken. In the Philosophical Investigations he argued that if one looks to see how language is used, the variety of linguistic usage becomes clear. Words are like tools, and just as tools serve different functions, so linguistic expressions serve many different functions. Although some propositions are used to picture facts, others are used to command, question, pray, thank, curse, and so on. This recognition of linguistic flexibility and variety led to Wittgenstein’s concept of a language game and to the conclusion that people play different language games. The scientist, for example, is involved in a different language game than the theologian. Moreover, the meaning of a proposition must be understood in terms of its context, that is, in terms of the rules of the game of which that proposition is a part. The key to the resolution of philosophical puzzles is the therapeutic process of examining and describing language in use.

The positivists today, who have rejected this so-called Vienna school of philosophy, prefer to call themselves logical empiricists to dissociate themselves from the emphasis of the earlier thinkers on scientific verification. They maintain that the verification principle itself is philosophically unverifiable.

Positivism is the system of philosophy based on experience and empirical knowledge of natural phenomena, in which metaphysics and theology are regarded as inadequate and imperfect systems of knowledge.

The doctrine was first called positivism by the 19th-century French mathematician and philosopher Auguste Comte, but some positivist ideas may be traced to the British philosopher David Hume, the French social philosopher the Comte de Saint-Simon, and the German philosopher Immanuel Kant.

Several major viewpoints were combined in the work of Kant, who developed a distinctive critical philosophy called transcendentalism. His philosophy is agnostic in that it denies the possibility of a strict knowledge of ultimate reality; it is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the theoretical character of the structural principles of this empirical knowledge.

Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only insofar as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate by knowledge beyond experience to the realm of ultimate reality constitutes the critical feature of his philosophy. Kant sought also to reconcile science and religion in a world of two levels, comprising noumena, objects conceived by reason although not perceived by the senses, and phenomena, things as they appear to the senses and are accessible to material study. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge. With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.

The philosopher John Locke (1632-1704) founded the school of empiricism. Locke set out his theory of empiricism, a philosophical doctrine holding that all knowledge is based on experience, in An Essay Concerning Human Understanding (1690). He believed the human mind to be a blank slate at birth that gathered all its information from its surroundings - starting with simple ideas and combining these simple ideas into more complex ones. His theory greatly influenced education in Great Britain and the United States. Locke believed that education should begin in early childhood and should proceed gradually as the child learns increasingly complex ideas.

Locke was born in the village of Wrington, Somerset, on August 29, 1632. He was educated at the University of Oxford and lectured on Greek, rhetoric, and moral philosophy at Oxford from 1661 to 1664. In 1667 Locke began his association with the English statesman Anthony Ashley Cooper, 1st earl of Shaftesbury, to whom Locke was friend, adviser, and physician. Shaftesbury secured for Locke a series of minor government appointments. In 1669, in one of his official capacities, Locke wrote a constitution for the proprietors of the Carolina Colony in North America, but it was never put into effect. In 1675, after the liberal Shaftesbury had fallen from favour, Locke went to France. In 1679 he returned to England, but in view of his opposition to the Roman Catholicism favoured by the English monarchy at that time, he soon found it expedient to return to the Continent. From 1683 to 1688 he lived in Holland, and following the so-called Glorious Revolution of 1688 and the restoration of Protestantism to favour, Locke returned once more to England. The new king, William III, appointed Locke to the Board of Trade in 1696, a position from which he resigned because of ill health in 1700. He died at Oates, Essex, on October 28, 1704.

The ideas of 17th-century English philosopher and political theorist John Locke greatly influenced modern philosophy and political thought. Locke, who is best known for establishing the philosophical doctrine of empiricism, was criticized for his “atheistic” proposition that morality is not innate within human beings. However, Locke was a religious man, and the influence of his faith was overlooked by his contemporaries and subsequent readers. Author John Dunn explores the influence of Locke’s Anglican beliefs on works such as An Essay Concerning Human Understanding (1690).

Locke's empiricism emphasizes the importance of the experience of the senses in the pursuit of knowledge rather than intuitive speculation or deduction. The empiricist doctrine was first expounded by the English philosopher and statesman Francis Bacon early in the 17th century, but Locke gave it systematic expression in his Essay Concerning Human Understanding (1690). He regarded the mind of a person at birth as a tabula rasa, a blank slate upon which experience imprinted knowledge, and did not believe in intuition or theories of innate conceptions. Locke also held that all persons are born good, independent, and equal.

English philosopher John Locke anonymously published his Two Treatises of Government (1690) the same year as his famous Essay Concerning Human Understanding. In the Second Treatise, Locke described his concept of a ‘civil government’. Locke excluded absolute monarchy from his definition of civil society, because he believed that the people must consent to be ruled. This argument later influenced the authors of the Declaration of Independence and the Constitution of the United States.

Locke's views, in his Two Treatises of Government (1690), attacked the theory of divine right of kings and the nature of the state as conceived by the English philosopher and political theorist Thomas Hobbes. In brief, Locke argued that sovereignty did not reside in the state but with the people, and that the state is supreme, but only if it is bound by civil and what he called ‘natural’ law. Many of Locke's political ideas, such as that relating to natural rights, property rights, the duty of the government to protect these rights, and the rule of the majority, were later embodied in the U.S. Constitution.

Locke further held that revolution was not only a right but often an obligation, and he advocated a system of checks and balances in government. He also believed in religious freedom and in the separation of church and state.

Locke's influence in modern philosophy has been profound and, with his application of empirical analysis to ethics, politics, and religion, he remains one of the most important and controversial philosophers of all time. Among his other works are Some Thoughts Concerning Education (1693) and The Reasonableness of Christianity (1695).

Pragmatism is a philosophical movement that has had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it reflects an American faith in practicality and an equally American distrust of abstract theories and ideologies.

American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C.S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing in it.

The Association for International Conciliation first published William James’s pacifist statement, “The Moral Equivalent of War,” in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism, a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution. Spelling and grammar represent the standards of the time.

Pragmatists regarded all theories and institutions as tentative hypotheses and solutions, and for this reason they believed that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists, moreover, were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept ‘brittle’, for example, is given by the observed consequences or properties that objects called ‘brittle’ exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of earlier positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called ‘the will to believe’ and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and societies are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. For pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has been renewed in the classic pragmatists - Peirce, James, and Dewey - as an alternative to Rorty’s interpretation of the tradition.

In an ever-changing world, pragmatism has many benefits. It defends social experimentation as a means of improving society, accepts pluralism, and rejects dead dogmas. But a philosophy that offers no final answers or absolutes and that appears vague as a result of trying to harmonize opposites may also be unsatisfactory to some.

It is fitting to turn to Kant's most distinguished followers, notably Johann Gottlieb Fichte, Friedrich Schelling, Georg Wilhelm Friedrich Hegel, and Friedrich Schleiermacher, who negated Kant's criticism in their elaborations of his transcendental metaphysics by denying the Kantian conception of the thing-in-itself. They thus developed an absolute idealism opposing Kant's critical transcendentalism.

Since the formulation of the hypothesis of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant's contention that he had fixed definitively the limits of philosophical speculation. Among these are phenomenalism, as exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent evolution, or creative evolution, originated by the French philosopher Henri Bergson; the philosophy of the organism, elaborated by Alfred North Whitehead; pragmatism; instrumentalism; and voluntarism. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief. According to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. Voluntarism postulates that will is the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analysed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analysed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined. The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and intuitive creativity.

In response to the scientific, political, and industrial revolutions of his day, Comte was fundamentally concerned with an intellectual, moral, and political reorganization of the social order. Adoption of the scientific attitude was the key, he thought, to such a reconstruction.

Comte also argued that an empirical study of historical processes, particularly of the progress of the various interrelated sciences, reveals a law of three stages that governs human development. He analysed these stages in his major work, the six-volume Course of Positive Philosophy (1830-42; translated 1853). Because of the nature of the human mind, each science or branch of knowledge passes through “three different theoretical states: the theological or fictitious state; the metaphysical or abstract state; and, lastly, the scientific or positive state.” At the theological stage, events are immaturely explained by appealing to the will of the gods or of God. At the metaphysical stage phenomena are explained by appealing to abstract philosophical categories. The final evolutionary stage, the scientific, involves relinquishing any quest for absolute explanations of causes. Attention is focussed altogether on how phenomena are related, with the aim of arriving at generalizations subject to observational verification. Comte's work is considered the classical expression of the positivist attitude - namely, that the empirical sciences are the only adequate source of knowledge.

Although he rejected belief in a transcendent being, Comte recognized the value of religion in contributing to social stability. In his four-volume System of Positive Polity (1851-54; translated 1875-77), he proposed his religion of humanity, aimed at the promotion of socially beneficial behaviour. Comte's chief significance, however, derives from his role in the historical development of positivism.

Wittgenstein’s philosophical life may be divided into two distinct phases: an early period, represented by the Tractatus, and a later period, represented by the Philosophical Investigations. Throughout most of his life, however, Wittgenstein consistently viewed philosophy as linguistic or conceptual analysis. In the Tractatus he argued that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations, however, he maintained that “philosophy is a battle against the bewitchment of our intelligence by means of language.”

Edmund Husserl inherited from Brentano the view that the central problem in understanding thought is that of explaining the way in which an intentional direction, or content, can belong to the mental phenomenon that exhibits it. What Husserl discovered when he contemplated the content of his mind were such acts as remembering, desiring, and perceiving, as well as the abstract content of these acts, which Husserl called meanings. These meanings, he claimed, enabled an act to be directed toward an object under a certain aspect. Such directedness, called intentionality, he held to be the essence of consciousness. Transcendental phenomenology, according to Husserl, was the study of the basic components of the meanings that make intentionality possible. Later, in the Méditations Cartésiennes (1931; Cartesian Meditations, 1960), he introduced genetic phenomenology, which he defined as the study of how these meanings are built up in the course of experience.

Edmund Husserl is considered the founder of phenomenology. This 20th-century philosophical movement is dedicated to the description of phenomena as they present themselves through perception to the conscious mind.

Husserl introduced the term in his book Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie (1913; Ideas: A General Introduction to Pure Phenomenology, 1931). Early followers of Husserl, such as the German philosopher Max Scheler, who was influenced by his previous book, Logische Untersuchungen (two volumes, 1900 and 1901; Logical Investigations, 1970), claimed that the task of phenomenology is to study essences, such as the essence of emotions. Although Husserl himself never gave up his early interest in essences, he later held that only the essences of certain special conscious structures are the proper object of phenomenology. As formulated by Husserl after 1910, phenomenology is the study of the structures of consciousness that enable consciousness to refer to objects outside itself. This study requires reflection on the content of the mind to the exclusion of everything else. Husserl called this type of reflection the phenomenological reduction. Because the mind can be directed toward nonexistent as well as real objects, Husserl recognized that phenomenological reflection does not presuppose that anything exists, but rather amounts to a ‘bracketing of existence’ - that is, setting aside the question of the real existence of the contemplated object.

Husserl argued against his early position, which he called psychologism, in Logische Untersuchungen (1900-1901; Logical Investigations, 1970). In this book, regarded as a radical departure in philosophy, he contended that the philosopher's task is to contemplate the essences of things, and that the essence of an object can be arrived at by systematically varying that object in the imagination. Husserl noted that consciousness is always directed toward something. He called this directedness intentionality and argued that consciousness contains ideal, unchanging structures called meanings, which determine what object the mind is directed toward at any given time.

During his tenure (1901-1916) at the University of Göttingen, Husserl attracted many students, who began to form a distinct phenomenological school, and he wrote his most influential work, Ideas: A General Introduction to Pure Phenomenology (1913; translated 1931). In this book Husserl introduced the term phenomenological reduction for his method of reflection on the meanings the mind employs when it contemplates an object. Because this method concentrates on meanings that are in the mind whether or not the object present to consciousness actually exists, he was able to give detailed analyses of the mental structures involved in perceiving particular types of objects, describing in detail, for instance, his perception of the apple tree in his garden. Thus, although phenomenology does not assume the existence of anything, it is nonetheless a descriptive discipline; according to Husserl, phenomenology is devoted, not to inventing theories, but rather to describing the “things themselves.”

After 1916 Husserl taught at the University of Freiburg. Phenomenology had been criticized as an essentially solipsistic method, confining the philosopher to the contemplation of private meanings, so in Cartesian Meditations (1931; translated 1960) Husserl attempted to show how the individual consciousness can be directed toward other minds, society, and history. Husserl died in Freiburg on April 26, 1938.

Husserl's phenomenology had a great influence on a younger colleague at Freiburg, Martin Heidegger, who developed existential phenomenology, and on Jean-Paul Sartre and French existentialism. Phenomenology remains one of the most vigorous tendencies in contemporary philosophy, and its impact has also been felt in theology, linguistics, psychology, and the social sciences.

Phenomenology attempts to describe reality as pure experience by suspending all beliefs and assumptions about the world. Though first defined as descriptive psychology, phenomenology developed into a philosophical rather than a psychological investigation into the nature of human beings. Influenced by his colleague Edmund Husserl, the German philosopher Martin Heidegger published Sein und Zeit (Being and Time) in 1927, an effort to describe the phenomenon of being by considering the full scope of existence.

All phenomenologists follow Husserl in attempting to use pure description. Thus, they all subscribe to Husserl's slogan ‘To the things themselves’. They differ among themselves, however, over whether the phenomenological reduction can be carried out, and over what is manifest to the philosopher giving a pure description of experience. Martin Heidegger, Husserl's colleague and most brilliant critic, claimed that phenomenology should make manifest what is hidden in ordinary, everyday experience. He therefore endeavoured in Being and Time to describe what he called the structure of everydayness, or being-in-the-world, which he found to be an interconnected system of equipment, social roles, and purposes.

Martin Heidegger strongly influenced the development of the 20th-century philosophical school of existential phenomenology, which examines the relationship between phenomena and individual consciousness. His inquiries into the meaning of ‘authentic’ or ‘inauthentic’ existence greatly influenced a broad range of thinkers, including French existentialist Jean-Paul Sartre. Author Michael Inwood explores Heidegger’s key concept of Dasein, or “Being,” which was first expounded in his major work Being and Time.

Because, for Heidegger, one is what one does in the world, a phenomenological reduction to one's own private experience is impossible. Because human action consists of a direct grasp of objects, positing a special mental entity called a meaning to account for intentionality is not necessary. For Heidegger, being thrown into the world among things in the act of realizing projects is a more fundamental kind of intentionality than that revealed in merely staring at or thinking about objects, and it is this more fundamental intentionality that makes possible the directedness analysed by Husserl.

In the mid-1900s, French existentialist Jean-Paul Sartre attempted to adapt Heidegger's phenomenology to the philosophy of consciousness, in effect returning to the approach of Husserl. Sartre agreed with Husserl that consciousness is always directed at objects but criticized his claim that such directedness is possible only by means of special mental entities called meanings. The French philosopher Maurice Merleau-Ponty rejected Sartre's view that phenomenological description reveals human beings to be pure, isolated, and free consciousness. He stressed the role of the active, involved body in all human knowledge, thus generalizing Heidegger's insights to include the analysis of perception. Like Heidegger and Sartre, Merleau-Ponty was an existential phenomenologist, in that he denied the possibility of bracketing existence.

Phenomenology has had a pervasive influence on 20th-century thought. Phenomenological versions of theology, sociology, psychology, psychiatry, and literary criticism have been developed, and phenomenology remains one of the most important schools of contemporary philosophy.

Besides Husserl, Heidegger was especially influenced by the pre-Socratics, by Danish philosopher Søren Kierkegaard, and by German philosopher Friedrich Nietzsche. In developing his theories, Heidegger rejected traditional philosophic terminology in favour of an individual interpretation of the works of past thinkers. He applied original meanings and etymologies to individual words and expressions, and coined hundreds of new, complex words. Heidegger was concerned with what he considered the essential philosophical question: What is it, to be? This led to the question of what kind of ‘Being’ human beings have. They are, he said, thrown into a world that they have not made but that consists of potentially useful things, including cultural and natural objects. Because these objects come to humanity from the past and are used in the present for the sake of future goals, Heidegger posited a fundamental relation between the mode of being of objects, of humanity, and of the structure of time.

The individual is, however, always in danger of being submerged in the world of objects, everyday routine, and the conventional, shallow behaviour of the crowd. The feeling of dread (Angst) brings the individual to a confrontation with death and the ultimate meaninglessness of life, but only in this confrontation can an authentic sense of Being and of freedom be attained.

After 1930, Heidegger turned, in such works as Einführung in die Metaphysik (An Introduction to Metaphysics, 1953), to the interpretation of particular Western conceptions of Being. He felt that, in contrast to the reverent ancient Greek conception of being, modern technological society has fostered a purely manipulative attitude that has deprived Being and human life of meaning - a condition he called nihilism. Humanity has forgotten its true vocation and must recover the deeper understanding of Being (achieved by the early Greeks and lost by subsequent philosophers) to be receptive to new understandings of Being.

Heidegger's original treatment of such themes as human finitude, death, nothingness, and authenticity led many observers to associate him with existentialism, and his work had a crucial influence on French existentialist Jean-Paul Sartre. Heidegger, however, eventually repudiated existentialist interpretations of his work. His thought directly influenced the work of the French philosophers Michel Foucault and Jacques Derrida and of the German sociologist Jürgen Habermas. Since the 1960s his influence has spread beyond continental Europe and has had an increasing impact on philosophy in English-speaking countries worldwide.

Like Heidegger and Sartre, Maurice Merleau-Ponty (1908-1961) was a French existentialist philosopher; his phenomenological studies of the role of the body in perception and society opened a new field of philosophical investigation. He taught at the University of Lyon, at the Sorbonne, and, after 1952, at the Collège de France. His first important work was The Structure of Behaviour (1942; translated 1963), an interpretative analysis of behaviourism. His major work, Phenomenology of Perception (1945; translated 1962), is a detailed study of perception, influenced by the German philosopher Edmund Husserl's phenomenology and by Gestalt psychology. In it, he argues that science presupposes an original and unique perceptual relation to the world that cannot be explained or even described in scientific terms. This book can be viewed as a critique of cognitivism - the view that the working of the human mind can be understood in terms of rules or programs. It is also a telling critique of the existentialism of his contemporary Jean-Paul Sartre, showing how human freedom is never total, as Sartre claimed, but is limited by our embodiment.

Born in Vienna on April 26, 1889, Wittgenstein was raised in a wealthy and cultured family. After attending schools in Linz and Berlin, he went to England to study engineering at the University of Manchester. His interest in pure mathematics led him to Trinity College, University of Cambridge, to study with Bertrand Russell. There he turned his attention to philosophy. By 1918 Wittgenstein had completed his Tractatus Logico-Philosophicus (1921; translated 1922), a work he then believed provided the “solution” to philosophical problems. Subsequently, he turned from philosophy and for several years taught elementary school in an Austrian village. In 1929 he returned to Cambridge to resume his work in philosophy and was appointed to the faculty of Trinity College. Soon he began to reject certain conclusions of the Tractatus and to develop the position reflected in his Philosophical Investigations (published posthumously and translated in 1953). Wittgenstein retired in 1947; he died in Cambridge on April 29, 1951. A sensitive, intense man who often sought solitude and was frequently depressed, Wittgenstein abhorred pretense and was noted for his simple style of life and dress. The philosopher was forceful and confident in personality, however, and he exerted considerable influence on those with whom he came in contact.

Wittgenstein’s philosophical life may be divided into two distinct phases: an early period, represented by the Tractatus, and a later period, represented by the Philosophical Investigations. Throughout most of his life, however, Wittgenstein consistently viewed philosophy as linguistic or conceptual analysis. In the Tractatus he argued that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations, however, he maintained that “philosophy is a battle against the bewitchment of our intelligence by means of language.”

Language, Wittgenstein argued in the Tractatus, is composed of complex propositions that can be analysed into less complex propositions until one arrives at simple or elementary propositions. Correspondingly, the world is composed of complex facts that can be analysed into less complex facts until one arrives at simple, or atomic, facts. The world is the totality of these facts. According to Wittgenstein’s picture theory of meaning, it is the nature of elementary propositions logically to picture atomic facts, or “states of affairs.” He claimed that the nature of language required elementary propositions, and his theory of meaning required that there be atomic facts pictured by the elementary propositions. On this analysis, only propositions that picture facts - the propositions of science - are considered cognitively meaningful. Metaphysical and ethical statements are not meaningful assertions. The logical positivists associated with the Vienna Circle were greatly influenced by this conclusion.

Wittgenstein came to believe, however, that the narrow view of language reflected in the Tractatus was mistaken. In the Philosophical Investigations he argued that if one looks to see how language is used, the variety of linguistic usage becomes clear. Although some propositions are used to picture facts, others are used to command, question, pray, thank, curse, and so on. This recognition of linguistic flexibility and variety led to Wittgenstein’s concept of a language game and to the conclusion that people play different language games. The scientist, for example, is involved in a different language game than the theologian. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the game of which that proposition is a part. The key to the resolution of philosophical puzzles is the therapeutic process of examining and describing language in use.

‘Consciousness is the last and latest development of the organic and hence also what is most unfinished and unstrong’, Nietzsche wrote in 1882, the year of publication of The Gay Science. He several times spoke out against antisemitism. Although it was easy for him to overlook Wagner’s antisemitism during the period in which he idealized him, when Wagner gained a wider public and his antisemitism became more intense, Nietzsche forcefully condemned him for it. In The Gay Science Nietzsche wrote that ‘Wagner is Schopenhauerian in his hatred of the Jews, to whom he is not able to do justice even when it comes to their greatest deed; after all, the Jews are the inventors of Christianity’. (It is important to recognize that, while Nietzsche frequently attacked the forces that led to the development of Christianity and its destructive impact, there is no simple condemnation here. Nietzsche is genuinely castigating Wagner [and Schopenhauer] and recognizing this greatest deed of the Jews, however deeply neurotic the creature that accomplished it may have been - a creature who nevertheless brought into the world something new and full of promise.) We may also consider, in the words of Bernard Williams, ‘Nietzsche’s ever-present sense that his own consciousness would not be possible without the developments that he disliked’.

The problem of consciousness has long occupied scientists, who have considered its nature without producing a fully satisfactory definition. In the early 20th century the American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In an article from a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.

States of Consciousness. No simple, agreed-upon definition of consciousness exists. Attempted definitions tend to be tautological (for example, consciousness defined as awareness) or merely descriptive (for example, consciousness described as sensations, thoughts, or feelings). Despite this problem of definition, the subject of consciousness has had a remarkable history. At one time the primary subject matter of psychology, consciousness as an area of study suffered an almost total demise, later reemerging to become a topic of current interest.

French thinker René Descartes applied rigorous scientific methods of deduction to his exploration of philosophical questions. Descartes is probably best known for his pioneering work in philosophical skepticism. Author Tom Sorell examines the concepts behind Descartes’s work Meditationes de Prima Philosophia (1641; Meditations on First Philosophy), focussing on its unconventional use of logic and the reactions it aroused.

Most of the philosophical discussions of consciousness arose from the mind-body issues posed by the French philosopher and mathematician René Descartes in the 17th century. Descartes asked: Is the mind, or consciousness, independent of matter? Is consciousness extended (physical) or unextended (nonphysical)? Is consciousness determinative, or is it determined? English philosophers such as John Locke equated consciousness with physical sensations and the information they provide, whereas European philosophers such as Gottfried Wilhelm Leibniz and Immanuel Kant gave a more central and active role to consciousness.

The philosopher who most directly influenced subsequent exploration of the subject of consciousness was the 19th-century German educator Johann Friedrich Herbart, who wrote that ideas had quality and intensity and that they may inhibit or facilitate one another. Thus, ideas may pass from “states of reality” (consciousness) to “states of tendency” (unconsciousness), with the dividing line between the two states being described as the threshold of consciousness. This formulation of Herbart clearly presages the development, by the German psychologist and physiologist Gustav Theodor Fechner, of the psychophysical measurement of sensation thresholds, and the later development by Sigmund Freud of the concept of the unconscious.

The experimental analysis of consciousness dates from 1879, when the German psychologist Wilhelm Max Wundt started his research laboratory. For Wundt, the task of psychology was the study of the structure of consciousness, which extended well beyond sensations and included feelings, images, memory, attention, duration, and movement. Because early interest focussed on the content and dynamics of consciousness, it is not surprising that the central methodology of such studies was introspection; that is, subjects reported on the mental contents of their own consciousness. This introspective approach was developed most fully by the American psychologist Edward Bradford Titchener at Cornell University. Setting his task as that of describing the structure of the mind, Titchener attempted to detail, from introspective self-reports, the dimensions of the elements of consciousness. For example, taste was “dimensionalized” into four basic categories: sweet, sour, salt, and bitter. This approach was known as structuralism.

By the 1920s, however, a remarkable revolution had occurred in psychology that was essentially to remove considerations of consciousness from psychological research for some 50 years: Behaviourism captured the field of psychology. The main initiator of this movement was the American psychologist John Broadus Watson. In a 1913 article, Watson stated, “I believe that we can write a psychology and never use the terms consciousness, mental states, mind . . . imagery and the like.” Psychologists then turned almost exclusively to behaviour, as described in terms of stimulus and response, and consciousness was totally bypassed as a subject. A survey of eight leading introductory psychology texts published between 1930 and the 1950s found no mention of the topic of consciousness in five texts, and in two it was treated as a historical curiosity.

Beginning in the late 1950s, however, interest in the subject of consciousness returned, specifically in those subjects and techniques relating to altered states of consciousness: sleep and dreams, meditation, biofeedback, hypnosis, and drug-induced states. Much of the surge in sleep and dream research was directly fuelled by a discovery relevant to the nature of consciousness. A physiological indicator of the dream state was found: At roughly 90-minute intervals, the eyes of sleepers were observed to move rapidly, and at the same time the sleepers' brain waves would show a pattern resembling the waking state. When people were awakened during these periods of rapid eye movement, they almost always reported dreams, whereas if awakened at other times they did not. This and other research clearly indicated that sleep, once considered a passive state, was instead an active state of consciousness (see Dreaming; Sleep).

During the 1960s, an increased search for “higher levels” of consciousness through meditation resulted in a growing interest in the practices of Zen Buddhism and Yoga from Eastern cultures. A full flowering of this movement in the United States was seen in the development of training programs, such as Transcendental Meditation, that were self-directed procedures of physical relaxation and focussed attention. Biofeedback techniques also were developed to bring body systems involving factors such as blood pressure or temperature under voluntary control by providing feedback from the body, so that subjects could learn to control their responses. For example, researchers found that persons could control their brain-wave patterns to some extent, particularly the so-called alpha rhythms generally associated with a relaxed, meditative state. This finding was especially relevant to those interested in consciousness and meditation, and a number of “alpha training” programs emerged.

Another subject that led to increased interest in altered states of consciousness was hypnosis, which involves a transfer of conscious control from the subject to another person. Hypnotism has had a long and intricate history in medicine and folklore and has been intensively studied by psychologists. Much has become known about the hypnotic state, relative to individual suggestibility and personality traits; the subject has now been largely demythologized, and the limitations of the hypnotic state are fairly well known. Despite the increasing use of hypnosis, however, much remains to be learned about this unusual state of focussed attention.

Finally, many people in the 1960s experimented with the psychoactive drugs known as hallucinogens, which produce disorders of consciousness. The most prominent of these drugs are lysergic acid diethylamide, or LSD; mescaline (see Peyote); and psilocybin; the latter two have long been associated with religious ceremonies in various cultures. LSD, because of its radical thought-modifying properties, was initially explored for its so-called mind-expanding potential and for its psychotomimetic effects (imitating psychoses). Little positive use, however, has been found for these drugs, and their use is highly restricted.

The concept of a direct, simple linkage between environment and behaviour has become unsatisfactory in recent decades, and the interest in altered states of consciousness may be taken as a visible sign of renewed interest in the topic of consciousness. That persons are active and intervening participants in their behaviour has become increasingly clear. Environments, rewards, and punishments are not simply defined by their physical character. Memories are organized, not simply stored (see Memory). An entirely new area called cognitive psychology has emerged that centres on these concerns. In the study of children, increased attention is being paid to how they understand, or perceive, the world at different ages. In the field of animal behaviour, researchers increasingly emphasize the inherent characteristics resulting from the way a species has been shaped to respond adaptively to the environment. Humanistic psychologists, with a concern for self-actualization and growth, have emerged after a long period of silence. Throughout the development of clinical and industrial psychology, the conscious states of persons in terms of their current feelings and thoughts were of obvious importance. The role of consciousness, however, was often de-emphasised in favour of unconscious needs and motivations. Trends can now be seen toward a new emphasis on the nature of states of consciousness.

The overwhelming question in neurobiology today is the relation between the mind and the brain. Everyone agrees that what we know as mind is closely related to certain aspects of the behaviour of the brain, not to the heart, as Aristotle thought. Its most mysterious aspect is consciousness or awareness, which can take many forms, from the experience of pain to self-consciousness. In the past the mind (or soul) was often regarded, as it was by Descartes, as something immaterial, separate from the brain but interacting with it in some way. A few neuroscientists, such as Sir John Eccles, still assert that the soul is distinct from the body. Most neuroscientists, however, now believe that all aspects of mind, including its most puzzling attribute, consciousness or awareness, are likely to be explainable in a more materialistic way as the behaviour of large sets of interacting neurons. As William James, the father of American psychology, said a century ago, consciousness is not a thing but a process.

Exactly what the process is has yet to be discovered. For many years after James penned The Principles of Psychology, consciousness was a taboo concept in American psychology because of the dominance of the behaviourist movement. With the advent of cognitive science in the mid-1950s, it became possible again for psychologists to consider mental processes as opposed to merely observing behaviour. In spite of these changes, until recently most cognitive scientists ignored consciousness, as did most neuroscientists. The problem was felt to be either purely ‘philosophical’ or too elusive to study experimentally. Getting a grant just to study consciousness would not have been easy for a neuroscientist.

Such timidity is ridiculous; the sensible response is to think about how best to attack the problem scientifically: how can mental events be explained as being caused by the firing of large sets of neurons? Although some believe such an approach is hopeless, it is not productive to worry too much over the aspects of the problem that cannot be solved scientifically, or, more precisely, that cannot be solved solely by using existing scientific ideas. Radically new concepts may indeed be needed; recall the modifications of scientific thinking forced on us by quantum mechanics. The only sensible approach is to press the experimental attack until we are confronted with dilemmas that call for new ways of thinking.

There are many possible approaches to the problem of consciousness. Some psychologists feel that any satisfactory theory should try to explain as many aspects of consciousness as possible, including emotion, imagination, dreams, mystical experiences and so on. Although such an all-embracing theory may eventually be necessary, it is wiser to begin with the particular aspect of consciousness that is likely to yield most easily. What that aspect is may be a matter of personal judgment; the mammalian visual system is a natural choice, because humans are very visual animals and because so much experimental and theoretical work has already been done on it.

Grasping exactly what we need to explain is not easy, and it will take many careful experiments before visual consciousness can be described scientifically. No attempt will be made here to define consciousness itself, because of the dangers of premature definition. (If this seems like a cop-out, try defining the word ‘gene’; you will not find it easy.) Yet the experimental evidence that already exists provides enough of a glimpse of the nature of visual consciousness to guide research.

Visual theorists agree that the problem of visual consciousness is ill-posed. The mathematical term ‘ill-posed’ means that additional constraints are needed to solve the problem. Although the main function of the visual system is to perceive objects and events in the world around us, the information available to our eyes is not sufficient by itself to provide the brain with its unique interpretation of the visual world. The brain must use past experience (either its own or that of our distant ancestors, which is embedded in our genes) to help interpret the information coming into our eyes. An example is the derivation of a three-dimensional representation of the world from the two-dimensional signals falling onto the retinas of our two eyes, or even onto one of them.

Visual theorists also would agree that seeing is a constructive process, one in which the brain has to carry out complex activities (sometimes called computations) to decide which interpretation to adopt of the ambiguous visual input. ‘Computation’ implies that the brain acts to form a symbolic representation of the visual world, with a mapping (in the mathematical sense) of certain aspects of that world onto elements in the brain.

Ray Jackendoff of Brandeis University postulates, as do most cognitive scientists, that the computations carried out by the brain are largely unconscious and that what we become aware of is the result of these computations. However, while the customary view is that this awareness occurs at the highest levels of the computational system, Jackendoff has proposed an intermediate-level theory of consciousness.

What we see, Jackendoff suggests, relates to a representation of surfaces that are directly visible to us, with their outline, orientation, colour, texture and movement. (This idea has similarities to what the late David C. Marr of the Massachusetts Institute of Technology called a 2½-dimensional sketch. It is more than a two-dimensional sketch because it conveys the orientation of the visible surfaces. It is less than three-dimensional because depth information is not explicitly represented.) In the next stage this sketch is processed by the brain to produce a three-dimensional representation. Jackendoff argues that we are not usually aware of this three-dimensional representation.

An example may make this process clearer. If you look at a person whose back is turned to you, you can see the back of the head but not the face. Nevertheless, your brain infers that the person has a face. You can deduce as much because if the person turned around and had no face, you would be very surprised.

The viewer-centred representation that corresponds to the visible surface of the back of the head is what you are vividly aware of. What your brain infers about the front comes from some kind of three-dimensional representation. This does not mean that information flows only from the surface representation to the three-dimensional one; it almost certainly flows in both directions. When you imagine the front of the face, what you are aware of is a surface representation generated by information from the three-dimensional model.

The distinction between an explicit and an implicit representation is important. An explicit representation is something that is symbolized without further processing. An implicit representation contains the same information but requires further processing to make it explicit. The pattern of coloured pixels on a television screen, for example, contains an implicit representation of objects (say, a person's face), but only the dots and their locations are explicit. When you see a face on the screen, there must be neurons in your brain whose firing, in some sense, symbolizes that face.
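The explicit/implicit distinction can be made concrete with a toy sketch (an illustration of my own, not from the original text): a grid of pixel values contains a bright region only implicitly, and a small amount of processing turns it into a single explicit symbol, the region's centre.

```python
# Toy illustration: a pixel grid implicitly represents a bright blob;
# only individual pixel values and their locations are explicit.
grid = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
]

def make_explicit(pixels, threshold=5):
    """Further processing that makes the blob explicit:
    return the centre of the bright region as (row, col)."""
    bright = [(r, c) for r, row in enumerate(pixels)
              for c, v in enumerate(row) if v > threshold]
    rows = [r for r, _ in bright]
    cols = [c for _, c in bright]
    return (sum(rows) / len(rows), sum(cols) / len(cols))

print(make_explicit(grid))  # -> (1.5, 1.5)
```

The grid plays the role of the television screen's dots; `make_explicit` stands in for the processing that the text says is needed before the face (here, the blob) exists as a symbol rather than as scattered pixels.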

We call this pattern of firing neurons an active representation. A latent representation of a face must also be stored in the brain, probably as a special pattern of synaptic connections between neurons. For example, you probably have a representation of The Sky Dome in your brain, a representation that is usually inactive. If you do think about the Dome, the representation becomes active, with the relevant neurons firing away.

An object, incidentally, may be represented in more than one way: as a visual image, as a set of words and their related sounds, or even as a touch or a smell. These different representations are likely to interact with one another. A representation is also likely to be distributed over many neurons, both locally and more globally, and may not be as simple and straightforward as uncritical introspection might indicate. There is suggestive evidence, partly from studying how neurons fire in various parts of a monkey's brain and partly from examining the effects of certain types of brain damage in humans, that different aspects of a face, and of the implications of a face, may be represented in different parts of the brain.

First, there is the representation of a face as a face: two eyes, a nose, a mouth and so on. The neurons involved are usually not too fussy about the exact size or position of the face in the visual field, nor are they very sensitive to small changes in its orientation. In monkeys, there are neurons that respond best when the face is turned in a particular direction, while others are more concerned with the direction in which the eyes are gazing.

Then there are representations of the parts of a face, separate from those for the face as a whole. What is more, the implications of seeing a face, such as the person's sex, the facial expression, the familiarity or unfamiliarity of the face and, in particular, whose face it is, may each be correlated with neurons firing in other places.

What we are aware of at any moment, in one sense or another, is not a simple matter. There may be a very transient form of fleeting awareness that represents only simple features and does not require an attentional mechanism. From this brief awareness the brain constructs a viewer-centred representation - what we see vividly and clearly - that does require attention. This in turn probably leads to three-dimensional object representations and thence to more cognitive ones.

Representations corresponding to vivid consciousness are likely to have special properties. William James thought that consciousness involves both attention and a form of short-term memory. Most psychologists today would agree with this view. Jackendoff writes that consciousness is ‘enriched’ by attention, implying that whereas attention may not be essential for certain limited types of consciousness, it is necessary for full consciousness. Yet it is not clear exactly which forms of memory are involved. Is long-term memory needed? Some forms of acquired knowledge are so embedded in the machinery of neural processing that they are almost certainly used in becoming aware of something. On the other hand, there is evidence from studies of brain-damaged patients that the ability to lay down new long-term episodic memories is not essential for consciousness to be experienced.

It is difficult to imagine that anyone could be conscious if he or she had no memory whatsoever, even an extremely short one, of what had just happened. Visual psychologists talk of iconic memory, which lasts for a fraction of a second, and working memory (such as that used to remember a new telephone number), which lasts for only a few seconds unless it is rehearsed. It is not clear whether both are essential for consciousness. In any case, the division of short-term memory into these two categories may be too crude.

If these complex processes of visual awareness are localized in parts of the brain, which processes are likely to be where? Many regions of the brain may be involved, but it is almost certain that the cerebral neocortex plays a dominant role. Visual information from the retina reaches the neocortex mainly by way of a part of the thalamus (the lateral geniculate nucleus); another significant visual pathway runs from the retina to the superior colliculus, at the top of the brain stem.

The cortex in humans consists of two intricately folded sheets of nerve tissue, one on each side of the head. These sheets are connected by a large tract of about half a billion axons called the corpus callosum. It is well known that if the corpus callosum is cut, as is done for certain cases of intractable epilepsy, one side of the brain is not aware of what the other side is seeing. In particular, the left side of the brain (in a right-handed person) appears not to be aware of visual information received exclusively by the right side. This shows that the information required for visual awareness cannot reach the other side of the brain by travelling down to the brain stem and, from there, back up. In a normal person, such information can get to the other side only by using the axons in the corpus callosum.

A different part of the brain - the hippocampal system - is involved in one-shot, or episodic, memories that, over weeks and months, it passes on to the neocortex. This system is so placed that it receives inputs from, and projects to, many parts of the brain. Thus, one might suspect that the hippocampal system is the essential seat of consciousness. This is not true: evidence from studies of patients with damaged brains shows that this system is not essential for visual awareness, although a patient lacking it is naturally severely disabled in everyday life, because he cannot remember anything that took place more than a minute or so in the past.

In broad terms, the neocortex of alert animals probably acts in two ways. First, building on the crude and redundant wiring produced by our genes and by embryonic processes, the neocortex draws on visual and other experience to slowly rewire itself and create the categories (or "features") it can respond to. A new category is not fully created in the neocortex after exposure to only one example of it, although some small modifications of the neural connections may be made.

The second function of the neocortex (at least of the visual part of it) is to respond extremely rapidly to incoming signals. To do so, it uses the categories it has learned and tries to find the combinations of active neurons that, because of its experience, are most likely to represent the relevant objects and events in the visual world at that moment. The formation of such coalitions of active neurons may also be influenced by biases coming from other parts of the brain: For example, signals telling it what best to attend to or high-level expectations about the nature of the stimulus.

Consciousness, as James noted, is always changing. These rapidly formed coalitions occur at different levels and interact to form even broader coalitions. They are transient, lasting usually for only a fraction of a second. Because coalitions in the visual system are the basis of what we see, evolution has seen to it that they form as fast as possible; otherwise no animal could survive. The brain is handicapped in forming neuronal coalitions rapidly because, by computer standards, neurons act very slowly. The brain compensates for this relative slowness partly by using very many neurons simultaneously and in parallel, and partly by arranging the system in a roughly hierarchical manner.

If visual awareness at any moment corresponds to sets of neurons firing, then the obvious question is: Where are these neurons located in the brain, and in what way are they firing? Visual awareness is highly unlikely to occupy all the neurons in the neocortex that are firing above their background rate at a particular moment. We would expect that, theoretically, at least some of these neurons would be involved in doing computations (trying to arrive at the best coalitions), whereas others would express the results of these computations; in other words, what we see.

Fortunately, some experimental evidence can be found to back up this theoretical conclusion. A phenomenon called binocular rivalry may help identify the neurons whose firing symbolizes awareness. This phenomenon can be seen in dramatic form in an exhibit prepared by Sally Duensing and Bob Miller at the Exploratorium in San Francisco.

Binocular rivalry occurs when each eye has a different visual input relating to the same part of the visual field. The early visual system on the left side of the brain receives an input from both eyes but sees only the part of the visual field to the right of the fixation point. The converse is true for the right side. If these two conflicting inputs are rivalrous, one sees not the two inputs superimposed but first one input, then the other, and so on in alternation.

In the exhibit, called "The Cheshire Cat," viewers put their heads in a fixed place and are told to keep their gaze fixed. By means of a suitably placed mirror, one of the eyes can look at another person's face, directly in front, while the other eye sees a blank white screen to the side. If the viewer waves a hand in front of this plain screen at the same location in his or her visual field as that occupied by the face, the face is wiped out. The movement of the hand, being visually very salient, has captured the brain's attention. Without attention the face cannot be seen. If the viewer moves the eyes, the face reappears.

In some cases, only part of the face disappears. Sometimes, for example, one eye, or both eyes, will remain. If the viewer looks at the smile on the person's face, the face may disappear, leaving only the smile. For this reason, the effect has been called the Cheshire Cat effect, after the cat in Lewis Carroll's Alice's Adventures in Wonderland.

Although recording activity in individual neurons in a human brain is very difficult, such studies can be done in monkeys. A simple example of binocular rivalry has been studied in a monkey by Nikos K. Logothetis and Jeffrey D. Schall, both then at M.I.T. They trained a macaque to keep its eyes still and to signal whether it sees upward or downward movement of a horizontal grating. To produce rivalry, upward movement is projected into one of the monkey's eyes and downward movement into the other, so that the two images overlap in the visual field. The monkey signals that it sees up and down movements alternately, just as humans would. Even though the motion stimulus coming into the monkey's eyes is always the same, the monkey's percept changes every second or so.

Cortical area MT (which some researchers prefer to label V5) is an area mainly concerned with movement. What do the neurons in MT do when the monkey's percept is sometimes up and sometimes down? (The researchers studied only the monkey's first response.) The simplified answer - the actual data are more complicated - is that whereas the firing of some of the neurons correlates with the changes in the percept, for others the average firing rate is unchanged and independent of which direction of movement the monkey is seeing at that moment. Thus, it is unlikely that the firing of all the neurons in the visual neocortex at one particular moment corresponds to the monkey's visual awareness. Exactly which neurons do correspond to awareness remains to be discovered.

We have postulated that when we see something clearly, there must be neurons actively firing that stand for what we see. This might be called the activity principle. Here, too, there is some experimental evidence. One example is the firing of neurons in a specific cortical visual area in response to illusory contours. Another, perhaps more striking, case is the filling in of the blind spot. The blind spot in each eye is caused by the lack of photoreceptors in the area of the retina where the optic nerve leaves the retina and projects to the brain. Its location is about 15 degrees from the fovea (the visual centre of the eye). Yet if you close one eye, you do not see a hole in your visual field.

Philosopher Daniel C. Dennett of Tufts University is unusual among philosophers in that he is interested both in psychology and in the brain. This interest is much to be welcomed. In a recent book, Consciousness Explained, he has argued that talking about filling in is wrong. He concludes, correctly, that "an absence of information is not the same as information about an absence." From this general principle he argues that the brain does not fill in the blind spot but ignores it.

Dennett's argument by itself, however, does not establish that filling in does not occur; it only suggests that it might not. Dennett also states that "your brain has no machinery for [filling in] at this location." This statement is incorrect. The primary visual cortex lacks a direct input from one eye, but normal "machinery" is there to deal with the input from the other eye. Ricardo Gattass and his colleagues at the Federal University of Rio de Janeiro have shown that in the macaque some of the neurons in the blind-spot area of the primary visual cortex do respond to input from both eyes, probably assisted by inputs from other parts of the cortex. Moreover, in the case of simple filling in, some of the neurons in that region respond as if they were actively filling in.

Thus, Dennett's claim about blind spots is incorrect. In addition, psychological experiments by Vilayanur S. Ramachandran have shown that what is filled in can be quite complex, depending on the overall context of the visual scene. How, he argues, can your brain be ignoring something that is in fact commanding attention?

Filling in, therefore, is not to be dismissed as nonexistent or unusual. It probably represents a basic interpolation process that can occur at many levels in the neocortex. It is, incidentally, a good example of what is meant by a constructive process.

How can we discover the neurons whose firing symbolizes a particular percept? William T. Newsome and his colleagues at Stanford University have done a series of brilliant experiments on neurons in cortical area MT of the macaque's brain. By studying a neuron in area MT, we may discover that it responds best to very specific visual features having to do with motion. A neuron, for instance, might fire strongly in response to the movement of a bar in a particular place in the visual field, but only when the bar is oriented at a certain angle, moving in one of the two directions perpendicular to its length within a certain range of speed.

Exciting just a single neuron is technically difficult, but it is known that neurons that respond to roughly the same position, orientation and direction of movement of a bar tend to be located near one another in the cortical sheet. The experimenters taught the monkey a simple task in movement discrimination using a mixture of dots, some moving randomly, the rest all in one direction. They showed that electrical stimulation of a small region in the right place in cortical area MT would bias the monkey's motion discrimination, almost always in the expected direction.

Thus, the stimulation of these neurons can influence the monkey's behaviour and probably its visual percept. Such experiments do not, however, show decisively that the firing of such neurons is the exact neural correlate of the percept. The correlate could be only a subset of the neurons being activated. Or perhaps the real correlate is the firing of neurons in another part of the visual hierarchy that is strongly influenced by the neurons activated in area MT.

These same reservations apply also to cases of binocular rivalry. Clearly, the problem of finding the neurons whose firing symbolizes a particular percept is not going to be easy. It will take many careful experiments to track them down even for one kind of percept.

The purpose of vivid visual awareness is obviously to feed into the cortical areas concerned with the implications of what we see. From there the information shuttles on the one hand to the hippocampal system, to be encoded (temporarily) into long-term episodic memory, and on the other to the planning levels of the motor system. Nevertheless, is it possible to go from a visual input to a behavioural output without any relevant visual awareness?

That such a process can happen is demonstrated by the remarkable class of patients with ‘blind-sight’. These patients, all of whom have suffered damage to their visual cortex, can point with fair accuracy at visual targets or track them with their eyes while vigorously denying seeing anything. In fact, these patients are as surprised as their doctors by their abilities. The amount of information that ‘gets through’, however, is limited: Blind-sight patients have some ability to respond to wavelength, orientation and motion, yet they cannot distinguish a triangle from a square.

It is naturally of great interest to know which neural pathways are being used in these patients. Investigators originally suspected that the pathway ran through the superior colliculus. Recent experiments suggest that a direct but weak connection may be involved between the lateral geniculate nucleus and other visual areas in the cortex. It is unclear whether an intact primary visual cortex is essential for immediate visual awareness. Conceivably the visual signal in blind-sight is so weak that the neural activity cannot produce awareness, although it remains strong enough to get through to the motor system.

Normal-seeing people regularly respond to visual signals without being fully aware of them. In automatic actions, such as swimming or driving a car, complex but stereotypical actions occur with little, if any, associated visual awareness. In other cases, the information conveyed is either very limited or very attenuated. Thus, while we can function without visual awareness, our behaviour without it is restricted.

Clearly, it takes a certain amount of time to experience a conscious percept. It is difficult to determine just how much time is needed for an episode of visual awareness, but one aspect of the problem that can be demonstrated experimentally is that signals received close together in time are treated by the brain as simultaneous.

A disk of red light is flashed for, say, 20 milliseconds, followed immediately by a 20-millisecond flash of green light in the same place. The subject reports that he did not see a red light followed by a green light. Instead he saw a yellow light, just as he would have if the red and the green light had been flashed simultaneously. Yet the subject could not have experienced yellow until after the information from the green flash had been processed and integrated with the preceding red one.
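The perceived yellow behaves like ordinary additive colour mixing of the two stimuli over the integration window. A toy sketch (my own illustration, not part of the experiment) makes the arithmetic explicit, assuming simple RGB triples and a plain average over the processing period:

```python
# Additive mixing sketch: averaging a red and a green stimulus that fall
# within one processing period yields a yellow, matching the report that
# subjects see yellow rather than red followed by green.
RED = (255, 0, 0)
GREEN = (0, 255, 0)

def integrate(a, b):
    """Average two RGB stimuli presented within one integration window."""
    return tuple((x + y) // 2 for x, y in zip(a, b))

print(integrate(RED, GREEN))  # -> (127, 127, 0), a yellow
```

The point of the sketch is only that the output colour cannot exist until both inputs have arrived, which is why the yellow percept must follow the processing of the green flash.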

Experiments of this type led psychologist Robert Efron, now at the University of California at Davis, to conclude that the processing period for perception is about 60 to 70 milliseconds. Similar periods are found in experiments with tones in the auditory system. It is always possible, however, that the processing times may be different in higher parts of the visual hierarchy and in other parts of the brain. Processing is also more rapid in trained, compared with naive, observers.

Because attention appears to be involved in some forms of visual awareness, it would help if we could discover its neural basis. Eye movement is a form of attention, since the area of the visual field in which we see with high resolution is remarkably small, roughly the area of the thumbnail at arm's length. Thus, we move our eyes to gaze directly at an object in order to see it more clearly. Our eyes usually move three or four times a second. Psychologists have shown, however, that there appears to be a faster form of attention that moves around, in some sense, when our eyes are stationary.

The exact psychological nature of this faster attentional mechanism is still unclear. Several neuroscientists, however, including Robert Desimone and his colleagues at the National Institute of Mental Health, have shown that the rate of firing of certain neurons in the macaque's visual system depends on what the monkey is attending to in the visual field. Thus, attention is not solely a psychological concept; it also has neural correlates that can be observed. A number of researchers have found that the pulvinar, a region of the thalamus, appears to be involved in visual attention. One would like to believe that the thalamus deserves to be called ‘the organ of attention’, but this status has yet to be established.

The major problem is to find what activity in the brain corresponds directly to visual awareness. It has been speculated that each cortical area produces awareness of only those visual features that are ‘columnar’, or arranged in stacks or columns of neurons perpendicular to the cortical surface. Thus, the primary visual cortex could code for orientation and area MT for motion. So far experimentalists have not found one particular region in the brain where all the information needed for visual awareness appears to come together. Dennett has dubbed such a hypothetical place ‘The Cartesian Theatre’. He argues on theoretical grounds that it does not exist.

Awareness seems to be distributed not just on a local scale but more widely over the neocortex. Vivid visual awareness is unlikely to be distributed over every cortical area, because some areas show no response to visual signals. Awareness might, for example, be associated with only those areas that connect back directly to the primary visual cortex, or alternatively with those areas that project into one another's layer four. (The latter areas are always at the same level in the visual hierarchy.)

The key issue, then, is how the brain forms its global representations from visual signals. If attention is crucial for visual awareness, the brain could form representations by attending to just one object at a time, rapidly moving from one object to the next. For example, the neurons representing all the different aspects of the attended object could all fire together very rapidly for a short period, possibly in rapid bursts.

This fast, simultaneous firing might not only excite those neurons that symbolized the implications of that object but also temporarily strengthen the relevant synapses so that this particular pattern of firing could be quickly recalled in the form of short-term memory. If only one representation needs to be held in short-term memory, as in remembering a single task, the neurons involved may continue to fire for a period.

A problem arises if it is imperative to be aware of more than one object at exactly the same time. If all the attributes of two or more objects were represented by neurons firing rapidly, their attributes might be confused. The colour of one might become attached to the shape of another. This happens sometimes in very brief presentations.

Some time ago Christoph von der Malsburg, now at the Ruhr-Universität Bochum, suggested that this difficulty would be circumvented if the neurons associated with any one object all fired in synchrony (that is, if their times of firing were correlated) but out of synchrony with those representing other objects. Recently two groups in Germany reported that there does appear to be correlated firing between neurons in the visual cortex of the cat, often in a rhythmic manner, with a frequency in the 35- to 75-hertz range, sometimes called 40-hertz, or γ, oscillation.
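The binding-by-synchrony idea can be sketched in a toy simulation (my own illustration, not the German groups' experiments): neurons tied to the same object share spike times, so a simple coincidence count separates them from neurons representing a different object.

```python
import random

# Toy binding-by-synchrony sketch: a neuron "bound" to an object copies
# that object's shared firing rhythm; an unrelated neuron fires with its
# own independent rhythm at the same average rate.
random.seed(1)

def spike_train(n_bins, rate, phase=None):
    """Random spikes in discrete time bins; if 'phase' is given, the
    neuron is perfectly locked to that shared rhythm."""
    if phase is not None:
        return phase[:]  # synchronized copy of the shared rhythm
    return [1 if random.random() < rate else 0 for _ in range(n_bins)]

def coincidences(a, b):
    """Count bins in which both neurons fire together."""
    return sum(x and y for x, y in zip(a, b))

template = spike_train(1000, 0.04)            # object A's shared rhythm
same_object = spike_train(1000, 0.04, template)
other_object = spike_train(1000, 0.04)        # independent rhythm

print(coincidences(template, same_object))    # high: every spike coincides
print(coincidences(template, other_object))   # low: only chance coincidences
```

In this caricature the synchrony is perfect; real cortical correlations are far weaker and rhythmic rather than bin-exact, but the same coincidence logic is what would let downstream neurons read out which attributes belong together.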

Von der Malsburg's proposal prompted the suggestion that this rhythmic and synchronized firing might be the neural correlate of awareness and that it might serve to bind together activity concerning the same object in different cortical areas. The matter is still undecided, but at present the fragmentary experimental evidence does little to support such an idea. Another possibility is that the 40-hertz oscillations may help distinguish figure from ground or assist the mechanism of attention.

Are there some particular types of neurons, distributed over the visual neocortex, whose firing directly symbolizes the content of visual awareness? One very simplistic hypothesis is that the activities in the upper layers of the cortex are largely unconscious ones, whereas the activities in the lower layers (layers five and six) mostly correlate with consciousness. We have wondered whether the pyramidal neurons in layer five of the neocortex, especially the larger ones, might play this latter role.

These are the only cortical neurons that project right out of the cortical system (that is, not to the neocortex, the thalamus or the claustrum). If visual awareness represents the results of neural computations in the cortex, one might expect that what the cortex sends elsewhere would symbolize those results. What is more, the neurons in layer five show an unusual propensity to fire in bursts. The idea that layer five neurons may directly symbolize visual awareness is attractive, but it still is too early to tell whether there is anything in it.

Visual awareness is clearly a difficult problem. More work is needed on the psychological and neural basis of both attention and very short-term memory. Studying the neurons when a percept changes, even though the visual input is constant, should be a powerful experimental paradigm. We need to construct neurobiological theories of visual awareness and test them using a combination of molecular, neurobiological and clinical imaging studies.

It is strongly believed that once we have mastered the secret of this simple form of awareness, we may be close to understanding a central mystery of human life: how the physical events occurring in our brains while we think and act in the world relate to our subjective sensations; that is, how the brain relates to the mind.

As an afterthought, it now seems likely that there are rapid ‘on-line’ systems for stereotyped motor responses such as hand or eye movements. These systems are unconscious and lack memory. Conscious seeing, on the other hand, seems to be slower and more subject to visual illusions. The brain needs to form a conscious representation of the visual scene that it can then use for many different actions or thoughts. Precisely how all these pathways work, and how they interact, is far from clear.

Still, it is probably too early to draw firm conclusions about the exact neural correlates of visual consciousness. It has been suggested, on theoretical grounds based on the neuroanatomy of the macaque monkey, that primates are not directly aware of what is happening in the primary visual cortex, even though most of the visual information flows through it. This hypothesis is supported by some experimental evidence, but it is still controversial.

Consider a concrete example: if you mentally rotate the letter ‘N’ 90 degrees to the right, is a new letter formed? In answering such questions, scientists say, most people conjure up an image in their mind's eye, mentally ‘look’ at it, add details one at a time and describe what they see. They seem to have a definite picture in their heads, but where in the brain are these images formed? How are they generated? How do people ‘move things around’ in their imaginations?

Using clues from brain-damaged patients and advanced brain imaging techniques, neuroscientists have now found that the brain uses virtually identical pathways for seeing objects and for imagining them, only it uses these pathways in reverse.

In the process of human vision, a stimulus in the outside world is passed from the retina to the primary visual cortex and then to higher centres until an object or event is recognized. In mental imaging, a stimulus originates in higher centres and is passed down to the primary visual cortex, where it is recognized.

The implications are beguiling. Scientists say that for the first time they are glimpsing the biological basis for abilities that make some people better at math or art or flying fighter aircraft. They can now explain why imagining oneself shooting baskets like Michael Jordan can improve one’s athletic performance. In a finding that raises troubling questions about the validity of eyewitness testimony, they can show that an imagined object is, to the observer’s brain at least, every bit as real as one that is seen.

“People have always wondered if there are pictures in the brain.” More recently, the debate centred on a specific query: As a form of thought, is mental imagery rooted in the abstract symbols of language or in the biology of the visual system?

The biology arguments are winning converts every day. The new findings are based on the notion that mental capacities like memory, perception, mental imagery, language and thought are rooted in complex underlying structures in the brain. Thus an image held in the mind’s eye has physical rather than ethereal properties. Mental imagery research has developed in tandem with research on the human visual system; each provides clues to the other, helping to tease out the details of a highly complex system.

Vision is not a single process but the linking of subsystems that process specific aspects of vision. To understand how this works, consider looking at an apple on a picnic table ten feet away. Light reflects off the apple, hits the retina and is sent through nerve fibres to an early visual way station that we may call the visual buffer. Here the apple image is literally mapped onto the surface of brain tissue as it appears in space, with high resolution. You can think of the visual buffer as a screen: a picture can be displayed on it from the camera, which is your eyes, or from a videotape recorder, which is your memory.

In this case, the image of the apple is held on the screen while a variety of its features are examined separately. Still, the brain does not yet know that it is seeing an apple. Next, distinct features of the apple are sent to two higher subsystems for further analysis, often referred to as the ‘what’ system and the ‘where’ system. The brain needs to match the primitive apple pattern with memories and knowledge about apples, seeking that knowledge from visual memories held, like videotapes, in the brain.

The ‘what’ system, in the temporal lobe, contains cells that are tuned to specific shapes and colours of objects; some respond to red, round objects in an infinite variety of positions, ignoring location in space. Thus, whether the apple is on a distant tree, on the picnic table or in front of your nose, it will still stimulate cells tuned for red, round objects - which might be apples, beach balls or tomatoes.

The ‘where’ system, in the parietal lobe, contains cells that are tuned to fire when objects are in different locations. If the apple is far away, one set of cells is activated, while another set fires if the apple is close up. Thus the brain has a way of knowing where objects are in space so the body can navigate accordingly.

When cells in the ‘what’ and ‘where’ systems are stimulated, they may combine their signals in yet a higher subsystem where associative memories are stored. This system is like a card file in which visual memories, as if held on videotapes, can be looked up and activated. If the signals from the ‘what’ and ‘where’ systems find a good match in associative memory, you recognize the object as an apple. You also know what it tastes and smells like, that it has seeds, that it can be made into your favourite pie, and everything else stored in your brain about apples.

However, sometimes, recognition does not occur at the level of associative memory. Because it is far away, the red object on the picnic table could be a tomato or an apple. You are not sure of its identity, and so, another level of analysis kicks in.

This highest level, in the frontal lobe, is where decisions are made; to use the same analogy, it is like a catalogue for the videotapes in the brain. You look up features about the image to help you identify it. A tomato has a pointed leaf, while an apple has a slender stem. When the apple stem is found at this higher level, the brain decides that it has an apple in its visual field.

Signals are then fired back down through the system to the visual buffer and the apple is recognized. Significantly, every visual area that sends information upstream through nerve fibres also receives information back from the area above it. Information flows richly in both directions at all times.

Mental imagery is the result of this duality. Instead of a visual stimulus, a mental stimulus activates the system. The stimulus can be anything, including a memory, an odour, a face, a reverie, a song or a question - for example, whether a cat has curved claws. Images are based on previously encoded representations of shape, so you look up the videotape in associative memory for cat.

When that subsystem is initiated, a general image of a cat is mapped onto the screen, or visual buffer, in the primary visual cortex. It is a stripped-down version of a cat, and everyone’s version is different. This preliminary mapping lacks the detail needed to decide whether the cat has curved claws. To find out, the mind’s eye shifts attention back to the higher subsystems where detailed features are stored. You activate the curved-claws tape, zoom back down to the cat’s front paws and add the claws to the image. Thus each image is built up, a part at a time.

The more complex the image, the more time it takes to conjure it in the visual buffer. On the basis of brain scans using the technique known as positron emission tomography, it is estimated that adding each new part takes from 75 to 100 thousandths of a second.

The visual system maps imagined objects and scenes precisely, mimicking the real world, so that you can scan and study an image as if it were there.

How can this be demonstrated? One way is to ask people to imagine objects at different sizes: “Imagine a tiny honeybee. What colour is its head?” To answer, people have to take time to zoom in on the bee’s head. Conversely, objects can be imagined so large that they overflow the visual field: “Imagine walking toward a car. It looms larger as you get closer to it. There comes a point where you can no longer see the whole car at once; it seems to overflow the screen in your mind’s eye.”

People with brain damage often demonstrate that the visual systems are doing double duty. For example, stroke patients who lose the ability to see colours also cannot imagine colours.

An epilepsy patient experienced a striking change in her ability to imagine objects after her right occipital lobe was removed to reduce seizures. Before surgery, the woman estimated she would stand, in her mind’s eye, about 14 feet from a horse before it overflowed her visual field. After surgery, she estimated the overflow distance at 34 feet. Her field of mental imagery had been reduced by half.

Another patient suffered damage affecting his ‘what’ system while his ‘where’ system remained intact. Asked to imagine the colour of the inside of a watermelon, he does not know, and if pressed he might guess blue. Yet asked whether Toronto is closer to London than to Winnipeg, he answers correctly and instantly.

Imaging studies of healthy brains produce similar findings. When a person is asked to look at an object and then to imagine it, the same brain areas are activated. When people add detail to images, they use the same circuits used in vision. Interestingly, people who describe themselves as vivid imaginers show stronger activation of the relevant brain areas.

People use imagery in their everyday lives to call up information from memory, to reason and to learn new skills, the scientists say. It can also lead to creativity. Albert Einstein apparently got his first insight into relativity when he imagined chasing after and matching the speed of a beam of light.

Imagery can also improve athletic skills. When you see a gifted athlete move in a particular way, you note how he or she moves, and you can use that information to programme your own movements. The brain uses the same representations in the ‘where’ system to help direct both actual and imagined movements; refining these representations in imagery will therefore transfer to actual performance, provided the motions are also physically practised.

Humans exhibit vast individual differences in the various components of mental imagery, which may help explain certain talents and predilections. Fighter pilots, for example, can imagine the rotation of complex objects in a flash, while most people need much more time for such tasks.

Studies now in progress examine the brains of mathematicians and artists with a new imaging machine that reveals individual differences in the way brains are biologically wired. Researchers are looking to see whether people who are good at geometry have different circuitry from those who are good at algebra.

In a philosophical conundrum arising from the new research, it seems that people can confuse what is real with what is imagined, raising questions about eyewitness testimony and about memory itself.

Meanwhile, expectation plays a powerful role in visual perception: it can allow you to see an object when you have only part of the picture. Expectations allow you to see an apple, for instance, because its various fragments can drive the system into producing the image of an apple in your visual buffer. In other words, you prime yourself so strongly that you play the apple tape from your memory banks. People can thus be fooled by their mind’s eye. Imagine a man standing before a frightened store clerk; you quickly assume a robbery is under way. It is dark and he is in shadow. Because you expect to see a gun, your thresholds are lowered and you may run the tape for a gun, even though no gun is there. As far as your brain is concerned, it saw a gun, yet the gun may not have been real.

Luckily, inputs from the eye tend to be much stronger than inputs from imagination, but on a dark night, under certain circumstances, it is easy to be fooled by one’s own brain.

It is amazing that imagination and reality are not confused more often. Images are fuzzier and less coherent than real memories, and humans are able to differentiate them in part by how plausible they seem.

Our new epistemological situation suggests that questions regarding the character of the whole no longer lie within the domain of science. Science inherited the classical assumption that knowledge of all the constituent parts of a mechanistic universe is equal to knowledge of the whole. This paradigm sanctioned the Cartesian division between mind and world that became a pervasive preoccupation in Western philosophy, art and literature beginning in the seventeenth century. It explains in no small part why many humanists and social scientists feel that science concerns itself only with the mechanisms of physical reality and is therefore indifferent or hostile to the experience of human subjectivity - the world in which a human being, with all his or her myriad sensations, feelings, thoughts, values and beliefs, lives a life and comes to an end.

Nevertheless, man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the cosmos, in terms that reflect ‘reality’. By using the processes of nature as metaphors to describe the forces by which nature operates upon and within man, we come as close to describing ‘reality’ as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which naturally differs for different ages and cultures, and develops and changes over the course of time. For this reason, metaphor and myth will always be necessary to provide comprehensible guides to living. In this way, man’s imagination and intellect play vital roles in his survival and evolution.

Notwithstanding, differing ethics underlie such different conceptions of human life as those of the classical Greeks, of Christianity and of Judaism - the religion and culture of the Hebrews, the native inhabitants of the ancient kingdom of Judah, whose lunisolar calendar marks the events of the Jewish year and dates the creation of the world at 3761 BC.

Europe owes the Jews no small thanks for making people think more logically and for establishing cleaner intellectual habits - nobody more so than the Germans, who are a lamentably déraisonnable [unreasonable] race . . . Wherever Jews have won influence, they have taught men to make finer distinctions, draw more rigorous inferences, and write in a more luminous and cleanly fashion; their task was ever to bring a people ‘to listen to raison’.

His position is very radical. Nietzsche does not simply deny that knowledge, construed as the adequate representation of the world by the intellect, exists. He also refuses the pragmatist identification of knowledge and truth with usefulness: he writes that we think we know what we think is useful, and that we can be quite wrong about the latter.

Nietzsche’s view, his ‘perspectivism’, depends on his claim that there is no sensible conception of a world independent of human interpretation, to which interpretations would have to correspond if they were to constitute knowledge. He sums up this highly controversial position in The Will to Power: facts are precisely what there are not, only interpretations.

Perspectivism does not deny that particular views can be true. Like some versions of contemporary anti-realism, it attributes to specific approaches truths about the facts those approaches themselves determine. But it refuses to envisage a single independent set of facts to be accounted for by all theories. Thus Nietzsche grants the truth of specific scientific theories but denies that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’: neither the facts science addresses nor the methods it employs are privileged. Scientific theories serve the purposes for which they have been devised, but these have no priority over the many other purposes of human life.

Those curious about the relation between mind and drive will find that both Freud and Nietzsche offer some uncanny theories.

In the late 19th century, the Viennese neurologist Sigmund Freud developed a theory of personality and a system of psychotherapy known as psychoanalysis. According to this theory, people are strongly influenced by unconscious forces, including innate sexual and aggressive drives. Freud recounted the early resistance to his ideas and their later acceptance. From the outset, psychoanalysis attracted followers, many of whom later proposed competing theories. As a group, these neo-Freudians shared the assumption that the unconscious plays an important role in a person’s thoughts and behaviours. Most parted company with Freud, however, over his emphasis on sex as a driving force. For example, the Swiss psychiatrist Carl Jung theorized that all humans inherit a collective unconscious that contains universal symbols and memories from their ancestral past. The Austrian physician Alfred Adler theorized that people are primarily motivated to overcome inherent feelings of inferiority; he wrote about the effects of birth order in the family and coined the term sibling rivalry. Karen Horney, a German-born American psychiatrist, argued that humans have a basic need for love and security, and become anxious when they feel isolated and alone.

Motivated by a desire to uncover unconscious aspects of the psyche, psychoanalytic researchers devised what is known as projective tests. A projective test asks people to respond to an ambiguous stimulus such as a word, an incomplete sentence, an inkblot, or an ambiguous picture. These tests are based on the assumption that if a stimulus is vague enough to accommodate different interpretations, then people will use it to project their unconscious needs, wishes, fears, and conflicts. The most popular of these tests are the Rorschach Inkblot Test, which consists of ten inkblots, and the Thematic Apperception Test, which consists of drawings of people in ambiguous situations.

Psychoanalysis has been criticized on various grounds and is not as popular as in the past. However, Freud’s overall influence on the field has been deep and lasting, particularly his ideas about the unconscious. Today, most psychologists agree that people can be profoundly influenced by unconscious forces, and that people often have a limited awareness of why they think, feel, and behave as they do.

The techniques of psychoanalysis, and much of the psychoanalytic theory based on its application, were developed by Sigmund Freud. His work concerning the structure and functioning of the human mind had far-reaching significance, both practically and scientifically, and it continues to influence contemporary thought.

The first of Freud's innovations was his recognition of unconscious psychic processes that follow laws different from those that govern conscious experience. Under the influence of the unconscious, thoughts and feelings that belong together may be shifted or displaced out of context; two disparate ideas or images may be condensed into one; thoughts may be dramatized in images rather than expressed as abstract concepts; and certain objects may be represented symbolically by images of other objects, although the resemblance between the symbol and the original object may be vague or farfetched. The laws of logic, indispensable for conscious thinking, do not apply to these unconscious mental productions.

Recognition of these modes of operation in unconscious mental processes made possible the understanding of such previously hard-to-grasp psychological phenomena as dreaming. Through analysis of unconscious processes, Freud saw dreams as serving to protect sleep against disturbing impulses arising from within and related to early life experiences. Thus, unacceptable impulses and thoughts, called the latent dream content, are transformed into a conscious, although no longer immediately comprehensible, experience called the manifest dream. Knowledge of these unconscious mechanisms permits the analyst to reverse the so-called dream work - that is, the process by which the latent dream is transformed into the manifest dream - and, through dream interpretation, to recognize its underlying meaning.

A basic assumption of Freudian theory is that unconscious conflicts involve instinctual impulses, or drives, that originate in childhood. As these unconscious conflicts are recognized by the patient through analysis, his or her adult mind can find solutions that were unattainable to the immature mind of the child. This depiction of the role of instinctual drives in human life is a unique feature of Freudian theory.

According to Freud's doctrine of infantile sexuality, adult sexuality is a product of a complex process of development, beginning in childhood, involving a variety of body functions or areas (oral, anal, and genital zones), and corresponding to various stages in the relation of the child to adults, especially to parents. Of crucial importance is the so-called Oedipal period, occurring at four to six years of age, because at this stage of development the child for the first time becomes capable of an emotional attachment to the parent of the opposite sex that is similar to the adult's relationship to a mate; the child simultaneously reacts as a rival to the parent of the same sex. Physical immaturity dooms the child's desires to frustration and his or her first step toward adulthood to failure. Intellectual immaturity further complicates the situation because it makes children afraid of their own fantasies. The extent to which the child overcomes these emotional upheavals and to which these attachments, fears, and fantasies continue to live on in the unconscious greatly influences later life, especially love relationships.

The conflicts occurring in the earlier developmental stages are no less significant as a formative influence, because these problems represent the earliest prototypes of such basic human situations as dependency on others and relationship to authority. Also basic in moulding the personality of the individual is the behaviour of the parents toward the child during these stages of development. The fact that the child reacts not only to objective reality but also to fantasy distortions of reality greatly complicates even the best-intentioned educational efforts.

The effort to clarify the bewildering number of interrelated observations uncovered by psychoanalytic exploration led to the development of a model of the structure of the psychic system. Three functional systems are distinguished that are conveniently designated as the id, ego, and superego.

The first system refers to the sexual and aggressive tendencies that arise from the body, as distinguished from the mind. Freud called these tendencies ‘Triebe’, which literally means “drives,” but is often inaccurately translated as “instincts” to show their innate character. These inherent drives claim immediate satisfaction, which is experienced as pleasurable; the id thus is dominated by the pleasure principle. In his later writings, Freud tended more toward psychological rather than biological conceptualization of the drives.

How the conditions for satisfaction are to be brought about is the task of the second system, the ego, which is the domain of such functions as perception, thinking, and motor control that can accurately assess environmental conditions. To fulfill its function of adaptation, or reality testing, the ego can enforce the postponement of satisfaction of the instinctual impulses originating in the id. To defend itself against unacceptable impulses, the ego develops specific psychic means, known as defence mechanisms. These include repression, the exclusion of impulses from conscious awareness; projection, the process of ascribing to others one's own unacknowledged desires; and reaction formation, the establishment of a pattern of behaviour directly opposed to a strong unconscious need. Such defence mechanisms are put into operation whenever anxiety signals a danger that the original unacceptable impulses may reemerge.

An id impulse becomes unacceptable not only because of a temporary need to postpone its satisfaction until suitable reality conditions can be found, but more often because of a prohibition imposed on the individual by others, originally the parents. All these demands and prohibitions are the major content of the third system, the superego, the function of which is to control the ego according to the internalized standards of parental figures. If the demands of the superego are not fulfilled, the person may feel shame or guilt. Because the superego, in Freudian theory, originates in the struggle to overcome the Oedipal conflict, it has a power akin to an instinctual drive, is in part unconscious, and can cause feelings of guilt not justified by any conscious transgression. The ego, having to mediate among the demands of the id, the superego, and the outside world, may not be strong enough to reconcile these conflicting forces. The more the ego is impeded in its development because of being enmeshed in its earlier conflicts, called fixations or complexes, or the more it reverts to earlier satisfactions and archaic modes of functioning, known as regression, the greater is the likelihood of succumbing to these pressures. Unable to function normally, it can maintain its limited control and integrity only at the price of symptom formation, in which the tensions are expressed in neurotic symptoms.

A cornerstone of modern psychoanalytic theory and practice is the concept of anxiety, which calls into play appropriate mechanisms of defence against certain danger situations. These danger situations, as described by Freud, are the fear of abandonment by or the loss of the loved one (the object), the risk of losing the object's love, the danger of retaliation and punishment, and, finally, the hazard of reproach by the superego. Thus, symptom formation, character and impulse disorders, perversions, and sublimations represent compromise formations - different forms of an adaptive integration that the ego tries to achieve by reconciling, more or less successfully, the conflicting forces in the mind.

Various psychoanalytic schools have adopted other names for their doctrines to show deviations from Freudian theory.

Carl Gustav Jung, one of the earliest pupils of Freud, eventually created a school that he preferred to call analytical psychology. Like Freud, Jung used the concept of the libido; however, to him it meant not only sexual drives but a composite of all creative instincts and impulses and the entire motivating force of human conduct. According to his theories, the unconscious is composed of two parts: the personal unconscious, which contains the residue of the individual's own experience, and the collective unconscious, the reservoir of the experience of the human race. In the collective unconscious exist many primordial images, or archetypes, common to all individuals of a given country or historical era. Archetypes take the form of bits of intuitive knowledge or apprehension and normally exist only in the collective unconscious of the individual. When the conscious mind contains no images, however, as in sleep, or when the consciousness is caught off guard, the archetypes commence to function. Archetypes are primitive modes of thought and tend to personify natural processes, as in such mythological concepts as good and evil spirits, fairies, and dragons. The mother and the father also serve as prominent archetypes.

An important concept in Jung's theory is the existence of two basically different types of personality, mental attitude, and function. When the libido and the individual's general interest are turned outward toward people and objects of the external world, he or she is said to be extroverted. When the reverse is true, and libido and interest are centred on the individual, he or she is said to be introverted. In a completely normal individual these two tendencies alternate, neither dominating, but usually the libido is directed mainly in one direction or the other; as a result, two personality types are recognizable.

Jung rejected Freud's distinction between the ego and superego and recognized a part of the personality, similar to the superego, that he called the persona. The persona consists of what a person appears to be to others, in contrast to what he or she actually is. The persona is the role the individual chooses to play in life, the total impression he or she wishes to make on the outside world.

Alfred Adler, another of Freud's pupils, differed from both Freud and Jung in stressing that the motivating force in human life is the sense of inferiority, which begins when an infant can comprehend the existence of other people who are better able to care for themselves and cope with their environment. From the moment the feeling of inferiority is established, the child strives to overcome it. Because inferiority is intolerable, the compensatory mechanisms set up by the mind may get out of hand, resulting in self-centred neurotic attitudes, overcompensations, and a retreat from the real world and its problems.

Adler laid particular stress on inferiority feelings arising from what he regarded as the three most important relationships: those between the individual and work, friends, and loved ones. The avoidance of inferiority feelings in these relationships leads the individual to adopt a life goal that is often not realistic and is frequently expressed as an unreasoning will to power and dominance, leading to every type of antisocial behaviour from bullying and boasting to political tyranny. Adler believed that analysis can foster a sane and rational “community feeling” that is constructive rather than destructive.

Another student of Freud, Otto Rank, introduced a new theory of neurosis, attributing all neurotic disturbances to the primary trauma of birth. In his later writings he described individual development as a progression from complete dependence on the mother and family, to a physical independence coupled with intellectual dependence on society, and finally to complete intellectual and psychological emancipation. Rank also laid great importance on the will, defined as “a positive guiding organization and integration of self, which creatively utilizes, as well as inhibits and controls, the instinctual drives.”

Later noteworthy modifications of psychoanalytic theory include those of the American psychoanalysts Erich Fromm, Karen Horney, and Harry Stack Sullivan. The theories of Fromm lay particular emphasis on the concept that society and the individual are not separate and opposing forces, that the nature of society is determined by its historic background, and that the needs and desires of individuals are largely formed by their society. As a result, Fromm believed, the fundamental problem of psychoanalysis and psychology is not to resolve conflicts between fixed and unchanging instinctive drives in the individual and the fixed demands and laws of society, but to bring about harmony and an understanding of the relationship between the individual and society. Fromm also stressed the importance to the individual of developing the ability to use his or her mental, emotional, and sensory powers fully.

Horney worked primarily in the field of therapy and the nature of neuroses, which she defined as of two types: situation neuroses and character neuroses. Situation neuroses arise from the anxiety attendant on a single conflict, such as being faced with a difficult decision. Although they may paralyse the individual temporarily, making it impossible to think or act efficiently, such neuroses are not deeply rooted. Character neuroses are characterized by a basic anxiety and a basic hostility resulting from a lack of love and affection in childhood.

Sullivan believed that all development can be described exclusively in terms of interpersonal relations. Character types and neurotic symptoms are explained as results of the struggle against anxiety arising from the individual's relations with others, and are seen as security systems maintained to allay that anxiety.

An important school of thought is based on the teachings of the British psychoanalyst Melanie Klein. Because most of Klein's followers worked with her in England, this has become known as the English school. Its influence, nevertheless, is very strong throughout the European continent and in South America. Its principal theories were derived from observations made in the psychoanalysis of children. Klein posited the existence of complex unconscious fantasies in children under the age of six months. The principal source of anxiety arises from the threat to existence posed by the death instinct. Depending on how concrete representations of the destructive forces are dealt with in the unconscious fantasy life of the child, two basic early mental attitudes result that Klein characterized as a “depressive position” and a “paranoid position.” In the paranoid position, the ego's defence consists of projecting the dangerous internal object onto some external representative, which is treated as a genuine threat emanating from the external world. In the depressive position, the threatening object is introjected and treated in fantasy as concretely retained within the person. Depressive and hypochondriacal symptoms result. Although considerable doubt exists that such complex unconscious fantasies operate in the minds of infants, these observations have been very important to the psychology of unconscious fantasies, paranoid delusions, and the theory of early object relations.

Freud was born in Freiberg, Moravia (now Příbor, Czech Republic), on May 6, 1856, and educated at Vienna University. When he was three years old his family, fleeing the anti-Semitic riots then raging in Freiberg, moved to Leipzig. Shortly thereafter, the family settled in Vienna, where Freud remained for most of his life.

Although Freud’s ambition from childhood had been a career in law, he decided to become a medical student shortly before he entered Vienna University in 1873. Inspired by the scientific investigations of the German poet Goethe, Freud was driven by an intense desire to study natural science and to solve some challenging problems confronting contemporary scientists.

In his third year at the university Freud began research work on the central nervous system in the physiological laboratory under the direction of the German physician Ernst Wilhelm von Brücke. Neurological research was so engrossing that Freud neglected the prescribed courses and as a result remained in medical school three years longer than was normally required to qualify as a physician. In 1881, after completing a year of compulsory military service, he received his medical degree. Unwilling to give up his experimental work, however, he remained at the university as a demonstrator in the physiological laboratory. In 1883, at Brücke’s urging, he reluctantly abandoned theoretical research to gain practical experience.

Freud spent three years at the General Hospital of Vienna, devoting himself successively to psychiatry, dermatology, and nervous diseases. In 1885, following his appointment as a lecturer in neuropathology at Vienna University, he left his post at the hospital. Later the same year he was awarded a government grant enabling him to spend 19 weeks in Paris as a student of the French neurologist Jean-Martin Charcot. Charcot, who was the director of the clinic at the mental hospital the Salpêtrière, was then treating nervous disorders by the use of hypnotic suggestion. Freud’s studies under Charcot, which centred largely on hysteria, influenced him greatly in channelling his interests toward psychopathology.

In 1886 Freud established a private practice in Vienna specializing in nervous diseases. He met with violent opposition from the Viennese medical profession because of his strong support of Charcot’s unorthodox views on hysteria and hypnotherapy. The resentment he incurred was to delay any acceptance of his subsequent findings on the origin of neurosis.

Freud’s first published work, On Aphasia, appeared in 1891; it was a study of the neurological disorder in which the ability to pronounce words or to name common objects is lost because of organic brain disease. His final work in neurology, an article, “Infantile Cerebral Paralysis,” was written in 1897 for an encyclopedia only at the insistence of the editor, since by this time Freud was occupied more with psychological than with physiological explanations for mental illnesses. His subsequent writings were devoted entirely to that field, which he had named psychoanalysis in 1896.

During the early years of the development of psychoanalysis and even afterwards, Freud regarded himself as the bearer of painful truths that people, at least upon first hearing or reading, did not want to face. Psychoanalytically oriented therapy involves facing great pain in giving up certain deeply held, personally important beliefs. Nietzsche’s words would have touched a sympathetic chord in Freud when he wrote that the truths which are truly productive are those which give offence. Nietzsche insisted, as did Freud, on resisting the temptation toward easy answers and superficiality in the face of painful truths. Nietzsche wrote of his own day that it was more in need than ever of what continues to count as untimely - I mean: telling the truth (assuming, that is, that truth can be reached and communicated).

In 1894 The Antichrist and Nietzsche Contra Wagner (both completed in 1888) were first published. Nietzsche refers to himself as a psychologist in both works, referring in them to his analysis of “the psychology of conviction, of faith”. He states that “one cannot be a psychologist or physician without at the same time being an anti-Christian,” that “philology and medicine [are] the two great adversaries of superstition,” and that “‘Faith’ as an imperative is the veto against science.” Nietzsche offers a psychological analysis of the powerful and primitive forces at work in the experience and condition of faith, and a scathing attack on the Apostle Paul. Freud, who likewise had no affectionate feeling for Paul, was an atheist and understood religious experience and belief from a psychological perspective related to Nietzsche’s understanding (as well as to that of Feuerbach, to whom both Nietzsche and Freud were indebted). Of particular importance for psychoanalysis (and for understanding Freud) is the idea of inventing a history (including a history of one’s self) to serve particular needs.

From the early years in the development of psychoanalysis up until the present day, there has been substantial discussion and debate regarding the extent to which Nietzsche discovered and elaborated upon ideas generally ascribed to Freud, as well as the extent to which Freud may have been influenced by Nietzsche in his development of a number of fundamental psychoanalytic concepts. In 1929 Thomas Mann, a great admirer of Freud, wrote: “He [Freud] was not acquainted with Nietzsche, in whose work everywhere appear gleams of insight anticipatory of Freud’s later views.” Mann considered Nietzsche to be “the greatest critic and psychologist of morals.” In an early study of the development of Freud’s thought, it was suggested that Freud was not aware of certain philosophical influences on his thought, that Nietzsche “must perhaps be looked upon as the founder of disillusioning psychology,” that “Nietzsche’s division into Dionysian and Apollonian . . . is almost completely identical with that of the primary and secondary function [process],” and that Nietzsche and certain other writers “were aware that this realm had a hidden meaning and significance for our mental life.” Karl Jaspers, who contributed to the fields of psychiatry, depth psychology and philosophy, frequently commented on Nietzsche’s psychological insights and discussed Nietzsche in relation to Freud and psychoanalysis. In his text General Psychopathology, only Freud appears more frequently than Nietzsche. He went so far as to state that Freud and psychoanalysis used ideas pertaining to the “meaningfulness of psychic deviation . . . in a misleading way, and this blocked the direct influence on [the study of] psychopathology of great people such as Kierkegaard and Nietzsche,” and he wrote of Freud popularizing “in crude form” certain ideas related to Nietzsche’s concept of sublimation.

Jones notes “a truly remarkable correspondence between Freud’s conception of the super-ego and Nietzsche’s exposition of the origin of the bad conscience.” Another analyst, Anzieu, offers a summary of Nietzsche’s anticipation of psychoanalytic concepts: it was Nietzsche who invented the term das Es (the id). He had some understanding of the economic point of view, which comprises the discharge and transfer of energy from one drive to another. However, he believed that aggression and self-destruction were stronger than sexuality. On several occasions he used the word sublimation (applying it to both the aggressive and the sexual instincts). He described repression, but called it inhibition; he talked of the super-ego and of guilt feelings, but called them resentment, bad conscience and false morality. Nietzsche also described, without giving them a name, the turning of drives against oneself, the paternal image, the maternal image, and the renunciation imposed by civilization on the gratification of our instincts. The “superman” was the individual who succeeded in transcending the conflict between established values and his instinctual urges, thus achieving inner freedom and establishing his own personal morality and scale of values. In other words, Nietzsche foreshadowed what was to be one of the major aims of psychoanalytic treatment.

While there is a growing body of literature examining the relationship between the writings of Freud and Nietzsche, there has appeared no detailed, comprehensive study of the extent to which Freud may have been influenced by Nietzsche through the course of his life and of the complex nature of Freud’s personal and intellectual relationship to Nietzsche. In part this may be attributed to Freud’s assurances that he had never studied Nietzsche, that he had never been able to get beyond the first half page or so of any of his works, due both to the overwhelming wealth of ideas and to the resemblance of Nietzsche’s ideas to the findings of psychoanalysis. In other words, Freud avoided Nietzsche in part to preserve the autonomy of the development of his own ideas.

Nietzsche and Freud were influenced by many of the same currents of nineteenth-century thought. Both were interested in ancient civilization, particularly Greek culture. Both were interested in Greek tragedy (and debates about catharsis), and both were particularly drawn to the figure of Oedipus. Both were interested in and attracted to heroic figures and regarded themselves as such. Both held Goethe in the highest regard. They were influenced by Darwin, evolutionary theory, contemporary theories of energy, anthropology and studies of the origins of civilization. They were influenced by earlier psychological writings, including, possibly, those of Hippolyte Taine (1828-1893). They were also influenced by a basic historical sense, “the sense of development and change that was now permeating thinking in nearly every sphere.” They wanted to understand, so to speak, the animal in the human and, as unmaskers, were concerned with matters pertaining to the relation between instinct and reason, conscious and unconscious, rational and irrational, appearance and reality, surface and depth. Both attempted to understand the origins and power of religion and morality. They were influenced by the Enlightenment and its hopes for reason and science while at the same time being influenced by Romanticism’s preoccupations with the unconscious and irrational. While beginning their careers in other fields, both came to regard themselves, among other things, as depth psychologists.

All the same, one has to keep in mind the extent to which Nietzsche and Freud were both influenced by forces at work in the German-speaking world of the latter part of the nineteenth century, and the extent to which similarities in their thought might be attributed to such factors rather than to Nietzsche having a direct influence upon Freud.

For example, both Nietzsche and Freud were interested in anthropology; both read Sir John Lubbock (1834-1913) and Edward Tylor (1832-1917), and both were influenced by these authors. However, an examination of the similarities between Nietzsche and Freud would seem to indicate that there was also a direct influence of Nietzsche upon Freud; Wallace, for instance, writes of Nietzsche’s anticipation of and influence upon Freud. Also, Thatcher, while writing of Nietzsche’s debt to Lubbock, writes specifically of Nietzsche’s, not Lubbock’s, “remarkable” anticipation of an idea central to Freud’s The Future of an Illusion.

One can also note Nietzsche’s inclination to use medical terminology in relation to psychological observation and “dissection”: “moral observation has become necessary, and mankind can no longer be spared the cruel sight of the moral dissection table and its knives and forceps. For here there rules that science which asks after the origin and history of the so-called moral sensations.”

Freud wrote of analysts modelling themselves on the surgeon, “who puts aside all his feelings, even his human sympathy, and concentrates his mental forces on the single aim of performing the operation as skilfully as possible”: “The most successful cases are those in which one proceeds, as it were, without any purpose in view, allows oneself to be taken by surprise by any new turn in them, and always meets them with an open mind, free from any presuppositions.”

In regard to broad cultural and paradigm changes, Nietzsche was one of the thinkers who heralded such changes. In his book on Freud’s social thought, Berliner writes of the changes in intellectual orientation that occurred around 1885, stating that such changes were “reflected in the work of Friedrich Nietzsche.” Berliner goes on to mention some of Nietzsche’s contributions to understanding the human mind, conscience and the origins of civilization, and his being representative of ‘uncovering’ or ‘unmasking’ psychology. Berliner concludes, as have others, that: “The generation of his [Freud’s] young maturity was permeated with the thought of Nietzsche.”

Nevertheless, although Freud expressed admiration for Nietzsche on a number of occasions, acknowledged his “intuitive” grasp of concepts anticipating psychoanalysis, placed him among the few persons he considered great, and stated in 1908 that “the degree of introspection achieved by Nietzsche had never been achieved by anyone, nor is it likely ever to be reached again,” he never acknowledged studying specific works of Nietzsche at any length, nor did he state in any detail what his own thoughts were in regard to specific works or ideas of Nietzsche.

Whenever an idea of Nietzsche’s that may have influenced Freud is discussed without tracing its development in Nietzsche, it may appear as if Nietzsche formulated his ideas without the great help of his forerunners. Perhaps taking note of the following words of Stephen Jay Gould regarding our discomfort with evolutionary explanations would be useful at this point: “one reason must reside in our social and psychic attraction to creation myths in preference to evolutionary explanations . . . creation myths identify heroes and sacred places, while evolutionary assemblages provide no palpable particularity, no objects as symbols for reverence, worship or patriotism.” Or as Nietzsche put it: “Whenever one can see the act of becoming [in contrast to ‘present completeness and perfection’] one grows comparatively cool.”

It may be that the imbuing of our lives with myth, in this instance the myth of the hero (with implications for our relationship to Nietzsche and Freud, their relationships to themselves as heroes, and Freud’s relationship to Nietzsche), is not so readily relinquished even in the realm of scholarly pursuits, a notion Nietzsche elaborated upon on a number of occasions.

Nietzsche discusses the origins of Greek tragedy in the creative integration of what he refers to as Dionysian and Apollonian forces, named for their representation in the gods Apollo and Dionysus. Apollo is associated with law, with beauty and order, with reason, with self-control and self-knowledge, with the sun and light. Dionysus is associated with orgiastic rites, music, dance and, later, drama. He is the god who is ripped into pieces, dismembered (representing individuation), and whose rebirth is awaited (the end of individuation). Religious rituals associated with him enact and celebrate death and rebirth, along with rituals associated with crops, including the grape (and wine and intoxication), and with sexuality. Frenzied, ecstatic female worshippers (maenads) are central to the rituals and celebrations. Both gods have a home in Delphi, Dionysus reigning in the winter when his dances are performed there.

In a note from The Will to Power Nietzsche defines the Apollonian and the Dionysian. The word “Dionysian” means: an urge to unity, a reaching out beyond personality, the everyday, society, reality, across the abyss of transitoriness; a passionate-painful overflowing into darker, fuller, more floating states; . . . the feeling of the necessary unity of creation and destruction. One contemporary classical scholar writes of “the unity of salvation and destruction . . . [as] a characteristic feature of all that is tragic.”

The word “Apollinian” means: the urge to perfect self-sufficiency, to the typical “individual,” to all that simplifies, distinguishes, makes strong, clear, unambiguous, typical: freedom under the law.

Strauss, Nietzsche observes, announces with admirable frankness that he is no longer a Christian, but he does not wish to disturb anyone’s peace of mind. Nietzsche writes of Strauss’ view of the new scientific man and his “faith”: “the heir of but a few hours, he is ringed around with frightful abysses, and every step he takes ought to make him ask: Where to? From where? To what end?” Rather than facing such frightful questions, however, Strauss’ scientific man permits himself a life spent on questions whose answer could at bottom be of consequence only to someone assured of eternity. Perhaps this also tended to encourage the belief that, as it was once put, all men dance to the tune of an invisible piper, and the view that not all things need be known, lest the stray consequences of studying them disturb the status quo. History is not and cannot be determined; supposed causes may not produce the consequences we expect.

Perhaps of even greater importance is Human, All Too Human. We have already commented on sublimation; to offer a working definition: to sublimate is to modify the natural expression of a primitive, instinctual impulse in a socially acceptable manner, and thus to divert the energy associated with an unacceptable impulse or drive into a personally and socially acceptable activity. It is here, as Young points out, that Nietzsche heralds a new methodology. He contrasts metaphysical philosophy with his historical [later genealogical] philosophy: a methodology for philosophical inquiry into the origins of human psychology, an inquiry that “can no longer be separated from natural science.” And, as he will do on other occasions, he offers a call to those who might have the ears to hear: “Will there be many who desire to pursue such researches? People like to put questions of origins and beginnings out of mind; must one not be almost inhuman to detect in oneself a contrary inclination?”

Nietzsche writes of the anti-nature of the ascetic ideal, how it relates to a disgust with itself, its continuing destructive effect upon the health of Europeans, and how it relates to the realm of “subterranean revenge” and ressentiment. Nietzsche writes of the repression of instincts (though not specifically of impulses toward sexual perversions) and of their being turned inward against the self: the “instinct for freedom forcibly made latent . . . this instinct for freedom pushed back and repressed.” Also, “this hatred of the human, and even more of the animal, and more still of the material.” Zarathustra also speaks of the tyranny of the holy or sacred: “He once loved ‘thou shalt’ as most sacred: now he must find illusion and caprice even in the most sacred, that freedom from his love may become his prey: the lion is needed for such prey.” It would appear that while Freud’s formulation as it pertains to sexual perversions and incest is most explicitly not derived from Nietzsche (although along different lines incest was an important factor in Nietzsche’s understanding of Oedipus), the relating of the idea of the holy to the sacrifice or repression of instinctual freedom was very possibly influenced by Nietzsche, particularly in light of Freud’s references to the ‘holy’ as well as to the ‘overman’. These issues were also explored in The Antichrist, which had been published just two years earlier. In addition, Freud wrote, perhaps for the first time, of sublimation: “I have gained a sure inkling of the structure of hysteria. Everything goes back to the reproduction of scenes. Some can be obtained directly, others only by way of fantasies set up in front of them. The fantasies stem from things that have been heard but understood only subsequently, and all their material is of course genuine. They are protective structures, sublimations of the facts, embellishments of them, and at the same time serve for self-relief.”

Nietzsche had written of sublimation, and he specifically wrote of the sublimation of sexual drives in the Genealogy. Freud’s use of the term differs slightly from his later and more Nietzschean usage, such as in Three Essays on the Theory of Sexuality, but as Kaufmann notes, while “the word is older than either Freud or Nietzsche . . . it was Nietzsche who first gave it the specific connotation it has today.” Kaufmann regards the concept of sublimation as one of the most important concepts in Nietzsche’s entire philosophy. Furthermore, Freud wrote that a ‘presentiment’ told him he would “very soon uncover the source of morality” - the very subject of Nietzsche’s Genealogy.

At a later time in his life Freud claimed he could not read more than a few passages of Nietzsche without being overwhelmed by the wealth of ideas. This claim might be supported by the fact that Freud demonstrates only a limited understanding of certain of Nietzsche’s concepts. For example, his reference to the “overman” demonstrates a lack of understanding of the overman as a being of the future whose freedom involves creative self-overcoming and sublimation, not simply freely gratified primitive instincts. Later in life, in Group Psychology and the Analysis of the Ego, Freud demonstrates a similar misunderstanding in equating the overman with the tyrannical father of the primal horde. Perhaps Freud confused the overman with the “master,” whose morality is contrasted with “slave” morality in the Genealogy and Beyond Good and Evil. The conquering master more freely gratifies instinct and affirms himself, his world and his values as good. The slave, unable to express himself freely, creates a negating, resentful, vengeful morality glorifying his own crippled, alienated condition, and he creates a division not between good (noble) and bad (contemptible), but between good (undangerous) and evil (wicked and powerful - dangerous). Master and slave moralities “at times . . . occur within a single soul.”

Although Nietzsche never gave dreams anything like the attention and analysis given by Freud, he was definitely not one of “the dark forest of authors who do not see the trees, hopelessly lost on wrong tracks.” Yet, whether reviewing the literature on dreams or at any other point in his life, Freud would not, in specific and detailed terms, discuss Nietzsche’s ideas as they pertain to psychoanalysis, just as he would never state exactly when he read or did not read Nietzsche or what he did or did not read. We may never know which of Nietzsche’s passages on dreams Freud may have read, or heard of, or read of as he was working on The Interpretation of Dreams. Freud’s May 31, 1897, letter to Fliess includes a reference to the overman, contrasting this figure with the saintly or holy, which is (as is civilization) connected to instinctual renunciation, particularly of incest and sexual perversion. Freud also writes that he has a presentiment that he shall “soon uncover the source of morality,” the subject of Nietzsche’s Genealogy. Earlier, he made what may have been his first reference to sublimation, a concept explored and developed by Nietzsche. We have also pointed to the possible, perhaps even likely, allusions to Nietzsche in the letters of September and November 1897, which refer respectively to Nietzsche’s notion of a revaluation or transvaluation of all values and to Nietzsche’s idea of the relationship of our turning our noses away from what disgusts us, our own filth, to our civilized condition, our becoming “angels.” Freud adds that so too consciousness turns away from memory: “This is repression.” Then there is Nietzsche’s passage on dreams in which he refers to Oedipus and to the exact passage that Freud refers to in The Interpretation of Dreams.
One author has referred to Nietzsche’s idea as coming “preternaturally close to Freud.” At a later point we see that in Freud’s remarks in The Interpretation of Dreams on the distinctiveness of psychoanalysis and his achievements regarding the understanding of the unconscious (his unconscious versus the unconscious of philosophers), Nietzsche is perhaps made present through his very absence.

These ideas of Nietzsche’s on dreams are not merely of interest in regard to the ways in which they anticipate Freud. They are very much related to more recent therapeutic approaches to the understanding of dreams. Nietzsche values dreaming states over waking states in regard to the dream’s closeness to the “ground of our being”; the dream “informs” us of feelings and thoughts that “we do not know or feel precisely while awake”; in dreams “there is nothing unimportant or superfluous”; the language of dreams entails “chains of symbolical scenes” and images in place of (and akin to) the language of poetic narration; content, form, duration, performer, spectator - in these comedies you are all of this yourself (and these comedies include the “abominable”). Recent life experiences and tensions, “the absence of nourishment during the day,” give rise to these dream inventions, which “give scope and discharge to our drives.”

The self, as in its manifestation in constructing dreams, may be an aspect of our psychic lives that knows things our waking “I” or ego may not know and may not wish to know, and a relationship may be developed between these aspects of our psychic lives in which the latter opens itself creatively to the communications of the former. Zarathustra states: “Behind your thoughts and feelings, my brother, there stands a mighty ruler, an unknown sage - whose name is self. In your body he dwells; he is your body.” However, Nietzsche’s self cannot be understood as a replacement for an all-knowing God to whom the “I” or ego appeals for its wisdom, commandments, guidance and the like. To open oneself to another aspect of oneself that is wiser (“an unknown sage”) in the sense that new information can be derived from it does not necessarily entail that this “wiser” component of one’s psychic life has God-like knowledge and commandments which, if one (one’s “I”) interprets them correctly, will set one on the straight path. It is true that when Nietzsche writes of the self as a “mighty ruler” and “unknown sage” he does open himself to such an interpretation, and even to the possibility that this “ruler” is unreachable, unapproachable for the “I.” However, the context of the passage (Nietzsche/Zarathustra redeeming the body) in “On the Despisers of the Body” and the two sections surrounding it make it clear that there are aspects of our psychic selves that interpret the body and mediate its direction, ideally in ways that do not deny the body but that aid the body in doing what it would do above all else: “to create beyond itself.”

Nietzsche explored the ideas of psychic energy and drives pressing for discharge. His concept of sublimation typically implies an understanding of drives in just such a sense, as does his idea that dreams provide for the discharge of drives. However, he did not relegate all that is derived from instinct and the body to this realm. While for Nietzsche there is no stable, enduring true self awaiting discovery and liberation, the body and the self (in the broadest sense of the term, including what is unconscious and may be at work in dreams, as Rycroft describes it) may offer up potential communication and direction to the “I” or ego. However, at times Nietzsche describes the “I” or ego as having very little, if any, idea as to how it is being lived by the “it.”

Nietzsche, like Freud, describes two types of mental processes, one of which “binds” [man’s] life to reason and its concepts in order not to be swept away by the current and to lose himself; the other, pertaining to the world of myth, art and the dream, “constantly showing the desire to shape the existing world of the wide-awake person to be variegatedly irregular and disinterestedly incoherent, exciting and eternally new, as is the world of dreams.” Art may function as a “middle sphere” and middle faculty (a transitional sphere and faculty) between a more primitive “metaphor-world” of impressions and the forms of uniform abstract concepts.

All the same, it is difficult to understand what Freud could mean by not reading Nietzsche in his later years, as well as to determine what he acknowledged having read of Nietzsche in earlier years. Freud never tells us exactly what he read of Nietzsche, and never tells us exactly which years were those during which he avoided Nietzsche. We do know, of course, that a few years earlier, in 1908, Freud had read and discussed Nietzsche, including a work of direct relevance to his own anthropological explorations as well as to ideas pertaining to the relationship between the repression of instinct and the development of the inner world and conscience. We have also seen that lectures, articles and discussions on Nietzsche continued around Freud. It does seem, though, that Freud demonstrates a readiness to “forgo all claims to priority” regarding the psychological observations of Nietzsche and others that the science of psychoanalysis has confirmed.

Nevertheless, Nietzsche recognized the aggressive instinct and will to power in various forms and manifestations, including sublimated mastery, all of which are prominent in Freud’s writings.

We can also take note of the power and importance Freud ascribed to rational thinking and scientific laws. Freud writes that the world-view erected upon science entails “submission to the truth and rejection of illusions.” He writes, quoting Goethe, of “Reason and Science, the highest strength possessed by man,” and of “the bright world governed by relentless laws which has been constructed for us by science.” However, he also writes of strict discipline, and of a resistance that stirs within us against the relentlessness and monotony of the laws of thought and against the demands of reality-testing. Reason becomes the enemy which withholds from us so many possibilities of pleasure.

However bright the world of science may be, and however much reason and science represent “the highest strength possessed by man,” this world, these laws, these faculties, require from us “submission” to a withholding enemy that imposes “strict discipline” with “relentlessness and monotony.” However much this language pertains to a description of universal problems in human development, one may wonder whether it does not also reflect Freud’s own experience of the call of reason as a relentless and laborious submission.

There is no reason that empirical research cannot be of help in determining what kinds of “self-description” or narratives (as well as, of course, many other aspects of the therapeutic process) may be effective for different kinds of persons with different kinds of difficulties in different kinds of situations. From a Nietzschean perspective, while it is obvious and desirable that the therapist will influence the patient’s or client’s self-descriptions and narratives, and the converse as well, a high value will be placed, however much it is a joint creation of a shared reality, on encouraging the individual to fashion a self-understanding, self-description or narrative that is to a significant extent of his or her own creation. That one has been creative in this way (and hopefully can go on creating) will be a very different experience from having the therapist’s narrative simply replace the original narrative brought to therapy. This can be thought of as the individual’s increased capacity for playful, creative application of a perspectivist approach to his or her life experience and history, though this approach, like any other, would be understood as related most significantly to the sublimation of drives as an aspect of the pursuit of truth. This does not entail that one searches with the understanding that what one finds will be uncovered like an archaeological find.

Both Freud and Nietzsche are engaged in a redefinition of the root of subjectivity, a redefinition that replaces the moral problematic of selfishness with the economic problematic of what Freud would call narcissism . . . [Freud and Nietzsche elaborate upon] the whole field of libidinal economy: the transit of libido through other selves, aggression, the infliction and reception of pain, and something very much like death (the total evacuation of the entire quantum of excitation with which the organism is charged).

The id, ego and superego: the effort to clarify the bewildering number of interrelated observations uncovered by psychoanalytic exploration led to the development of a model of the structure of the psychic system. Three functional systems are distinguished, conveniently designated as the id, ego, and superego.

The first system refers to the sexual and aggressive tendencies that arise from the body, as distinguished from the mind. Freud called these tendencies ‘Triebe’, which literally means “drives,” but which is often inaccurately translated as “instincts” to indicate their innate character. These inherent drives claim immediate satisfaction, which is experienced as pleasurable; the id thus is dominated by the pleasure principle. In his later writings, Freud tended more toward psychological rather than biological conceptualization of the drives.

How the conditions for satisfaction are to be brought about is the task of the second system, the ego, which is the domain of such functions as perception, thinking, and motor control that can accurately assess environmental conditions. In order to fulfill its function of adaptation, or reality testing, the ego must be capable of enforcing the postponement of satisfaction of the instinctual impulses originating in the id. To defend itself against unacceptable impulses, the ego develops specific psychic means, known as defence mechanisms. These include repression, the exclusion of impulses from conscious awareness; projection, the process of ascribing to others one's own unacknowledged desires; and reaction formation, the establishment of a pattern of behaviour directly opposed to a strong unconscious need. Such defence mechanisms are put into operation whenever anxiety signals a danger that the original unacceptable impulses may reemerge.

An id impulse becomes unacceptable, not only as a result of a temporary need for postponing its satisfaction until suitable reality conditions can be found, but more often because of a prohibition imposed on the individual by others, originally the parents. The totality of these demands and prohibitions constitutes the major content of the third system, the superego, the function of which is to control the ego in accordance with the internalized standards of parental figures. If the demands of the superego are not fulfilled, the person may feel shame or guilt. Because the superego, in Freudian theory, originates in the struggle to overcome the Oedipal conflict, it has a power akin to an instinctual drive, is in part unconscious, and can give rise to feelings of guilt not justified by any conscious transgression. The ego, having to mediate among the demands of the id, the superego, and the outside world, may not be strong enough to reconcile these conflicting forces. The more the ego is impeded in its development because of being enmeshed in its earlier conflicts, called fixations or complexes, or the more it reverts to earlier satisfactions and archaic modes of functioning, known as regression, the greater is the likelihood of succumbing to these pressures. Unable to function normally, it can maintain its limited control and integrity only at the price of symptom formation, in which the tensions are expressed in neurotic symptoms.

Nietzsche suggests that in our concern for the other, in our sacrifice for the other, we are concerned with ourselves, one part of ourselves represented by the other. That for which we sacrifice ourselves is unconsciously related to as another part of us. In relating to the other we are in fact relating to a part of ourselves, and we are concerned with our own pleasure and pain and our own expression of will to power. In one analysis of pity Nietzsche states that, while we are, to be sure, not consciously thinking of ourselves, it is primarily our own pleasure and pain that we are concerned about, and that the feelings and reactions that ensue are multi-determined.

Nietzsche holds that we have a divided nature and that we respond to others in part on the basis of projecting and identifying with aspects of ourselves in them. In Human, All Too Human, Nietzsche writes of a deception in love: We forget a great deal of our own past and deliberately banish it from our minds . . . we want the image of ourselves that shines upon us out of the past to deceive us and flatter our self-conceit - we are engaged continually on this self-deception. Do you think, you who speak so much of ‘self-forgetfulness in love’, of ‘the merging of the ego in the other person’, and laud it so highly, do you think this is anything essentially different? We shatter the mirror, impose our self upon someone we admire, and then enjoy our ego’s new image, even though we may call it by that other person’s name.

It is a commonplace that beauty lies in the eye of the beholder, but all the same we talk of the beauty of things and people as if beauty were an identifiable real property which they possess. Projectivism denotes any view which sees us similarly projecting upon the world what are in fact modifications of our own minds. According to this view, sensations are displaced from their rightful place in the mind when we think of the world as coloured or noisy. Other examples of the idea involve things other than sensations, and do not involve any literal displacement. One is that all contingency is a projection of our ignorance; another is that the causal order of events is a projection of our mental confidence in the way they follow from one another. However, the most common application of the idea is in ethics and aesthetics, where many writers have held that talk of the value or beauty of things is a projection of the attitudes we take toward them and the pleasure we take in them.

It is natural to associate projectivism with the idea that we make some kind of mistake in talking and thinking as if the world contained the various features we describe it as having, when in reality it does not. However, the view that we make no mistake, but simply adopt an efficient linguistic expression for necessary ways of thinking, is also held.

Nonetheless, in The Dawn, Nietzsche describes man, in the person of the ascetic, as ‘split asunder into a sufferer and a spectator’, enduring and enjoying within (as a consequence of his drive for ‘distinction’, his will to power) that which the barbarian imposes on others. As Staten points out, Nietzsche asks whether the basic disposition of the ascetic and of the pitying god who creates suffering humans can be held simultaneously, and whether one would do ‘hurt to others in order thereby to hurt oneself, in order then to triumph over oneself and one’s pity and revel in an extremity of power’. Nietzsche appears to be suggesting that in hurting the other I may, through identification, be attempting to hurt one part of myself, so that whatever my triumph over the other, I may be as concerned with one part of myself triumphing over that part of myself I identify within the other, as well as thereby overcoming pity, and in consequence ‘revel in an extremity of power.’ (Or, in a variation of such dynamics, as Michel Hulin has put it, the individual may be ‘tempted to play both roles at once, contriving to torture himself in order to enjoy all the more his own capacity for overcoming suffering’.)

In addition to Nietzsche’s writing specifically of the sublimation of the libidinous drive, the will to power and its vicissitudes are described at times in ways related to sexual as well as aggressive drives, particularly in the form of appropriation and incorporation. As Staten points out, this notion of the primitive will to power is similar to Freud’s idea in Group Psychology and the Analysis of the Ego, according to which ‘identification [is] the earliest expression of an emotional tie with another person . . . It behaves like a derivative of the first oral phase of the organization of the libido, in which the object that we long for and prize is assimilated by eating.’ It would appear that Nietzsche goes a step further than Freud in one of his notes when he writes: ‘Nourishment - is only derivative; the original phenomenon is: to desire to incorporate everything’. Staten also concludes that, ‘if Freudian libido contains a strong element of aggression and destructiveness, Nietzschean will to power never takes place without a pleasurable excitation that there is no reason not to call erotic’. There remains, however, that element of ‘enigma and cruelty’, which is imposed only on the beloved object and increases in proportion to the love . . . cruel people being always masochists also, the whole thing being inseparable from bisexuality. One can only imagine how far Nietzsche would have gone, and to what extent he would have expanded on such insights, beyond Freud.

Freud’s new orientation was preceded by his collaborative work on hysteria with the Viennese physician Josef Breuer. The work was presented in 1893 in a preliminary paper and two years later in an expanded form under the title Studies on Hysteria. In this work the symptoms of hysteria were ascribed to manifestations of undischarged emotional energy associated with forgotten psychic traumas. The therapeutic procedure involved the use of a hypnotic state in which the patient was led to recall and reenact the traumatic experience, thus discharging by catharsis the emotions causing the symptoms. The publication of this work marked the beginning of psychoanalytic theory, formulated on the basis of clinical observations.

From 1895 to 1900 Freud developed many concepts that were later incorporated into psychoanalytic practice and doctrine. Soon after publishing the studies on hysteria he abandoned the use of hypnosis as a cathartic procedure and substituted the investigation of the patient’s spontaneous flow of thoughts, called free association, to reveal the unconscious mental processes at the root of the neurotic disturbance.

Nietzsche discusses the origins of Greek tragedy in the creative integration of what he calls Dionysian and Apollonian forces. Apollo is associated with law, with order, with reason, with knowledge, with the sun and light. Dionysus is associated with orgiastic rites, music, dance and later drama. Religious rituals associated with him enact and celebrate death, rebirth and fertility. He is also associated with crops, including the grape (and the wine of intoxication), and with sexuality. Frenzied, ecstatic female worshippers (maenads) are central to his rituals and celebrations.

In a note from The Will to Power Nietzsche characterizes the Apollonian and the Dionysian. The word ‘Dionysian’ means: an urge to unity, a reaching out beyond personality, the everyday, society, reality, across the abyss of transitoriness; a passionate-painful overflowing into darker, fuller, more floating states; . . . the feeling of the necessary unity of creation and destruction. [One contemporary classical scholar writes of ‘the unity of salvation and destruction . . . (as) a characteristic feature of all that is tragic’.]

The word ‘Apollinian’ means, among other things: the urge to perfect self-sufficiency, to the typical ‘individual’, to all that simplifies, distinguishes, makes strong, clear, unambiguous, typical: freedom under the law. Apollo is also described as a dream interpreter.

Yet, all the same, we might discern Nietzsche’s influence in an important paper of this period, the 1914 paper ‘On Narcissism: An Introduction’. In this paper, Freud explores, among other things, the implications of his finding of an original libidinal cathexis of the ego, from which some is later given off to objects, but which fundamentally persists and is related to the object-cathexes much as the body of an amoeba is related to the pseudopodia which it puts out.

The development of the ego consists in a departure from primary narcissism and results in a vigorous attempt to recover that state. The departure is brought about by means of the displacement of libido onto an ego-ideal imposed from without, and satisfaction comes from fulfilling this ideal. At the same time, the ego has sent out libidinal object-cathexes. It becomes impoverished in favour of these cathexes, just as it does in favour of the ego-ideal, and it enriches itself again from its satisfactions in respect of the object, just as it does by fulfilling its ideal.

Freud considers the implications of these findings for his dual instinct theory, which divides the instincts into the duality of ego instincts and libidinal instincts. Freud questions this division, but does not yet definitively abandon it; that he will later do in Beyond the Pleasure Principle.

As indicated, one of Freud’s important points is that the ego tries to recover its state of primary narcissism. This is related to important themes running through Nietzsche’s writings. Nietzsche is aware of how we relate to others based on projections of idealized images of ourselves, and he is consistently looking for the ways in which we are loving and aggrandizing ourselves in activities that appear to reflect contrary motivations.

Nietzsche attempts to show that Greek culture and drama had accomplished the great achievement of recognising and creatively integrating the substratum of the Dionysian with the Apollonian. As Siegfried Mandel suggests, Nietzsche demolished widely held aesthetic views, inspired in 1755 by the archaeologist-historian Johann Winckelmann, about the ‘noble simplicity, calm grandeur’, ‘sweetness and light’, harmony and cheerfulness of the ancient Greeks, and posed instead the dark Dionysian forces that had to be harnessed to make possible the birth of tragedy.

It is also important to consider that it is through the dream’s Apollonian images that the Dionysian reality can be manifested and known, just as it is through the individuated actors on stage that the underlying Dionysian reality is manifested in Greek tragedy. At its most creative, the Apollonian can allow an infusion of the harnessed Dionysian, but we should also note that Nietzsche is quite explicit that the splendour of the Apollonian impulse stood before an art that in its frenzy, rapture and excess ‘spoke the truth. Excess revealed itself as truth’, and against this new Dionysian power the Apollonian rose to the austere majesty of Doric art and the Doric view of the world. For Nietzsche, the ‘Dionysian and the Apollonian, in new births ever following and mutually augmenting one another, controlled the Hellenic genius’.

Nietzsche stands as one of the most insightful and powerful critics of the moral climate of the 19th century (and of what remains of it in ours). He explores unacknowledged unconscious motivation and the conflict of opposing forces within the mind, with a view to the possibilities of creative integration. Nietzsche distinguishes between two types of mental processes and is aware of the conflict between unconscious instinctual impulses and wishes and inhibiting or repressing forces. Both Freud and Nietzsche are engaged in a redefinition of the root of subjectivity, a redefinition that replaces the moral problematic of selfishness with the economic problematic of what Freud would call narcissism . . . Freud and Nietzsche elaborate upon the whole field of libidinal economy: the transit of the libido through other selves, aggression, the infliction and reception of pain, and something very much like death, the total evacuation of the entire quantum of excitation with which the organism is charged.

The real world is flux and change for Nietzsche, but in his later works there is no “unknowable true world.” The split between a surface, apparent world and an unknowable but true world of things-in-themselves was, as is well known, a view Nietzsche rejected. For one thing, as Mary Warnock points out, Nietzsche was attempting to get across the point that there is only one world, not two. She also suggests that for Nietzsche, if we contribute anything to the world, it is the idea of a “thing,” and in Nietzsche’s words, “the psychological origin of the belief in things forbids us to speak of things-in-themselves.”

Nietzsche holds that there is an extra-mental world to which we are related and with which we have some kind of connection. For him, even as knowledge develops in the service of self-preservation and power, to be effective a conception of reality will have a tendency to grasp (but only) a certain amount of, or aspect of, reality. However much Nietzsche may at times see (the truth of) artistic creation and dissimulation (out of chaos) as paradigmatic for science (which will not recognize it as such), in arriving at this position Nietzsche assumes the truth of scientifically based beliefs as a foundation for many of his arguments, including those regarding the origin, development and nature of perception, consciousness and self-consciousness, and what this entails for our knowledge and falsification of the external and inner world. In fact, to some extent the form-giving, affirmative, this-worldly healing of art is a response to the terrifying, nausea-inducing truths revealed by science, for which science itself has no treatment of the underlying cause, although Nietzsche also writes of horrifying existential truths against which science can attempt a (falsifying) defence. Nevertheless, while there is a real world to which we are related, there is no sensible way to speak of a nature or constitution or eternal essence of the world in itself apart from description and perspective. Moreover, the states of affairs to which our interpretations are to fit are established within human perspectives and reflect (but not only) our interests, concerns and needs for calculability. Within such relations (and perhaps as meta-commentary on the grounds of our knowing) Nietzsche is quite willing to write of the truth, the constitution of reality, and the facts of the case. There is no perspective-free will to truth, nor any privilege of absolute truth; to expect a pure desire for a pure truth is to expect an impossible desire for an illusory ideal.

The inarticulate rules supreme in oblivion, whether in the individual’s forgetfulness or in those long stretches of the collective past that have never been and will never be called forth into the necessarily incomplete articulations of history, the record of human existence that is profusely interspersed with dark passages. This accounts for the continuous questing of archaeology, palaeontology, anthropology and geology, and accounts, too, for Nietzsche’s warning against the “insomnia” of historicism. As for the individual, the same drive is behind the modern fascination with the unconscious and, thus, with dreams, and it was Nietzsche who, before Freud, spoke of forgetting as an activity of the mind. At the beginning of his Genealogy of Morals, he claims, in defiance of all psychological “shallowness,” that the lacunae of memory are not merely “passive” but the outcome of an active and positive “screening,” preventing us from remembering what would upset our equilibrium. Nietzsche is thus the first discoverer of successful “repression,” the burying of potential experience in the inarticulate.

Still, he is notorious for stressing the ‘will to power’ that is the basis of human nature, the ‘ressentiment’ that arises when it is denied its basis in action, and the corruptions of human nature encouraged by religions, such as Christianity, that feed on such resentment. Yet the powerful human being who escapes all this, the ‘Übermensch’, is not the ‘blond beast’ of later fascism: it is a human being who has mastered passion, risen above the senseless flux, and given creative style to his or her own character. Nietzsche’s free spirits recognize themselves by their joyful attitude to eternal return. He frequently presents the creative artist rather than the warlord as his best exemplar of the type, but the disquieting fact remains that he seems to leave himself no words with which to condemn any uncaged beast of prey who finds his style by exerting repulsive power over others. Nietzsche’s frequently expressed misogyny does not help this problem, although in such matters the interpretation of his many-layered and ironic writing is not always straightforward. Similarly, such anti-Semitism as is found in his work is balanced by equally intense denunciations of anti-Semitism, and by an equal or greater contempt for the German character of his time.

Nietzsche’s current influence derives not only from his celebration of the will, but more deeply from his scepticism about the notions of truth and fact. In particular, he anticipated many central tenets of postmodernism: an aesthetic attitude toward the world that sees it as a ‘text’; the denial of facts; the denial of essences; the celebration of the plurality of interpretations and of the fragmented self; and the politicization of discourse, all of which were awaiting their rediscovery in the late 20th century. Nietzsche also has the incomparable advantage over his followers of being a wonderful stylist, and his perspectivism is echoed in the shifting array of literary devices - humour, irony, exaggeration, aphorism, verse, dialogue, parody - with which he explores human life and history.

All the same, Nietzsche is openly pessimistic about the possibility of knowledge: ‘We simply lack any organ for knowledge, for ‘truth’: We ‘know’ (or believe or imagine) just as much as may be useful in the interests of the human herd, the species, and perhaps precisely that most calamitous stupidity of which we shall perish some day’ (The Gay Science).

Nonetheless, that refutation assumes that if a view, such as perspectivism itself, is an interpretation, it is by that very fact wrong. This is not so, however. To say that a view is an interpretation is to say that it can be wrong, which is true of all views, and that is not a sufficient refutation. To show that perspectivism is actually false, it is necessary to produce another view superior to it on specific epistemological grounds.

Perspectivism does not deny that particular views can be true. Like some versions of contemporary anti-realism, it attributes truth to specific views in relation to the facts themselves. Still, it refuses to envisage a single independent set of facts to be accounted for by all theories. Thus, Nietzsche grants the truth of specific scientific theories; he does, however, deny that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’: both the facts science addresses and the methods it employs serve the purposes for which they have been devised, and these have no priority over the many other purposes of human life.

Every schoolchild learns eventually that Nietzsche was the author of the shocking slogan, "God is dead." However, what makes that statement possible is another claim, even more shocking in its implications: "Only that which has no history can be defined" (Genealogy of Morals). Since Nietzsche was the heir to seventy-five years of German historical scholarship, he knew that there was no such thing as something that has no history. Darwin, as Dewey points out, had effectively shown that searching for a true definition of a species is not only futile but unnecessary (since the definition of a species is something temporary, something that changes over time, without any permanent and stable reality). Nietzsche dedicates his philosophical work to doing the same for all cultural values.

It is important to reflect for a moment on the full implications of this claim. Western moral philosophy begins with a dialectical exchange exploring the question "What is virtue?", and takes a firm stand: until we can settle that issue with a definition that eludes all cultural qualification, a definition of what virtue is, we cannot effectively deal with morality, except through divine dispensation, unexamined reliance on tradition, skepticism, or relativism (the position of Thrasymachus). The full exploration of what dealing with that question of definition might require takes place in the Republic.

Many texts we read subsequently took up Plato's challenge, seeking to discover, through reason, a permanent basis for understanding knowledge claims and moral values. No matter what the method, as Nietzsche points out in his first section, the belief was always that grounding knowledge and morality in truth was possible and valuable, and that the activity of seeking to ground morality was conducive to a fuller good life, individually and communally.

To use a favourite metaphor of Nietzsche's, we can say that previous systems of thought had sought to provide a true transcript of the book of nature. They made claims about the authority of one true text. Nietzsche insists repeatedly that there is no single canonical text; there are only interpretations. So there is no appeal to some definitive version of Truth (whether we search in philosophy, religion, or science). Thus the Socratic quest for some way to tie morality down to the ground, so that it does not fly away, is (and has always been) futile, although the long history of attempts to do so has disciplined the European mind so that we, or a few of us, are ready to move into dangerous new territory where we can put the most basic assumptions about the need for conventional morality to the test and move on "Beyond Good and Evil," that is, to a place where we do not take the universalizing concerns and claims of traditional morality seriously.

Nietzsche begins his critique here by challenging that fundamental assumption: Who says that seeking the truth is better for human beings? How do we know an untruth is not better? What is truth anyway? In doing so, he challenges the sense of purpose basic to the traditional philosophical endeavour. Philosophers, he points out early on, may be proud of the way they begin by challenging and doubting received ideas. However, they never challenge or doubt the key notion they all start with, namely, that there is such a thing as the Truth and that it is something valuable for human beings (surely much more valuable than its opposite).

In other words, just as the development of the new science had gradually, and for many painfully and rudely, emptied nature of any certainty about a final purpose, and of any possibility of ever agreeing on the ultimate value of scientific knowledge, so Nietzsche is, with the aid of the new historical science (and the proto-science of psychology), emptying all sources of cultural certainty of their traditional purposiveness and claims to permanent truth, and therefore of their value as we have traditionally understood that term. There is thus no antagonism between good and evil, since all versions of either are equally fictive (although some may be more useful for the purposes of living than others).

I do not want, at this point, to analyse the various ways Nietzsche deals with this question. Nevertheless, I do want to insist upon the devastating nature of his historical critique of all previous systems that have claimed to ground knowledge and morality on a clearly defined truth of things. For Nietzsche's genius rests not only on his adopting the historical critique and applying it to new areas, but much more on his astonishing perspicuity in seeing just how extensive and flexible the historical method might be.

For example, Nietzsche, like some of those before him, insists that value systems are culturally determined: they arise, he insists, as often as not from, or in reaction to, conventional folk wisdom. Yet to this he adds something that to us, after Freud, may be well accepted, but that in Nietzsche's hands becomes something shocking: understanding a system of value, he claims, requires us more than anything else to see it as the product of a particular individual's psychological history, a uniquely personal confession. Its relationship to something called the "Truth" has nothing to do with the "meaning" of a moral system; instead we seek its coherence in the psychology of the philosopher who produced it.

Gradually it has become clear to me what every great philosophy has been: the personal confession of its author, a kind of involuntary and unconscious memoir; and also that the moral (or immoral) intentions in every philosophy formed the real germ of life from which the whole plant had grown.

The historical critique here unmasks claims to "truth" by concentrating, this time, upon the history of the life of the person proposing the particular "truth." Systems offering us a route to the Truth are simply psychologically produced fictions that serve the deep (often unconscious) purposes of the individual proposing them. They are therefore what Nietzsche calls "foreground" truths: they do not penetrate into the deep reality of nature, and to fail to see this is to lack "perspective."

Even more devastating is Nietzsche's extension of the historical critique to language itself. Since philosophical systems deliver themselves to us in language, they are shaped by that language and by the history of that language. Our Western preoccupation with the inner self which perceives, wills, and so forth, Nietzsche can characterize as, in large part, the product of grammar, the result of a language that builds its statements around a subject and a predicate. Without that historical accident, Nietzsche affirms, we would not have committed the error of mistaking for the truth something that is a by-product of our particular culturally determined language system.

He makes the point, for example, that our faith in consciousness is just such an accident. If instead of saying "I think," we were to say "Thinking is going on in my body," we would not be tempted to give the "I" some independent existence (e.g., in the mind) and make large claims about the ego or the inner self. The reason we search for such an entity stems from the accidental construction of our language, which encourages us to use a subject (the personal pronoun) and a verb. The same false confidence in language also makes it easy for us to think that we know clearly what key terms like "thinking" and "willing" mean, whereas even a little reflection would quickly reveal that the inner processes these apparently clear terms sum up are anything but clear. His emphasis on psychology as queen of the sciences underscores his sense of how much more fully we need to understand just how complex these activities are, particularly the emotional appetites, before we talk about them as simplistically as philosophers have customarily done.

This remarkable insight enables Nietzsche, for example, at one blow and with cutting contempt, to dismiss as "trivial" the system Descartes had set up so carefully in the Meditations. Descartes' triviality consists in failing to recognize how his philosophical system is shaped by the language that imprisons him as an educated European, and by his facile treatment of what thinking is in the first place. The famous Cartesian dualism is not a central philosophical problem but an accidental by-product of grammar, designed to serve Descartes' own particular psychological needs. Similarly, Kant's discovery of "new faculties" Nietzsche derides as just a trick of language - a way of providing what looks like an explanation but is, in fact, as ridiculous as the old notion that medicines put people to sleep because they contain a sleeping virtue.

It should be clear from examples like this (and the others throughout) that very little is capable of surviving Nietzsche's onslaught, for what is there to which we can point that does not have a history, or that is not delivered to us in a historically developing system of language? After all, our scientific enquiries in all areas of human experience teach us that nothing ever simply is, for everything is always becoming.

Nietzsche had written that with the repression of the instincts and their turning inward, 'the entire inner world, originally as thin as if it were stretched between two membranes, expanded and extended itself, acquired depth, breadth, and height.' In the same writing he speaks of the 'bad conscience' . . . [as] 'the womb of all ideal and imaginative phenomena . . . an abundance of strange new beauty and affirmation, and perhaps beauty itself.'

These developments led to the finding of an original libidinal cathexis of the ego, from which some is later given off to objects but which fundamentally persists and is related to the object-cathexes much as the body of an amoeba is related to the pseudopodia which it puts out.

The development of the ego consists in a departure from primary narcissism and results in a vigorous attempt to recover that state. This departure is brought about by means of the displacement of libido onto an ego-ideal imposed from without, and satisfaction is derived from fulfilling this ideal.

When the ego has sent out its libidinal object-cathexes, it becomes impoverished in favour of these cathexes; it enriches itself again from its satisfactions in respect of the object, just as it does by fulfilling its ideal.

Freud considers the implications of such findings for his dual instinct theory, which divides the instincts into ego instincts and libidinal instincts. Freud questions this division but does not definitively abandon it, which he will do in Beyond the Pleasure Principle.

As indicated, one of Freud's important points is that the ego attempts to recover its state of primary narcissism. This is related to important themes: we relate to others on the basis of projections of idealized images, and we are loving and aggrandizing ourselves even in activities that appear to reflect contrary motivations.

A mother gives to her child that of which she deprives herself . . . Is it not clear that in [such] instances man loves something of himself . . . more than something else of himself . . . ? The inclination for something (a wish, an impulse, a desire) is present in all [such] instances; to give in to it, with all the consequences, is in any event not 'unegoistic'.

As Freud was entering his study of the destructive instincts - the death instinct and its manifestation outward as aggression, as well as its secondary turning back inward upon the self - one might wonder whether Nietzsche, who had explored the vicissitudes of aggression and was famous for his concept of the will to power, was among the 'all kinds of things' Freud was reading. At the least, Freud clearly had the 'recurrence of the same' on his mind during this period, along with pessimism and reflections on pleasure. And while Freud's account of pleasure as the release or discharge of tension has strong affinities with Schopenhauer, there is also the quite different 'pleasure' of Eros.

One point to be made is that Nietzsche's concept of the will to power was an attempt to go beyond the pleasure principle and beyond good and evil. For Nietzsche the will to power is the primary drive, in both its primitive and its more sublimated manifestations. All the same, pain is an essential ingredient, since it is not a state attained at the end of suffering but the process of overcoming (of obstacles and suffering) that is the central factor in the experience of an increase of power and joy.

Freud writes of another kind or level of mastery: the binding of instinctual impulses, which is a preparatory act. Although this binding and the replacement of primary process by secondary process operate before and without necessary regard for 'the development of unpleasure', the transformation occurs on behalf of the pleasure principle: 'the binding is a preparatory act which introduces and assures the dominance of the pleasure principle' . . . The binding . . . [is] designed to prepare the excitation for its final elimination in the pleasure of discharge.

For the individual who suffers this repeated and frustrated effort at pleasure, it is not only the object of the past that cannot be recovered, nor only the relation that cannot be restored or reconstructed. It is time itself that resists the human will and proves unyielding. Between pleasure and satisfaction, a prohibition or negation of pleasure is enacted which necessitates the endless repetition and proliferation of thwarted pleasures. The repetition is a vain effort to stay, or to reverse, time; such repetition reveals a rancor against the present that feeds upon itself.

At this point we might be tempted, as many have been, to point to the new natural science as a counter-instance: does not natural science typify a progressive realization of the truth of the world, or at least a closer and closer approximation to that truth? In fact, it is interesting to consider just how closely Kuhn and Nietzsche might be linked in their views about the relationship between science and the truth of things, or to what extent modern science might provide the most promising refutation of Nietzsche's assertion that there is no privileged access to a final truth of things (a hotly disputed topic in the last decade or more). Suffice it to say here that for Nietzsche science is just another "foreground" way of interpreting nature. It has no privileged access to the Truth, although he does concede that, compared with other beliefs, it has the advantage of being based on sense experience and is therefore more useful for modern times.

There is one important point to stress in this review of the critical power of Nietzsche's project. It is essential to note that Nietzsche is not taking us to task for having beliefs. We have to have beliefs. Human life must be the affirmation of values; otherwise, it is not life. What Nietzsche is centrally concerned to mock is our conviction that our belief systems are True, are fixed, are somehow eternally right by a grounded standard of knowledge. Human life, in its highest forms, must be lived in the full acceptance that the values we create for ourselves are fictions. We, or the best of us, have to have the courage to face the fact that there is no "Truth" upon which to ground anything in which we believe, and yet, in full view of that harsh insight, still affirm ourselves with joy. The Truth is not accessible to our attempts at discovery; what thinking human beings characteristically do, in their pursuit of the Truth, is create their own truths.

Now, this last point, like the others, has profound implications for how we think of ourselves, for our conception of the human. For human individuals, like human cultures, also have a history. Each of us has a personal history, and thus we ourselves cannot be defined; we, too, are in a constant process of becoming, of transcending the person we have been into something new. We may like to think of ourselves as defined by some essential rational quality, but in fact we are not. In stressing this, of course, Nietzsche links himself with certain strains of Romanticism, especially (from the point of view of our curriculum) with William Blake.

This tradition of Romanticism holds up a view of life that is radically individualistic, self-created, self-generated. "I must create my own system or be enslaved by another man's," Blake wrote. It is also thoroughly aristocratic, with little room for traditional altruism, charity, or egalitarianism. Our lives, to realize their highest potential, should be lived in solitude from others, except perhaps those few we recognize as kindred souls, and our life's efforts must be a spiritually demanding but joyful affirmation of the process by which we maintain the vital development of our imaginative conceptions of ourselves.

It might be appropriate to contrast this view of the self as a constantly developing entity without essential permanence with Marx's view. Marx, too, insists on the transformation of ideas, but for him that transformation is controlled by the material forces of production, and these in turn are driven by the logic of history. It is not something that the individual takes charge of by an act of individual will, because individual consciousness, like everything else, emerges from and is dependent upon the particular historical and material circumstances, the stage in the development of production, of the social environment in which the individual finds himself or herself.

Nietzsche, like Marx, and unlike later Existentialists (de Beauvoir, for example), recognizes that the individual inherits particular things from the historical moment of the culture (e.g., the prevailing ideas and, particularly, the language and ruling metaphors). Thus, for Nietzsche the individual is not totally free of all context. However, the appropriate response to this is not, as in Marx, the development of class consciousness, a solidarity with other citizens and an imperative to help history along by committing oneself to the class war alongside other proletarians, but rather, in the best and brightest spirits, a heightened sense of one's individuality, of one's radical separation from the herd, of one's final responsibility to one's own most fecund creativity.

It is vital to see that Nietzsche and the earlier Romantics are not simply saying we should do whatever we like. They all have a sense that self-creation of the sort they recommend requires immense spiritual and emotional discipline - the discipline of the artist shaping his most important original creation according to the stringent demands of his creative imagination. These demands may not be rational, but they are not permissively relativistic in that 1960s sense ("If it feels good, do it"). Permissiveness may often have been attributed to this Romantic tradition, a sort of 1960s "shop till you drop" ethic, but that is not what any of them had in mind. For Nietzsche that would simply be a herd response to a popularized and bastardized version of a much higher call: a solitary life lived with the most intense but personal joy, suffering, insight, courage, and imaginative discipline.

This aspect of Nietzsche's thought represents the fullest nineteenth-century European affirmation of a Romantic vision of the self as radically individualistic (at the opposite end of the spectrum from Marx's view of the self as socially and economically determined). It has had a profound and lasting effect in the twentieth century, as we have become ever more uncertain about coherent social identities and thus increasingly inclined to look for some personal way to take full charge of our own identities without answering to anyone but ourselves.

Much of the energy and much of the humour in Nietzsche's prose comes from the urgency with which he sees such creative self-affirmation as essential if the human species is not to continue to degenerate. For Nietzsche, human beings are, first and foremost, biological creatures with certain instinctual drives. The best forms of humanity are those who most excellently express the most important of these biological drives, the "will to power," by which he means the individual will to assume power over and create what he or she needs to live most fully. Such a "will to power" is beyond morality, because it does not answer to anyone's system of what constitutes good and bad conduct. The best and strongest human beings are those who create values for themselves, live by them, and refuse to acknowledge common links with anyone else, other than other strong people who do the same and are thus their peers.

His surveys of world history have convinced Nietzsche that the development of systems of morality favouring the weak, the suffering, the sick, the criminal, and the incompetent (all of whom he lumps together in that famous phrase "the herd") has turned this basic human drive against human beings. He salutes the genius of those who accomplished this feat (especially the Jews and Christians), which he sees as the revenge of the slaves against their natural masters. From these centuries-long acts of revenge, human beings are now filled with feelings of guilt, inadequacy, jealousy, and mediocrity, a condition alleviated, if at all, by dreams of being helpful to others and of an ever-expanding democracy, an agenda powerfully served by modern science (which serves to bring everything and everyone down to the same level). Fortunately, however, this ordeal has trained our minds splendidly, so that the best and brightest (the new philosophers, the free spirits) can move beyond the traditional boundaries of morality, that is, "beyond good and evil" (his favourite metaphor for this condition is the tensely arched bow ready to shoot off an arrow).

It is important to stress that Nietzsche does not believe that becoming such a "philosopher of the future" is easy or for everyone. It is, on the contrary, an extraordinarily demanding call, and those few capable of responding to it may have to live solitary lives without recognition of any sort. He is demanding an intense spiritual and intellectual discipline that will enable the new spirit to move into territory no philosopher has ever roamed before, a place where there are no comfortable moral resting places and where the individual will probably (almost certainly) have to pursue a profoundly lonely and perhaps dangerous existence (hence the importance of another favourite metaphor of his, the mask). Nevertheless, this is the only way we can counter the increasing degeneration of European man into a practical, democratic, technocratic, altruistic herd animal.

By way of a further introduction to Nietzsche's Beyond Good and Evil, I would like to offer an extended analogy, in order to review some of these points and to extend some remarks in directions that have not yet been explored.

Before placing the analogy on the table, however, I wish to issue a caveat. Analogies can really help to clarify, but they can also mislead by being unduly persuasive out of all proportion. I hope that the analogy I offer will provide such clarity, but not at the price of oversimplifying. So, as you listen to this analogy, you need to keep asking: To what extent does this analogy not hold? To what extent does it reduce the complexity of what Nietzsche is saying to something simpler?

The analogy I want to put on the table is the comparison of human culture to a huge recreational complex in which several different games are going on. Outside, people are playing soccer on one field, rugby on another, American football on another, Australian football on another, and so on. In the clubhouse different groups of people are playing chess, dominoes, poker, and so on. There are coaches, spectators, trainers, and managers involved in each game. Surrounding the recreation complex is wilderness.

These games we might use to characterize different cultural groups: French Catholics, German Protestants, scientists, Enlightenment rationalists, European socialists, liberal humanitarians, American democrats, free thinkers, or what have you. The variety represents the rich diversity of intellectual, ethnic, political, and other activities.

The situation is not static, of course. Some games have far fewer players and fans than they once did, and their popularity is shrinking; some are gaining popularity rapidly and are increasingly taking over parts of the territory available. Thus, the traditional sport of Aboriginal lacrosse is but a small remnant of what it was before contact, while the democratic capitalist game of baseball is growing exponentially, as is the materialistic science game of archery. These last two might even combine their efforts to create a new game or merge their leagues.

When Nietzsche looks at Europe historically, what he sees is that different games have been going on like this for centuries. He further sees that many participants in any one game have been aggressively convinced that their game is the "true" game, that it corresponds with the essence of games or is a close match to the wider game they imagine going on in the natural world, in the wilderness beyond the playing fields. So they have spent much time producing their rule books and coaches' manuals and making claims about how the principles of their game copy or reveal or approximate the laws of nature. This has promoted, and still promotes, a good deal of bad feeling and fierce argument. In addition, within the group pursuing any one game there have always been all sorts of sub-games debating the nature of the activity, refining the rules, arguing over the correct version of the rule book or about how to educate the referees and coaches, and so on.

Nietzsche's first goal is to attack this dogmatic claim about the truth of the rules of any particular game. He does this, in part, by appealing to the tradition of historical scholarship which shows that these games are not eternally true, but have a history. Rugby began when a soccer player broke the rules, picked up the ball, and ran with it. American football developed out of rugby, has changed, and is still changing. Basketball had a precise origin that can be historically located.

Rule books are written in languages that have a history, by people with a deep psychological point to prove: the games are an unconscious expression of the particular desires of inventive games people at a very particular historical moment. These rule writers are called Plato, Augustine, Socrates, Kant, Schopenhauer, Descartes, Galileo, and so on. For various reasons they believe, or claim to believe, that the rules they come up with reveal something about the world beyond the playing field and are therefore "true" in a way that other rule books are not; they have, as it were, privileged access to reality and thus record, to use a favourite metaphor of Nietzsche's, the text of the wilderness.

In attacking such claims, Nietzsche points out that the wilderness bears no relationship at all to any human invention like a rule book. Nature, he observes, is "wasteful beyond measure, without purposes and consideration, without mercy and justice, fertile and desolate and uncertain at the same time; imagine indifference itself as a power - how could you live according to this indifference? Living - is that not precisely wanting to be other than this nature?" Because there is no connection with what nature truly is, such rule books are mere "foreground" pictures, fictions dreamed up, reinforced, altered, and discarded for contingent historical reasons. Moreover, the rule manuals often bear a suspicious resemblance to the rules of grammar of a culture. Thus, for example, the notion of an ego as a thinking subject, Nietzsche points out, is closely tied to the rules of European languages that insist on a subject-verb construction as an essential part of any statement.

So how do we know that what we have is the truth? Why do we want the truth, anyway? People seem to need to believe that their games are true, but why? Might they not be better off if they accepted that their games were false, were fictions, having nothing to do with the reality of nature beyond the recreational complex? If they understood that everything they believe in has a history and that, as he says in the Genealogy of Morals, "only that which has no history can be defined," they would understand that all this proud history of searching for the truth is something quite different from what the philosophers who have written the rule books proclaim.

Furthermore, these historical changes and developments occur accidentally, for contingent reasons, and have nothing to do with the games, or any one game, shaping itself according to any ultimate game or any master rule book given by the wilderness, which is indifferent to what is going on. There is no basis for the belief that, if we look at the history of the development of these games, we will discover some progressive evolution of games toward some higher type. We may be able, like Darwin, to trace historical genealogies, to construct a narrative, but that narrative does not reveal any clear direction or any final goal or any progressive development. The genealogy of games suggests that history is a record of contingent change. The assertion that there is such a thing as progress is simply another game, another rule invented by inventive minds (who need to believe in progress); it bears no relationship to nature beyond the sports complex.

While one is playing on a team, one follows the rules and thus has a sense of what constitutes right and wrong, or good and evil, conduct in the game. All those engaged in the same endeavour share this awareness. To pick up the ball in soccer is evil (unless you are the goalie), and to punt the ball while running in American football is permissible but stupid; in Australian football both actions are essential and right. In other words, different cultural communities have different standards of right and wrong conduct. These standards are determined by artificial inventions called rule books, one for each game. The rule books have developed historically; thus, they have no permanent status and no claim to privileged access.

Now, at this point you might be thinking of Aristotle, who, in the Ethics, acknowledges that different political systems have different rules of conduct. Still, Aristotle believes that an examination of different political communities will enable one to derive certain principles common to them all, bottom-up generalizations that will then provide the basis for reliable rational judgment about which game is being played better, about what counts as good play in any particular game, about whether or not a particular game is being conducted well.

In other words, Aristotle maintains that there is a way of discovering and appealing to some authority outside any particular game to adjudicate moral and knowledge claims that arise in particular games or in conflicts between different games. Plato, of course, also believed in the existence of such a standard, but proposed a different route to discovering it.

Now Nietzsche emphatically denies this possibility. Anyone who tries to do what Aristotle recommends is simply inventing another game (we can call it Super-sport) and is not discovering anything true about the real nature of games, because reality (that is, the wilderness surrounding us) is not organized as a game. In fact, he argues, we have created this recreational complex and all the activities that go on in it to protect ourselves from nature (which is indifferent to what we do with our lives), not to copy some recreational rule book that the wilderness reveals. Human culture exists as an affirmation of our opposition to nature, as a contrast with it, not as an extension of rules that include both human culture and nature. That is why falsehoods about nature might be a lot more useful than truths, if they enable us to live more fully human lives.

If we think of the wilderness as a text about reality, as the truth about nature, then, Nietzsche claims, we have no access at all to that text. What we do have access to is conflicting interpretations, none of them based on privileged access to a "true" text. Thus, the soccer players may think that they and their game are superior to rugby and the rugby players because soccer more closely represents the surrounding wilderness, but such statements about better and worse are irrelevant. There is no rule book outside the games themselves. Therefore, all dogmatic claims about the truth of all games, or of any particular game, are false.

Now, how did this situation come about? Well, there was a time when all Europeans played much the same game and had done so for many years. Having little or no historical knowledge, and sharing the same head coach in the Vatican and the same rule book, they believed that their game was the only one possible and had been around for ever. So they naturally believed that their game was true. They shored up that belief with appeals to scripture, or to eternal forms, or to universal principles, or to rationality or science or whatever. There were many quarrels about the nature of ultimate truth, that is, about just how one should tinker with the rule book, about what provided access to God's rules, but there was agreement that such access must exist.

Take, for example, the offside rule in soccer. Without it the game could not continue in its traditional way. Therefore, soccer players see the offside rule as an essential part of their reality, and since soccer is the only game in town and we have no idea of its history (which might, for example, tell us about the invention of the offside rule), the offside rule is easy to interpret as a universal, a requirement for social activity, and we will find and endorse scriptural texts that reinforce that belief. Our scientists will devote their time to linking the offside rule with the mysterious rumblings that come from the forest. From this, one might be led to conclude that the offside rule is a Law of Nature, something that extends far beyond the realm of our particular game into all possible games and, beyond those, into the realm of the wilderness itself.

Of course, there were powerful social and political forces (the coaches and trainers and owners of the teams) who made sure that people had lots of reasons for believing in the unchanging verity of present arrangements. So it is not surprising that we find plenty of learned books, training manuals, and locker room exhortations urging everyone to remember the offside rule and to castigate as "bad" those who routinely forget that part of the game. We will also worship those who died in defence of the offside rule. Naturally, any new game that did not recognize the offside rule would be a bad game, an immoral way to conduct oneself. So if some group tried to start a game with a different offside rule, that group would be attacked because they had violated a rule of nature and were thus immoral.

However, for contingent historical reasons, Nietzsche argues, that situation of one game in town did not last. Developments in historical scholarship broke up the recreational unity of the area, for they demonstrated all too clearly that the various attempts to show that one particular game was the one true game, exempt from history in a way the others were not, were false, dogmatic, trivial, self-deceiving, and so on.

For science has revealed that the notion of a necessary connection between the rules of any game and the wider purposes of the wilderness is simply an ungrounded assertion. There is no way in which we can make the connection between the historically derived fictions in the rule book and the mysterious and ultimately unknowable directions of irrational nature. To carry on with science, we have to believe in causes and effects, but there is no way we can prove that this is a true belief, and there is a danger for us if we simply ignore that fact. Therefore, we cannot prove a link between the game and anything outside it. History has shown us, just as Darwin's natural history has shown, that all apparently eternal things have a story, a line of development, a genealogy. Thus, notions like species have no permanent reality - they are temporary fictions imposed for the sake of defending a particular arrangement.

So, God is dead. There is no eternal truth any more, no rule book in the sky, no ultimate referee or international Olympic committee chair. Nietzsche did not kill God; history and the new science did. Nietzsche is only the most passionate and irritating messenger, announcing over the intercom system to anyone who will listen that anyone like Kant or Descartes or Newton who thinks that what he or she is doing is grounded in the truth of nature has simply been mistaken.

This insight is obvious to Nietzsche, and he is troubled that no one else seems worried about it or even to have noticed it. So he has been moved to call the matter to our attention as stridently as possible, because he thinks that this realization requires a fundamental shift in how we live our lives.

For Nietzsche, Europe is in crisis. It has a growing power to make life comfortable and an enormous energy. However, people seem to want to channel that energy into arguing about what amount to competing fictions and into forcing everyone to follow particular fictions.

Why is this insight so worrying? Well, one point is that dogmatists get aggressive. Soccer players and rugby players who forget what Nietzsche is pointing out can start killing each other over questions that admit of no answer, namely, questions about which group has the true game, which ordering has privileged access to the truth. Nietzsche senses that dogmatism is going to lead to warfare, and he predicts that the twentieth century will see an unparalleled extension of warfare in the name of competing dogmatic truths. Part of his project is to wake up the people who are intelligent enough to respond to what he is talking about, so that they can recognize the stupidity of killing each other for an illusion that they mistake for some "truth."

Besides that, Nietzsche, like Mill (although in a very different way), is seriously concerned about the possibilities for human excellence in a culture where the herd mentality is taking over, where Europe is developing into competing herds - a situation that is either sweeping up the best and the brightest or stifling them entirely. Nietzsche, like Mill and the ancient pre-Socratic Greeks to whom he constantly refers, is an elitist. He wants the potential for individual human excellence to be liberated from the harnesses of conformity, group competition, and conventional morality. Otherwise, human beings are going to become destructive, lazy, conforming herd animals, using technology to divert themselves from the greatest joys in life, which come only from individual striving and creativity, activities that require one to release one's instincts without keeping them eternally subjugated to a controlling historical consciousness or a conventional morality of good and evil.

What makes this particularly a problem for Nietzsche is that he sees that a certain form of game is gaining popularity: Democratic volleyball. In this game, the rule book insists that all players be treated equally, that there be no natural authority given to the best players or to those who understand the nature of quality play. Therefore the mass of inferior players is taking over, the quality of the play is deteriorating, and there are fewer and fewer good volleyball players. This process is being encouraged both by the traditional ethic of "help your neighbour," now often in a socialist uniform, and by modern science. As the mass of inferior players takes over the sport, the mindless violence of their desires to attack other players and take over their games increases, as does their hostility to those who are uniquely excellent (who may need a mask to avoid being recognized).

The hopes for any change in this development are not good. In fact, things might be getting worse. For when Nietzsche looks at all these games going on he notices certain groups of people, and the prospect is not totally reassuring.

First, there remain the overwhelming majority of people: the players and the spectators, those caught up in their particular sport. These people are, for the most part, continuing as before without reflecting or caring about what they do. They may be vaguely troubled by rumours they hear that their game is not the best, they may be bored with the endless repetition in the schedule, and they may have reconciled themselves to the fact that theirs is not the only game going on, but they would rather not think about it. Or else, stupidly confident that what they are doing is what really matters in human life, that it is true, they preoccupy themselves with tinkering with the rules, using the new technology to get better balls, more comfortable seats, louder whistles, more brightly painted side lines, trendier uniforms, tastier Gatorade - all in the name of progress.

Increasing numbers of people are moving into the stands or participating through the newspaper or the television set. Most people are thus, in increasing numbers, losing touch with themselves and their potential as instinctual human beings. They are the herd, the last men, preoccupied with the trivial, unreflectingly conformist because they think, to the extent they think at all, that what they do will bring them something called "happiness." Yet they are not happy: They are in a permanent state of narcotized anxiety, seeking new ways to entertain themselves with the steady stream of distractions that the forces of the market produce: Technological toys, popular entertainment, college education, Wagner's operas, academic jargon.

This group, of course, includes all the experts in the game, the cheerleaders whose job it is to keep us focussed on the seriousness of the activity, and the sports commentators and pundits, whose lives are bound up with interpreting, reporting, and classifying players and contests. These sportscasters are, in effect, the academics and government experts, the John Maddens and Larry Kings and Mike Wallaces of society, those demigods of the herd, whose authority derives from the false notion that what they are dealing with is something other than a social fiction.

There is a second group of people, who have accepted the ultimate meaninglessness of the game in which they were engaged. They have moved to the sidelines, not as spectators or fans, but as critics, as cynics or nihilists, dismissing out of hand all the pretensions of the players and fans, but not affirming anything themselves. These are the souls who, having nothing to will (because they have seen through the fiction of the game and therefore have no motive to play any more), prefer to will nothing in a state of paralysed skepticism. Nietzsche has a certain admiration for these people, but maintains that a life like this, the nihilist on the sidelines, is not a human life.

For, Nietzsche insists, to live as a human being is to play a game. Only in playing a game can one affirm one's identity, create values, truly exist. Games are the expression of our instinctual human energies, our living drives, what Nietzsche calls our "will to power." So the nihilistic stance, though understandable and, in a sense, courageous, is sterile. For we are born to play, and if we do not, then we are not fulfilling a worthy human function. At the same time, we have to recognize that all games are equally fictions, invented human constructions without any connection to the reality of things.

So we arrive at the position that we need to affirm a belief (invent a rule book) which we know to have been invented, to be divorced from the truth of things. To play the best game is to live by rules that we invent for ourselves as an assertion of our instinctual drives and to accept that the rules are fictions: they matter, we accept them as binding, we judge ourselves and others by them, and yet we know they are artificial. Just as in real life a normal soccer player derives a sense of meaning during the game, affirms his or her value in the game, without ever once believing that the rules of soccer have organized the universe or that those rules have any universal validity, so we must commit ourselves to epistemological and moral rules that enable us to live our lives as players, while simultaneously recognizing that these rules have no universal validity.

The nihilists have discovered half this insight, but, because they cannot live the full awareness, they are very limited human beings.

The third group of people, that small minority which includes Nietzsche himself, consists of those who accept the game metaphor, see the fictive nature of all systems of knowledge and morality, and accept the challenge that to be most fully human is to create a new game, to live a life governed by rules imposed by the dictates of one's own creative nature. To base one's life on the creative tensions of the artist engaged in creating a game that meets most eloquently and uncompromisingly the demands of one's own irrational nature - one's will - is to be most fully free, most fully human.

This call to live the self-created life, affirming oneself in a game of one's own devising, necessarily condemns the highest spirits to loneliness, doubt, insecurity, and emotional suffering, because most people will mock the new game, or be actively hostile to it, or refuse to notice it, and so on; or else they will accept the challenge but misinterpret what it means and settle for some marketed easy game, like floating down the Mississippi smoking a pipe. Nevertheless, a self-created game also brings with it the most intense joy, the most playful and creative affirmation of what is most important in our human nature.

Note here that one's freedom to create one's own game is limited. In that sense, Nietzsche is no existentialist maintaining that we have a duty and an unlimited freedom to be whatever we want to be. For the resources at our disposal - the parts of the field still available and the recreational material lying around in the club house - are determined by the present state of our culture. Furthermore, the rules I devise and the language I frame them in will ordinarily owe a good deal to the present state of the rules of other games and the state of the language in which those are expressed. Thus, in changing the rules of my game, my starting point - the rules available to be changed - is given to me by my moment in history. So in moving forward, in creating something that will transcend the past, I am using the materials of the past. Existing games are the materials out of which I fashion my new game.

Thus, the new philosopher will transcend the limitations of the existing games and will extend the catalogue of games with the invention of new ones, but that new creative spirit faces certain historical limitations. If this is relativistic, it is not totally so.

The value of this endeavour is not to be measured by what other people think of the newly created game; nor does its value lie in fame, material rewards, or service to the group. Its value comes from the way it enables the individual to manifest certain human qualities, especially the will to power. Whether or not the game attracts other people and becomes a permanent fixture on the sporting calendar, something later citizens can derive enjoyment from or even remember, is irrelevant. For only the accidents of history determine whether the game I invent for my own purposes attracts other people, that is, becomes a source of value for them.

Nietzsche claims that the time is right for such a radically individualistic endeavour to create new games, new metaphors for one's life. For, wrongheaded as many traditional games may have been, like Plato's metaphysical soccer or Kant's version of eight ball, or Marx's materialist chess tournament, or Christianity's stoical snakes and ladders, they have splendidly trained us for the much more difficult work of creating values in a spirit of radical uncertainty. These exertions have trained our imaginations and intelligence in useful ways. So, although those dogmatists were unsound, an immersion in their systems has done much to refine those capacities we most need to rise above the nihilists and the herd.

Now, this analogy has been put on the table for our consideration, to clarify some central points about Nietzsche. The metaphor is not so arbitrary as it may appear, because this very notion of systems of meaning as invented games is a central metaphor of twentieth-century thought, and those who insist upon it as often as not point to Nietzsche as their authority.

So, for example, when certain postmodernists insist that the major reason for engaging in artistic creativity or literary criticism or any form of cultural life is to awaken a spirit of creative play that is far more central than any traditional sense of meaning or rationality or even coherence, we can see the spirit of Nietzsche at work.

Earlier in this century, as we will see in the discussions of early modern art, a central concern was the possibility of recovering some sense of meaning, of recreating or discovering a sense of "truth" of the sort we had in earlier centuries. Marxists were determined to assist history in producing the true meaning toward which we were inexorably heading. To the extent that we can characterize postmodernism simply at all, we might say that it marks a turning away from such responses to the modern condition and an embrace, for better or worse, of Nietzsche's joyful affirmation of the irrationality of the world and the fictive quality of all that we create to deal with life.

One group we can quickly identify is those who have embraced Nietzsche's critique, who appeal to his writing to endorse their view that the search to ground our knowledge and moral claims in Truth is futile, and that we must therefore recognize the imperative Nietzsche laid before us to create our own lives, to come up with new self-descriptions affirming the irrational basis of our individual humanity. This position has been loosely termed Antifoundationalism. Two of its most prominent and popular spokespersons in recent years have been Richard Rorty and Camille Paglia. Within humanities departments the Deconstructionists (with Derrida as their guru) head the Nietzschean charge.

Antifoundationalists link Nietzsche closely with Kuhn and with Dewey (whose essay on Darwin we read) and sometimes with Wittgenstein, and take central aim at anyone who would claim that some form of enquiry, like science, rational ethics, Marxism, or traditional religion, has any form of privileged access to reality or the truth. The political stance of the Antifoundationalists tends to be radically romantic or pragmatic. Since we cannot ground our faith in any public morality or political creed, politics becomes something far less important than personal development, or else we have to conduct our political life simply on a pragmatic basis, following the rules we can agree on, without according those rules any universal status or grounding in eternal principles. If mechanistic science is something we find, for accidental reasons of history, useful, then we will believe it for now. Thus, Galileo's system was adopted, not because it was true or closer to the truth than what it replaced, but simply because the vocabulary he introduced into our descriptions was something we found agreeable and practically helpful. When it ceases to fulfill our pragmatic requirements, we will gradually change to another vocabulary, another metaphor, another version of a game. History shows that such a change will occur, but how and when it will take place or what the new vocabulary might be - these questions will be determined by the accidents of history.

Similarly, human rights are important, not because there is any rational, non-circular proof that we ought to act according to these principles, but simply because we have agreed, for accidental historical reasons, that these principles are useful. Such pragmatic agreements are all we have for public life, because, as Nietzsche insists, we cannot justify any moral claims by appeals to the truth. So we can agree about a schedule for the various games and about distributing the budget between them, and we can, as a matter of convenience, set certain rules for our discussions, but only as a practical requirement of our historical situation, not by appeal to any divine sanction or to the rationality of any system.

A second response is to reject the Antifoundationalist and Nietzschean claim that no language has privileged access to the reality of things, to assert, that is, that Nietzsche is wrong in his critique of the Enlightenment. Plato's project is not dead, as Nietzsche claimed, but alive and well, especially in the scientific enterprise. We are discovering ever more about the nature of reality. There may still be a long way to go, and nature might be turning out to be much more complex than the early theories suggested, but we are making progress. By improving the rule book we will modify our games so that they more closely approximate the truth of the wilderness.

To many scientists, for example, the Antifoundationalist position is either irrelevant or just plain wrong, an indication that social scientists and humanities types do not understand the nature of science or are suffering a bad attack of sour grapes because of the prestige the scientific disciplines enjoy in the academy. The failure of the social scientists (after generations of trying) to come up with anything approaching a reliable law (like, say, Newton's laws of motion) has shown the pseudoscientific basis of their disciplines, and unmasks their turn to Nietzschean Antifoundationalism as a feeble attempt to justify their presence in the modern research university.

Similarly, Marxists would reject Antifoundationalism as a remnant of aristocratic bourgeois capitalism, an ideology designed to take intellectuals' minds off the realities of history, the truth of things. There is a truth grounded in a materialist view of history, and Antifoundationalism serves only to divert philosophers away from social injustice. No wonder the most ardent Nietzscheans in the university have no trouble getting support from the big corporate interests and their bureaucratic subordinates: The Ford Foundation and the National Endowment for the Humanities. Within the universities and many humanities and legal journals, some of the liveliest debates go on between the Antifoundationalists, allied with the Deconstructionists, under the banner of Nietzsche, and the historical materialists and many feminists under the banner of Marx.

Meanwhile, there has been a revival of interest in Aristotle. The neo-Aristotelians agree with Nietzsche's critique of the Enlightenment rational project - that we are never going to be able to derive a sense of human purpose from scientific reason - but assert that sources of value and knowledge are not simply contingent but arise from communities, and that what we need to sort out our moral confusion is a reassertion of Aristotle's emphasis on human beings, not as radical individuals with an identity prior to their political and social environment, but as political animals whose purpose and value are deeply and essentially rooted in their community. A leading representative of this position is Alasdair MacIntyre.

Opposing such a communitarian emphasis, a good deal of the modern Liberal tradition points out that such a revival of traditions simply will not work. The breakdown of the traditional communities and the widespread perception of the endemic injustice of inherited ways cannot be reversed (appeals to Hobbes here are common). So we need to place our faith in the rational liberal Enlightenment tradition and look for universal rational principles, human rights, rules of international morality, justice based on an analysis of the social contract, and so on. An important recent example of such a view is Rawls's famous book A Theory of Justice.

Finally, there are those who again agree with Nietzsche's analysis of the Enlightenment and thus reject the optimistic hopes of rational progress, but who deny Nietzsche's proffered solution. To see life as irrational chaos that we must embrace, and such joyous affirmation as the value-generating activity of our human lives, while recognizing its ultimate meaninglessness, seems to many people like a prescription for insanity. What we, as human beings, must have to live a fulfilled human life is an image of eternal meaning. This we can derive only from religion, which provides for us, as it always has, a transcendent sense of order, something that answers to our essential human nature far more deeply than either the Enlightenment faith in scientific rationality or Nietzsche's call to a life of constant metaphorical self-definition.

To read the modern debates over literary interpretation, legal theory, human rights issues, education curricula, feminist issues, ethnic rights, communitarian politics, or a host of other similar issues is to come repeatedly across the clash of these different positions (and others). To use the analogy I started with, activities on the playing fields are going on more energetically than ever. Right in the middle of most of these debates, and generously scattered throughout the footnotes and bibliographies, Nietzsche's writings are alive and well. To that extent, his ideas are still something to be reckoned with. He may have started by shouting over the loudspeaker system in a way to which no one bothered attending; now, on many playing fields, the participants and fans are considering and reacting to his analysis of their activities. So Nietzsche today is, probably more than ever before in this century, right in the centre of some vital debates over cultural questions.

You may recall how, in Book X of the Republic, Plato talks about the "ancient war between poetry and philosophy." What this seems to mean, from the argument, is an ongoing antagonism between different uses of language: between language that seeks above all denotative clarity - the language of exact definitions and precise logical relationships - and language whose major quality is its ambiguous emotional richness; between, that is, the language of geometry and the language of poetry (or, simply put, between Euclid and Homer).

Another way of characterizing this dichotomy is to describe it as the tension between a language appropriate to discovering the truth and one appropriate to creating it: between, that is, a language that sets itself up as an exact description of a given order and a language that sets itself up as an ambiguous poetic vision of, or an analogy to, a natural or cosmic order.

Plato, in much of what we studied, seems clearly committed to a language of the former sort. Central to his course of studies that will produce guardian rulers is mathematics, which is based upon the most exact denotative language we know. Hence the famous inscription over the door of the Academy: "Let no one enter here who has not studied geometry." Underlying Plato's remarkable suspicion of a great deal of poetry, and particularly of Homer, is this attitude to language: Poetic language is suspect because, being based on metaphors (figurative comparisons or word pictures), it is at a third remove from the truth. In addition, it speaks too strongly to the emotions and thus may unbalance the often tense equilibrium needed to keep the soul in a healthy state.

One needs to remember, however, that Plato's attitude to language is very ambiguous, because, in spite of his obvious endorsement of the language of philosophy and mathematics, in his own style he is often a poet, a creator of metaphor. In other words, there is a conflict between his strictures on metaphor and his adoption of so many metaphors (the central one of the dramatic dialogue itself is only the most obvious). Many famous and influential passages from the Republic, for example, are not arguments but poetic images or fictional narratives: The Allegory of the Cave, the image of the Sun, the Myth of Er.

Plato, in fact, has always struck me as someone who was deeply suspicious of poetry and metaphor because he responded to them so strongly. Underlying his sometimes harsh treatment of Homer may be the imagination of someone who is all too responsive to poetry (conversely, Aristotle's more lenient view of poetry may stem from the fact that he did not feel its effects so strongly). If we were inclined to adopt Nietzsche's interpretation of philosophy, we might be tempted to see in Plato's treatment of Homer and his stress on the dangers of poetic language his own "confession" of weakness. His work is, in part, an attempt to fight his own strong inclination to prefer metaphoric language.

Nietzsche stands among the most insightful and powerful critics of the moral climate of the 19th century (and of what remains of it in ours). His explorations acknowledge unconscious motivation and the conflict of opposing forces within the mind, as well as the possibilities of creative integration. Freud and Nietzsche both elaborate upon the whole field of libidinal economy: The transit of the libido through other selves, aggression, the infliction and reception of pain, and something very much like death, the total evacuation of the entire quantum of excitation with which the organism is charged.

Nietzsche suggests that in our concern for the other, in our sacrifice for the other, we are concerned with ourselves, one part of ourselves represented by the other. That for which we sacrifice ourselves is unconsciously related to as another part of us. In relating to the other we are in fact also relating to a part of ourselves, and we are concerned with our own pleasure and pain and our own expression of will to power. In one analysis of pity, Nietzsche states that “we are, to be sure, not consciously thinking of ourselves but are doing so strongly unconsciously.” He goes on to suggest that it is primarily our own pleasure and pain that we are concerned about, and that the feelings and reactions that follow are multi-determined: “We never do anything of this kind out of one motive.”

The real world is flux and change for Nietzsche, but in his later works there is no “unknowable true world.” The split between a surface, apparent world and an unknowable but true world of things-in-themselves was, as is well known, a view Nietzsche rejected. For one thing, as Mary Warnock points out, Nietzsche was attempting to get across the point that there is only one world, not two. She also suggests that, for Nietzsche, if we contribute anything to the world, it is the idea of a “thing,” and, in Nietzsche’s words, “the psychological origin of the belief in things forbids us to speak of things-in-themselves.”

Nietzsche holds that there is an extra-mental world to which we are related and with which we have some kind of fit. For him, even as knowledge develops in the service of self-preservation and power, to be effective a conception of reality must grasp a certain amount of, or aspect of, reality (but only that). However much Nietzsche may at times present artistic creation and dissimulation (out of chaos) as paradigmatic for science (which will not recognize it as such), in arriving at this position Nietzsche assumes the truth of scientifically based beliefs as a foundation for many of his arguments, including those regarding the origin, development, and nature of perception and consciousness, and what this entails for our knowledge and falsification of the external and inner world. In fact, to some extent the form-providing, affirmative, this-worldly healing of art is a response to the terrifying, nausea-inducing truths revealed by science, which itself has no treatment for the underlying cause of the nausea. Nietzsche also writes of the horrifying existential truths against which science can attempt a [falsifying] defence. Nevertheless, while there is a real world to which we are affiliated, there is no sensible way to speak of a nature or constitution or eternal essence of the world in and of itself apart from description and perspective. The states of affairs to which our interpretations are to fit are established within human perspectives and reflect (but not only) our interests, concerns, and needs for calculability. Within such relations (and perhaps as meta-commentary on the grounds of our knowing) Nietzsche is quite willing to write of the truth, the constitution of reality, and the facts of the case. There is no unrestricted will to truth, nor any privileged access to absolute truth. To expect a pure desire for a pure truth is to expect an impossible desire for an illusory ideal.

The inarticulate comes to rule supreme in oblivion, either in the individual’s forgetfulness or in those long stretches of the collective past that have never been and will never be called forth into the necessarily incomplete articulations of history, the record of human existence that is profusely interspersed with dark passages. This accounts for the continuous questing of archeology, palaeontology, anthropology, and geology, and accounts, too, for Nietzsche’s warning against the “insomnia” of historicism. As for the individual, the same drive is behind the modern fascination with the unconscious and, thus, with dreams, and it was Nietzsche who, before Freud, spoke of forgetting as an activity of the mind. At the beginning of his Genealogy of Morals, he claims, in defiance of all psychological “shallowness,” that the lacunae of memory are not merely “passive” but the outcome of an active and positive “screening,” preventing us from remembering what would upset our equilibrium. Nietzsche is thus the first discoverer of successful “repression,” the burying of potential experience in the unarticulated - which is, as it were, enemy territory for him.

Still, he is notorious for stressing the ‘will to power’ that is the basis of human nature, the ‘resentment’ that arises when it is denied its basis in action, and the corruptions of human nature encouraged by religions, such as Christianity, that feed on such resentment. Yet the powerful human being who escapes all this, the ‘Übermensch’, is not the ‘blond beast’ of later fascism: It is a human being who has mastered passion, risen above the senseless flux, and given creative style to his or her character. Nietzsche’s free spirits recognize themselves by their joyful attitude to eternal return. He frequently presents the creative artist, rather than the warlord, as his best exemplar of the type, but the disquieting fact remains that he seems to leave himself no words with which to condemn any uncaged beast of prey who finds his style by exerting repulsive power over others. Nietzsche’s frequently expressed misogyny does not help this problem, although in such matters the interpretation of his many-layered and ironic writing is not always straightforward. Similarly, such anti-Semitism as is found in his work is balanced by equally intense denunciations of anti-Semitism, and by an equal or greater contempt for the German character of his time.

Nietzsche’s current influence derives not only from his celebration of the will, but more deeply from his scepticism about the notions of truth and fact. In particular, he anticipated many central tenets of postmodernism: An aesthetic attitude toward the world that sees it as a ‘text’; the denial of facts; the denial of essences; the celebration of the plurality of interpretations and of the fragmented, politicized nature of discourse - all of which were awaiting their rediscovery in the late 20th century. Nietzsche also has the incomparable advantage over his followers of being a wonderful stylist, and his perspectivism is echoed in the shifting array of literary devices - humour, irony, exaggeration, aphorisms, verse, dialogue, parody - with which he explores human life and history.

All the same, Nietzsche is openly pessimistic about the possibility of knowledge: ‘We simply lack any organ for knowledge, for ‘truth’: We ‘know’ (or believe or imagine) just as much as may be useful in the interests of the human herd, the species, and perhaps precisely that most calamitous stupidity of which we shall perish some day’ (The Gay Science).

This position is very radical, for Nietzsche does not simply deny that knowledge, construed as the adequate representation of the world by the intellect, exists. He also refuses the pragmatist identification of truth with usefulness: he writes that we think we know what we think is useful, and that we can be quite wrong about the latter.

Nietzsche’s view, his ‘perspectivism’, depends on his claim that there is no sensible conception of a world independent of human interpretation, to which interpretations would correspond if they were to constitute knowledge. He sums up this highly controversial position in The Will to Power: ‘Facts are precisely what there is not, only interpretations’.

It is often maintained that perspectivism is self-undermining: if the thesis that all views are interpretations is true, then, it is argued, that thesis at least is not an interpretation. If, on the other hand, the thesis is an interpretation, then there is no reason to believe that it is true, and it follows again that not every view is an interpretation.

Nonetheless, this refutation assumes that if a view, such as perspectivism, is an interpretation, it is by that very fact wrong. This is not so, however: to call a view an interpretation is to say that it can be wrong, which is true of all views, and that is not a sufficient refutation. To show that perspectivism is actually false, it is necessary to produce another view superior to it on specific epistemological grounds.

Perspectivism does not deny that particular views can be true. Like some versions of contemporary anti-realism, it attributes to specific approaches truth in relation to the facts they address. Still, it refuses to envisage a single independent set of facts to be accounted for by all theories. Thus, Nietzsche grants the truth of specific scientific theories; he does, however, deny that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’ (The Gay Science): neither the facts science addresses nor the methods it employs, which serve the purposes for which they have been devised, have any priority over the many other purposes of human life.

The existence of many purposes and needs in relation to which theories are evaluated, another crucial element of perspectivism, is sometimes thought to imply relativism, according to which no standards for evaluating purposes and theories can be devised. This is correct only in that Nietzsche denies the existence of a single set of standards for determining epistemic value. However, he holds that specific views can be compared with and evaluated in relation to one another. The ability to use criteria acceptable in particular circumstances does not presuppose the existence of criteria applicable in all. Agreement is therefore not always possible, since individuals may sometimes differ over the most fundamental issues dividing them.

Nonetheless, this fact would not trouble Nietzsche. His opponents, too, have to confront it, only, as he would argue, they suppress it by insisting on the hope that all disagreements are in principle eliminable, even if our practice falls woefully short of the ideal. Nietzsche abandons that ideal; he considers irresoluble disagreement an essential part of human life.

Nature is the most apparent display of the will to power at work. It is wholly unconscious and acts solely out of necessity, such that no morality is involved. We are a part of this frightening chaos where anything can happen at any time. However, it requires far too much intelligence for us to realize and rightly accept this totally. So we invent reasons for things that have no reason. We believe in our own falsification of nature. We produce art, and delight in a perfection that is unnatural. All the same, we dwell within nature even while we fool ourselves about it. Nietzsche stresses that our falsifications of nature, the ways in which we yield an acceptable appearance by surrendering to some part of nature, are accomplished out of necessity. They are instincts we have developed for our own preservation. He believes the natural state is the best state, even in all its wantonness, and he calls people to open their ears to the purity of a nature without design. ‘The universe's music box repeats eternally its tune, which can never be called a melody’.

The Gay Science explains the problems with humanity's humanizing of nature. It is a fitting departure point because, through criticism, it states Nietzsche's regard for the unconsciousness of the will to power in nature. He fills this section with warnings: “Let us beware of thinking that the universe is a living being,” he says. “Where should it expand?” “On what should it feed?” The universe lacks its own will to power. We can in no way identify with the universe, despite all our efforts. ‘We should not make it something essential, universal, and eternal’. Nietzsche is dispelling the notion that there is meaning in existence. He is saying that when all is said and done, our universe does not matter. After all, it will destroy and create itself into eternity. Nor does it have a purpose, like a machine. Humans seek honour in the universe, and we find honour despite the absence of any purposive design. Tricking ourselves is easy, for we have become conceited enough to believe that we are the purpose of the universe, as if all the power in the universe were working toward producing our species of mammals. Yet let us be reasonable. Nietzsche calls the organic an “exception of exceptions.” Within matter it is an exception: we are not the secret aim, but a by-product of unusual circumstances. It is an error to assume that all of space behaves in the manner of that which immediately surrounds us. We cannot be sure of this uniformity. Nietzsche uses our surrounding stars as an example: stars may very well exist whose orbits are not at all what we suppose. ‘Let us beware of attributing to it heartlessness and unreason and their opposites’. There is no intent, nor is there any such thing as accident, since accident presupposes purpose. All these things are disguises man has given the universe. They are false, and that is why we should beware. Nietzsche emphasizes our weakness as animals. We are the only animals that live against our natural inclinations. 
By suppressing our instincts, we become less and less equipped to exist as part of nature. If we continue living against our surroundings, we will be removed, not out of God's anger, but out of necessity.

Nietzsche reminds us that the total character of the world is in all eternity chaos. The only structure responsible for the necessity that reigns in nature is the will to power. The will to power begins in chaos. We find it unpleasant to think of our lives in these terms, because, stronger than our urge to deify nature is usually our urge to deify ourselves. We are merely living things. Let us beware of saying that death is opposed to life. The living is merely a type of what is dead, and a very rare type. There is no opposition, only will to power. The living and the dead are both made of the same basic materials. The difference is that when something is alive, its molecules reproduce. Again, Nietzsche focuses on the exceptions.

When will all these shadows of God cease to darken our minds? When will we complete our de-veneration of nature? When may we begin to ‘naturalize’ humanity through a pure, newly discovered, newly redeemed nature?

Nietzsche divides human beings according to their creative power. The higher, creative humans see and hear more than the lower, who concern themselves with the ordinary matters of man. This is a pattern found throughout nature: the higher animals experience more. In humans, the higher become at once happier and unhappier, because they feel more. Nietzsche calls these people the ‘poets’ who create the lives on stage, while the non-creative are the ‘actors’, better understood as spectators of the poets' performance. The poet thinks and feels in harmony with his time; he is able continually to fashion something that did not previously exist. He created the entire world of valuations, colours, accents, perspectives, scales, affirmations, and negations studied by the actor. In our society, the actor is called practical, when it is the poet who is responsible for any value we place. By this, Nietzsche means that, since nothing has any meaning or value by nature, the poets are responsible for value because they are the ones who produce beauty. They are responsible for everything in the world that concerns man. They fail to recognize this, however, and remain unaware of their best power. As a result, we are neither as proud nor as happy as we might be.

Our poets produce art. Art is the expression of a perfect beauty that does not occur naturally. Human hands have made it and human minds conceived it; it is human nature. We are separate from the rest of the animal kingdom in our deviation from nature. Our instincts lead us to delight in art because it distinguishes itself and its creators as supernatural. We must wonder at ourselves: have we become so bored with nature that we create something better, and perhaps more perfect, than nature itself can become? What does nature know of perfection? Art is the will to power, but a facet of it exhibited only in humans. All the same, it seems that art is meant to be as far removed as possible from everything natural. Nietzsche uses the Greeks as an example of this pure art. They did not want fear and pity. To prevent these human emotions from interfering in the presentation of a writer's work, the Greeks would confine actors to narrow stages and restrict their facial expressions. The object was beautiful speech, with the presentation only meant to do the words justice, not to distract with dramatic interpretation. A more modern example is the opera. Nietzsche points out the insignificance of the words versus the music. What is lost in not understanding an opera singer? In the present, art has degenerated so that its purpose is often to remind us of our humanity rather than to express that which is perfect. We listen for words that shackle us to the land in a medium that can elevate us above it. Art gives human life reason, purpose, and all the things we had attributed to God, but art is true. It is the only meaning in life, because it is unnatural.

A consideration of human autonomy is of particular interest here, in that for Nietzsche it conveys the predisposition toward the preservation of the species. This is the oldest and strongest of our instincts, and it is the essence of the herd. Why should we care about the survival of our race? It is not in our interest as individuals. Yet we cannot avoid it. Nietzsche points out that even the most harmful men aid preservation by instilling in the rest of us instincts that are necessary for our survival. In that way they are largely responsible for it. According to Nietzsche, we are no longer capable of ‘living badly’, that is, living in a way that goes against the preservation of the human race: even if, above all, you perish, you will contribute to humanity.

Nietzsche reflects back on the seventeenth and eighteenth centuries, comparing their attitudes toward their sense of nature. The seventeenth century was a time when humans lived closer to their instincts. An artist of that time would attempt to capture all that he could in art, removing himself from it as much as possible. In the eighteenth century, Nietzsche says, artists took the focus away from nature and put it on themselves. Art became social propaganda. It became more human. We are missing the ‘hatred of the lack of a sense of nature’ that was present in the seventeenth century. Nietzsche writes of the nineteenth century with hope. He says people have become more concrete, more fearless. They are putting the 'health of the body ahead of the health of the soul'. It would appear that they are making a return to nature, except that if they reach it, it will be for the first time.

There has never yet been a natural humanity, that is, a humanity living according to its nature. Nietzsche stresses that we have become far too unnatural in our social and technical evolution. Humans have tried to exist above nature, condemning their own world (in fact, themselves) as if they alone were alive by the grace of God. This is an obvious contradiction for Nietzsche. He believes that, since human beings share the same relationship with nature as any other animal, we ought to live according to our instincts. We ought to do what comes naturally to us, that which most reflects the will to power. He uses two examples of natural human behaviour (human instinct) to clarify. The first example is the search for knowledge, which occurs naturally in all humans, whether they are conscious of it or not. The second is the way we perceive our rights in society. This is an example of humans living according to necessity: we act according to that which can be enforced, and punishment is our only deterrent. In that respect we live naturally, but only in resignation, meeting nature superficially. If we could live in accord with that which governs our fellow creatures, we would discover our true selves and realize our full potential as artists.

Nietzsche regards the evolution of human nature as a journey from the age of morality into the age of consciousness, or the age of ridicule. He saw the humanity of his time still living according to teachers of remorse, its so-called ‘heroes.’ These men inspired our faith in life and our fear of death. They gave us false reasons for our existence, disguising it with the invention of a ‘teacher of the purpose of existence’. Nietzsche believes that the idea of God came about because it distracted people from their insignificance. What does one person count for in relation to the whole of society? Nothing; the preservation of the species is all that matters. We are mammals like any other. To gain power and leadership for themselves, by force or some orderly enforcement, the ‘heroes’ taught us that we were significant in the eyes of God. They taught us to take ourselves seriously, that life is worth living because of this 'teacher of purpose'. We cling to this safety blanket, which protects us from seeing ourselves at eye level with the rest of the natural world, because we cannot handle the true nature of the human race as a herd.

Nietzsche predicts that humans will evolve to the point where they can comprehend the true nature of their species. This realization brings ridicule into our lives. Nothing has any meaning; it is all random functioning at the hands of the will to power. We can no longer be solemn in our work, for how are we to take ourselves seriously when we have given up our blanket? It seems we have lost our direction amid the madness of nature and reason. We must remember that unreason, too, is essential for the preservation of the species.

In his writings (Essays on Aesthetics, Untimely Meditations, The Gay Science, and others) Nietzsche wishes to be considered by his readers, and viewed in and by history, as a psychologist, one who practises psychology and who must be credited, for his time, with willfully embracing a ‘new system of psychology’.

In fact, several authors, for instance Kaufmann and Golomb, view many aspects of Nietzsche’s work as psychological, a fact disregarded by many others who regard Nietzsche as a mere anti-philosopher and a writer of short, beautiful verse. Surely, while being a young, frustrated, physically and mentally ill, retired professor of Philology, who viciously attacked his colleagues, the state, society, and the establishment, and who wrote provocative verses and notes, Nietzsche also sought to bring into the scope of inquiry the nature of man, the unconscious and the conscious, self-analysis, relationships with other individuals, the inner state (emotions, sensations, feelings, and the like), and the irrational sources of man's power and greatness, together with his morbidity and self-destructiveness.

Further, in his many writings Nietzsche also talks of the mind, the mental, instincts, reflexes, reflexive movements, the brain, symbolic representations, images, views, metaphors, language, experiences, innate and hereditary psychological elements, defence and protective mechanisms, repression, suppression, overcoming, and an overall battle, struggle, and conflict between individuals. As an illustration, Nietzsche describes how blocked instinctual powers turn within the individual into resentment, self-hatred, hostility, and aggression. Moreover, Nietzsche strives to analyse the human being, his crises, his despair, and his existence in the world, and to find means to alleviate human crises and despair.

These aspects of Nietzsche's work elicit a tendency to compare Nietzsche's doctrine with that of Freud and psychoanalysis, and to argue that the Freudian doctrine and school (the psychoanalytic theory of human personality on which the psychotherapeutic technique of psychoanalysis is based) were influenced and affected by Nietzsche's philosophy and work, by the Nietzschean doctrine. As a demonstration from the relevant literature, according to Golomb's (1987) thesis, the theoretical core of psychoanalysis is already part and parcel of Nietzsche's philosophy, insofar as it is based on ideas that are both displayed in it and developed by it, ideas such as the unconscious, repression, sublimation, the id, the superego, primary and secondary processes, and the interpretation of dreams.

Nonetheless, the actual situation in the domains of psychotherapy, psychiatry, and clinical psychology is more complicated. The two savants (Nietzsche and Freud) both endeavour to understand man and to develop the healthy power still present in the individual and the neurotic patient, so that he may overcome the psychological boundaries that repress his vitality and inhibit his ability to function freely and creatively and to attain truth. Even so, the difference between the psychodynamic school, approach, movement, and method of treatment (usually psychoanalysis) that stemmed from the doctrines and views of Freud, and the existential approach to psychotherapy, the existential movement and the existential-humanistic school of psychology and method of treatment that stemmed from Nietzsche, is profound and significant for actual psychotherapeutic treatment. The reasons for this difference lie in the variation in the two savants' views and definitions of man and human existence, of the nature and character of man and his relationship with the world and the environment, and in the variation in the intellectual soil that nourished and nurtured the two savants' views and doctrines (that is, their philosophical and historical roots and influences) and the manners in which these were devised and designed.

Before anything else, there is the question of Nietzsche's historical critique. One narrative we have drawn from the texts, as we will recall, concerns the rapidly developing interest in, and use of, the enormously powerful historical criticism developed by Enlightenment thinkers: a way of undermining the authority of traditional power structures and the fundamental beliefs that sustain them.

We saw, for example, how in the Discourse on Method Descartes offers a hypothetical historical narrative to undermine the authority of the Aristotelians and of faith in an eternal, unchanging natural order. Then we discussed how in the Discourse on Inequality Rousseau, following Descartes's lead but extending it to other areas (and much more aggressively), uses an imaginative reconstruction of the history of human society to encourage in the mind of the reader the view that evil in life is the product of social injustice (rather than, say, the result of Original Sin or the lack of virtue in the lower orders). We have seen, in addition, in reading Kant, Marx, and Darwin, how a historical understanding applied to particular phenomena undercuts traditional notions of eternal truths enshrined in particular beliefs (whether in species, in religious values, or in final purposes).

Nonetheless, and this is a crucial point, the Enlightenment thinkers, particularly Kant, Rousseau, and Marx, do not allow history to undermine all sources of meaning. For them, beyond its unanswerable power to dissolve traditional authority, history holds out the promise of a new grounding for rational meaning in the growing power of human societies to become rational, to, in one word, progress. Thus history, beyond revealing the inadequacies of many traditional power structures and sources of meaning, had also become the best hope and proof for firm faith in a new eternal order: the faith in progressive reform or revolution. This, too, is clearly something Wollstonecraft pins her hopes on (although, as we saw, how radical her commitments are remains a matter of debate).

On this point, as we also saw, Darwin, at least in the Origin of Species, is ambiguous, almost as if, knowing he is on very slippery ground, he does not want his readers to recognize the full metaphysical and epistemological implications of his theory of the history of life. Because of this probably deliberate ambiguity, we variously interpreted Darwin as offering either a "progressive" view of evolution, something we could adapt to the Enlightenment's faith in rational progress, or, alternatively, a contingent view of the history of life, a story without progress, final goal, or overall purpose.

Well, in Nietzsche (as in this reading of Darwin) there is no such ambiguity. Darwin made his theory public for the first time in a paper delivered to the Linnean Society in 1858. The paper begins, “All nature is at war, one organism with another, or with external nature.” In the Origin of Species, Darwin is more specific about the character of this war: there must in every case be a struggle for existence, whether of one individual with another of the same species, or with the individuals of distinct species, or with the physical conditions of life. All these assumptions are apparent in Darwin's definition of natural selection: if under changing conditions of life organic beings present individual differences in almost every part of their structure, and this cannot be disputed; if there be, owing to their geometrical rate of increase, a severe struggle for life at some age, season, or year, and this cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of life . . . the individuals best fitted to those conditions will tend to be preserved and to produce offspring similarly characterized. This principle of preservation, or the survival of the fittest, is what Darwin called Natural Selection.

Similarly, individual linguistic symbols are handled by clusters of distributed brain areas and are not produced in any single area. The specific sound patterns of words may be produced in dedicated regions. Nevertheless, the symbolic and referential relationships between words are generated through a convergence of neural codes encoded and decoded in different, independent brain regions. The processes of word comprehension and retrieval result from combinations of simpler associative processes in several separate brain regions, each requiring an active role from the others. The symbolic meaning of words, like the grammar that is essential for the construction of meaningful relationships between strings of words, is an emergent property of the complex interaction of several brain parts.

If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.

With that, let us recall two aspects of biological reality: more complex order in biological reality may be associated with the emergence of new wholes that are greater than their parts, and the entire biosphere is a whole that displays self-regulating behaviour greater than the sum of its parts (the view that all organisms (parts) are emergent aspects of the self-organizing process of life (whole), and that the proper way to understand the parts is to examine their embedded relations to the whole). If this is the case, the emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complex systems, marked by the appearance of a new, profound complementary relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. Nonetheless, it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the organizing properties of biological life.

Another aspect of the evolution of a brain that allowed us to construct symbolic universes based on complex language systems, one particularly relevant for our purposes, concerns consciousness of self. Consciousness of self as an independent agency or actor is predicated on a fundamental distinction or dichotomy between the self and other selves. The self, as it is constructed in human subjective reality, is perceived as having an independent existence and a self-referential character in a mental realm distinctly separate from the material realm. It was, moreover, the assumed separation between these realms that led Descartes to posit his dualism in the effort to understand the nature of consciousness in the mechanistic classical universe.

Every schoolchild learns eventually that Nietzsche was the author of the shocking slogan, "God is dead." However, what makes that statement possible is another claim, even more shocking in its implications: "Only that which has no history can be defined" (Genealogy of Morals). Since Nietzsche was the heir to seventy-five years of German historical scholarship, he knew that there was no such thing as something that has no history. Darwin had, as Dewey points out in that essay we examined, effectively shown that searching for a true definition of a species is not only futile but unnecessary (since the definition of a species is something temporary, something that changes over time, without any permanent, lasting, and stable reality). Nietzsche dedicates his philosophical work to doing the same for all cultural values.

Reflect for a moment on the full implications of this claim. Our study of moral philosophy began with the dialectical exchange that explores the question "What is virtue?", on the firm understanding that until we can settle the issue with a definition of what virtue is, one that eludes all cultural qualification, we cannot effectively deal with morality, except through divine dispensation, unexamined reliance on tradition, skepticism, or relativism (the position of Thrasymachus). The full exploration of what dealing with that question of definition might require takes place in the Republic.

Many texts we read subsequently took up Plato's challenge, seeking to discover, through reason, a permanent basis for understanding knowledge claims and moral values. No matter what the method, as Nietzsche points out in his first section, the belief was always that grounding knowledge and morality in truth was possible and valuable, that the activity of seeking to ground morality was conducive to a fuller good life, individually and communally.

To use a favourite metaphor of Nietzsche's, we can say that previous systems of thought had sought to provide a true transcript of the book of nature. They made claims about the authority of one true text. Nietzsche insists repeatedly that there is no single canonical text; there are only interpretations. So there is no appeal to some definitive version of Truth (whether we search in philosophy, religion, or science). Thus the Socratic quest for some way to tie morality down to the ground, so that it does not fly away, is (and has always been) futile, although the long history of attempts to do so has disciplined the European mind so that we, or a few of us, are ready to move into dangerous new territory where we can put the most basic assumptions about the need for conventional morality to the test and move on "Beyond Good and Evil," that is, to a place where we do not take the universalizing concerns and claims of traditional morality seriously.

Nietzsche begins his critique here by challenging that fundamental assumption: Who says that seeking the truth is better for human beings? How do we know an untruth is not better? What is truth anyway? In doing so, he challenges the sense of purpose basic to the traditional philosophical endeavour. Philosophers, he points out early, may be proud of the way they begin by challenging and doubting received ideas. However, they never challenge or doubt the key notion they all start with, namely, that there is such a thing as the Truth and that it is something valuable for human beings (surely much more valuable than its opposite).

In other words, just as the development of the new science had gradually, and for many painfully and rudely, emptied nature of any certainty about a final purpose, and of any possibility of ever agreeing on the ultimate value of scientific knowledge, so Nietzsche, with the aid of the new historical science (and the proto-science of psychology), is emptying all sources of cultural certainty of their traditional purposiveness and claims to permanent truth, and therefore of their value as we traditionally understood that term. There is thus no antagonism between good and evil, since all versions of each are equally fictive (although some may be more useful for the purposes of living than others).

At this point I do not want to analyse the various ways Nietzsche deals with this question. Nevertheless, I do want to insist upon the devastating nature of his historical critique of all previous systems that have claimed to ground knowledge and morality in a clearly defined truth of things. For Nietzsche's genius rests not only on his adopting the historical critique and applying it to new areas, but much more on his astonishing perspicuity in seeing just how extensive and flexible the historical method might be.

For example, Nietzsche, like some of those before him, insists that value systems are culturally determined. They arise, he insists, as often as not from, or in reaction to, conventional folk wisdom. Yet to this he adds something that to us, after Freud, may be well accepted, but in Nietzsche's hands becomes something shocking: understanding a system of values, he claims, requires us more than anything else to see it as the product of a particular individual's psychological history, a uniquely personal confession. Its relationship to something called the "Truth" has nothing to do with the "meaning" of a moral system; instead we seek its coherence in the psychology of the philosopher who produced it.

Gradually it has become clear what every great philosophy has been: a personal confession of its author, a kind of involuntary and unconscious memoir; and the moral (or immoral) intentions in every philosophy formed the real germ of life from which the whole plant had grown.

The historical critique here unmasks claims to "truth" by way of the history of the life of the person proposing the particular "truth." Systems offering us a route to the Truth are simply psychologically produced fictions that serve the deep (often unconscious) purposes of the individual proposing them. They are therefore what Nietzsche calls "foreground" truths: they do not penetrate into the deep reality of nature, and to fail to see this is to lack "perspective."

Even more devastating is Nietzsche's extension of the historical critique to language itself. Since philosophical systems deliver themselves to us in language, they are shaped by that language and by the history of that language. Our Western preoccupation with the inner self that perceives, wills, and so forth, Nietzsche can present as, in large part, the product of grammar, the result of a language that builds its statements around a subject and a predicate. Without that historical accident, Nietzsche affirms, we would not have committed the error of mistaking for the truth something that is a by-product of our particular culturally determined language system.

He makes the point, for example, that our faith in consciousness is just an accident. If instead of saying "I think," we were to say "Thinking is going on in my body," then we would not be tempted to give the "I" some independent existence (e.g., in the mind) and make large claims about the ego or the inner self. The reason we search for such an entity stems from the accidental construction of our language, which encourages us to use a subject (the personal pronoun) and a verb. The same false confidence in language also makes it easy for us to think that we know clearly what key things like "thinking" and "willing" are; whereas, if we were to engage in even a little reflection, we would quickly realize that the inner processes neatly summed up by these apparently clear terms are anything but clear. His emphasis on the importance of psychology as queen of the sciences underscores his sense of how we need to understand more fully just how complex these activities are, particularly the emotional appetites, before we talk about them as simplistically as philosophers have most recently done.

This remarkable insight enables Nietzsche, for example, to dismiss at one blow, and with cutting contempt, as "trivial" the system Descartes had set up so carefully in the Meditations. Descartes' triviality consists in his failure to recognize how his philosophical system is shaped by the language that imprisons him, the language he uses as an educated European, and by his facile treatment of what thinking is in the first place. The famous Cartesian dualism is not a central philosophical problem but an accidental by-product of grammar, designed to serve Descartes' own particular psychological needs. Similarly, Kant's discovery of "new faculties" Nietzsche derides as just a trick of language - a way of providing what looks like an explanation but is, in fact, as ridiculous as the old notions about medicines putting people to sleep because they have the sleeping virtue.

It should be clear from these examples that very little is capable of surviving Nietzsche's onslaught, for what is there to which we can point that does not have a history or that does not deliver itself to us in a historically developing system of language? After all, our scientific enquiries into all areas of human experience teach us that nothing ever simply is, for everything is always becoming.

We might be tempted, as many have been, to point to the new natural sciences as a counter-instance: does not the development of natural science typify a progressive realization of the truth of the world, or at least a closer and closer approximation to that truth? In fact, it is interesting to think about just how closely Kuhn and Nietzsche might be linked in their views about the relationship between science and the truth of things, or to what extent modern science might not provide the most promising refutation of Nietzsche's assertion that there is no privileged access to a final truth of things (a hotly disputed topic in the last decade or more). Suffice it to say here that for Nietzsche science is just another "foreground" way of interpreting nature. It has no privileged access to the Truth, although he does concede that, compared with other beliefs, it has the advantage of being based on sense experience and is therefore more useful for modern times.

There is one important point to stress in this review of the critical power of Nietzsche's project. It is essential to note that Nietzsche is not taking us to task for having beliefs. We have to have beliefs. Human life must be the affirmation of values; otherwise, it is not life. Nonetheless, Nietzsche is centrally concerned to mock us for believing that our belief systems are True, are fixed, are somehow eternally right by some grounded standard of knowledge. Human life, in its highest forms, must be lived in the full acceptance that the values we create for ourselves are fictions. We, or the best of us, have to have the courage to face the fact that there is no "Truth" upon which to ground anything in which we believe, and yet, in full view of that harsh insight, we must still affirm ourselves with joy. The Truth is not accessible to our attempts at discovery; what thinking human beings characteristically do, in their pursuit of the Truth, is create their own truths.

Now, this last point, like the others, has profound implications for how we think of ourselves, for our conception of the human. For human individuals, like human cultures, also have a history. Each of us has a personal history, and thus we ourselves cannot be defined; we, too, are in a constant process of becoming, of transcending the person we have been into something new. We may like to think of ourselves as defined by some essential rational quality, but in fact we are not. In stressing this, of course, Nietzsche links himself with certain strains of Romanticism, especially with William Blake and with Emerson and Thoreau.

This tradition of Romanticism holds up a view of life that is radically individualistic, self-created, self-generated. "I must create my own system or be enslaved by another man's," Blake wrote. It is also thoroughly aristocratic, with little room for traditional altruism, charity, or egalitarianism. Our lives, to realize their highest potential, should be lived in solitude from others, except perhaps those few we recognize as kindred souls, and our life's efforts must be a spiritually demanding but joyful affirmation of the process by which we maintain the vital development of our imaginative conceptions of ourselves.

It might be appropriate here to contrast this view of the self as a constantly developing entity, without essential permanence, with Marx's view. Marx, too, insists on the transformation of the self and the self's ideas, but for him, as we discussed, that transformation is controlled by the material forces of production, and these in turn are driven by the logic of history. It is not something that the individual takes charge of by an act of individual will, because individual consciousness, like everything else, emerges from and is dependent upon the particular historical and material circumstances, the stage in the development of production, of the social environment in which the individual finds himself or herself.

Nietzsche, like Marx, and unlike later Existentialists, de Beauvoir, for example, recognizes that the individual inherits particular things from the historical moment of the culture (e.g., the prevailing ideas and, particularly, the language and ruling metaphors). Thus, for Nietzsche the individual is not totally free of all context. However, the appropriate response to this is not, as in Marx, the development of class consciousness, a solidarity with other citizens and an imperative to help history along by committing oneself to the class war alongside other proletarians, but, in the best and brightest spirits, a call for a heightened sense of one's individuality, of one's radical separation from the herd, of one's final responsibility to one's own most fecund creativity.

It is vital to note that Nietzsche and the earlier Romantics are not simply saying we should do whatever we like. They all have a sense that self-creation of the sort they recommend requires immense spiritual and emotional discipline - the discipline of the artist shaping his most important original creation according to the stringent demands of his creative imagination. These demands may not be rational, but they are not permissively relativistic in that 1960s sense ("If it feels good, do it"). Permissiveness may often have been attributed to this Romantic tradition, a sort of 1960s "Boogie til you drop" ethic, but that is not what any of them had in mind. For Nietzsche that would simply be a herd response to a popularized and bastardized version of a much higher call to a solitary life lived with the most intense but personal joy, suffering, insight, courage, and imaginative discipline.

This aspect of Nietzsche's thought represents the fullest nineteenth-century European affirmation of a Romantic vision of the self as radically individualistic (at the opposite end of the spectrum from Marx's view of the self as socially and economically determined). It has had a profound and lasting effect in the twentieth century, as we become ever more uncertain about coherent social identities and thus increasingly inclined to look for some personal way to take full charge of our own identities without answering to anyone but ourselves.

Much of the energy and much of the humour in Nietzsche's prose comes from the urgency with which he sees such creative self-affirmation as essential if the human species is not to continue to degenerate. For Nietzsche, human beings are, primarily, biological creatures with certain instinctual drives. The best forms of humanity are those who most excellently express the most important of these biological drives, the "will to power," by which he means the individual will to assert oneself and to create what he or she needs in order to live most fully. Such a "will to power" is beyond morality, because it does not answer to anyone's system of what makes up good and bad conduct. The best and strongest human beings are those who create values for themselves, live by them, and refuse to acknowledge their common links with anyone else, other than other strong people who do the same and are thus their peers.

His surveys of world history have convinced Nietzsche that the development of systems of morality favouring the weak, the suffering, the sick, the criminal, and the incompetent (all of whom he lumps together in that famous phrase "the herd") has turned this basic human drive against human beings. He salutes the genius of those who could accomplish this feat (especially the Jews and Christians), which he sees as the revenge of the slaves against their natural masters. As a result of these centuries-long acts of revenge, human beings are now filled with feelings of guilt, inadequacy, jealousy, and mediocrity, a condition alleviated, if at all, by dreams of being helpful to others and of an ever-expanding democracy, an agenda powerfully served by modern science (which serves to bring everything and everyone down to the same level). Fortunately, however, this ordeal has trained our minds splendidly, so that the best and brightest (the new philosophers, the free spirits) can move beyond the traditional boundaries of morality, that is, "beyond good and evil" (his favourite metaphor for this condition is the tensely arched bow ready to shoot off an arrow).

It is important to stress, as I mentioned above, that Nietzsche does not believe that becoming such a "philosopher of the future" is easy or for everyone. It is, by contrast, an extraordinarily demanding call, and those few capable of responding to it might have to live solitary lives without recognition of any sort. He is demanding an intense spiritual and intellectual discipline that will enable the new spirit to move into territory no philosopher has ever roamed before, a place where there are no comfortable moral resting places and where the individual will probably (almost certainly) have to pursue a profoundly lonely and perhaps dangerous existence (hence the importance of another favourite metaphor of his, the mask). Nevertheless, this is the only way we can counter the increasing degeneration of European man into a practical, democratic, technocratic, altruistic herd animal.



Before placing the analogy on the table, however, I wish to issue a caveat. Analogies can really help to clarify, but they can also mislead us with unduly persuasive oversimplifications. I hope that the analogy I offer will provide such clarity, but not at the price of oversimplifying. So, as you listen to this analogy, you should keep asking yourself: To what extent does this analogy not hold? To what extent does it reduce the complexity of what Nietzsche is saying to something simpler?

The analogy I want to put on the table is the comparison of human culture to a huge recreational complex in which several different games are going on. Outside, people are playing soccer on one field, rugby on another, American football on another, Australian football on another, and so on. In the club house different groups of people are playing chess, dominoes, poker, and so on. There are coaches, spectators, trainers, and managers involved in each game. Surrounding the recreational complex is wilderness.

We might use these games to characterize different cultural groups: French Catholics, German Protestants, scientists, Enlightenment rationalists, European socialists, liberal humanitarians, American democrats, free thinkers, or what have you. The variety represents the rich diversity of intellectual, ethnic, political, and other activities.

The situation is not static, of course. Some games have far fewer players and fans than they once did, and their popularity is shrinking; some are gaining popularity rapidly and are increasingly taking over parts of the territory available. Thus, the traditional sport of Aboriginal lacrosse is but a small remnant of what it was before contact, whereas the democratic capitalist game of baseball is growing exponentially, as is the materialistic science game of archery. They might combine their efforts to create a new game or merge their leagues.

When Nietzsche looks at Europe historically, what he sees is that different games have been going on like this for centuries. He further sees that many participants in any one game have been aggressively convinced that their game is the "true" game, that it corresponds with the essence of games or is a close match to the wider game they imagine going on in the natural world, in the wilderness beyond the playing fields. So they have spent much time producing their rule books and coaches' manuals and making claims about how the principles of their game copy or reveal or approximate the laws of nature. This has promoted, and still promotes, a good deal of bad feeling and fierce argument. In addition, within the group pursuing any one game there have always been all sorts of sub-games: debates about the nature of the activity, refinements of the rules, arguments over the correct version of the rule book or about how to educate the referees and coaches, and so on.

Nietzsche's first goal is to attack this dogmatic claim about the truth of the rules of any particular game. He does this, in part, by appealing to the tradition of historical scholarship that shows that these games are not eternally true but have a history. Rugby began when a soccer player broke the rules, picked up the ball, and ran with it. American football developed out of rugby and has changed and is still changing. Basketball had a precise origin that can be historically located.

Rule books are written in languages that have a history, by people with a deep psychological point to prove: the games are an unconscious expression of the particular desires of inventive games people at a very particular historical moment; these rule writers are called Plato, Augustine, Socrates, Kant, Schopenhauer, Descartes, Galileo, and so on. For various reasons they believe, or claim to believe, that the rules they come up with reveal something about the world beyond the playing field and are therefore "true" in a way that other rule books are not; they have, as it were, privileged access to reality and thus record, to use a favourite metaphor of Nietzsche's, the text of the wilderness.

In attacking such claims, Nietzsche points out that the wilderness bears no relationship at all to any human invention like a rule book. Nature, he reminds us, is "wasteful beyond measure, without purposes and consideration, without mercy and justice, fertile and desolate and uncertain at the same time: imagine indifference itself as a power - how could you live according to this indifference? Living - is that not precisely wanting to be other than this nature?" Because they have no connection with what nature truly is, such rule books are mere "foreground" pictures, fictions dreamed up, reinforced, altered, and discarded for contingent historical reasons. Moreover, the rule manuals often bear a suspicious resemblance to the rules of grammar of a culture: thus, for example, the notion of an ego as a thinking subject, Nietzsche points out, is closely tied to the rules of European languages that insist on a subject and verb construction as an essential part of any statement.

So how do we know what we have is the truth? Why do we want the truth, anyway? People seem to need to believe that their games are true, but why? Might they not be better off if they accepted that their games were false, were fictions, having nothing to do with the reality of nature beyond the recreational complex? If they understood the fact that everything they believe in has a history and that, as he says in the Genealogy of Morals, "only that which has no history can be defined," they would understand that all this proud history of searching for the truth is something quite different from what the philosophers who have written the rule books proclaim.

Furthermore, these historical changes and developments occur accidentally, for contingent reasons, and have nothing to do with the games, or any one game, shaping itself according to some ultimate game or some rule book of games given by the wilderness, which is indifferent to what is going on. There is no basis for the belief that, if we look at the history of the development of these games, we will discover some progressive evolution of games toward some higher type. We may be able, like Darwin, to trace historical genealogies, to construct a narrative, but that narrative does not reveal any clear direction or any final goal or any progressive development. The genealogy of games suggests that history is a record of contingent change. The assertion that there is such a thing as progress is simply another game, another rule added by inventive minds (who need to believe in progress); it bears no relationship to nature beyond the sports complex.

While one is playing on a team, one follows the rules and thus has a sense of what constitutes right and wrong, or good and evil, conduct in the game. All those engaged in the same endeavour share this awareness. To pick up the ball in soccer is evil (unless you are the goalie), and to punt the ball while running in American football is permissible but stupid; in Australian football both actions are essential and right. In other words, different cultural communities have different standards of right and wrong conduct. These standards are determined by artificial inventions called rule books, one for each game. The rules in these books have developed historically; thus, they have no permanent status and no claim to privileged access.

Now, at this point you might be thinking about the other occasion on which I introduced a game analogy, namely, in the discussions of Aristotle's Ethics. For Aristotle also acknowledges that different political systems have different rules of conduct. Still, Aristotle believes that an examination of different political communities will enable one to derive certain principles common to them all, bottom-up generalizations that will then provide the basis for reliable rational judgment about which game is being played better, about what constitutes good play in any particular game, about whether or not a particular game is being conducted well.

In other words, Aristotle maintains that there is a way of discovering and appealing to some authority outside any particular game to adjudicate moral and knowledge claims that arise in particular games or in conflicts between different games. Plato, of course, also believed in the existence of such a standard, but proposed a different route to discovering it.

Now Nietzsche emphatically denies this possibility. Anyone who tries to do what Aristotle recommends is simply inventing another game (we can call it Super-sport) and is not discovering anything true about the real nature of games, because reality (that is, the wilderness surrounding us) is not organized as a game. In fact, he argues, we have created this recreational complex and all the activities that go on in it to protect ourselves from nature (which is indifferent to what we do with our lives), not to copy some recreational rule book that the wilderness reveals. Human culture exists as an affirmation of our opposition to nature, our contrast with it, not as an extension of rules that include both human culture and nature. That is why falsehoods about nature might be a lot more useful than truths, if they enable us to live more fully human lives.

If we think of the wilderness as a text about reality, as the truth about nature, then, Nietzsche claims, we have no access at all to that text. What we do have access to is conflicting interpretations, none of them based on privileged access to a "true" text. Thus, the soccer players may think that they and their game are superior to rugby and the rugby players because soccer more closely represents the surrounding wilderness, but such statements about better and worse are irrelevant. There is nothing rule-bound outside the games themselves. Therefore, all dogmatic claims about the truth of all games or of any particular game are false.

Now, how did this situation come about? Well, there was a time when all Europeans played almost the same game and had done so for many years. Having little or no historical knowledge and sharing the same head coach in the Vatican and the same rule book, they believed that their game was the only one possible and had been around for ever. So they naturally believed that their game was true. They shored up that belief with appeals to scripture, or to eternal forms, or to universal principles, or to rationality, or science, or whatever. There were many quarrels about the nature of ultimate truth, that is, about just how one should tinker with the rule book, about what provided access to God's rules, but there was agreement that such access must exist.

Take, for example, the offside rule in soccer. Without it the game could not continue in its traditional way. Therefore, soccer players see the offside rule as an essential part of their reality, and since soccer is the only game in town and we have no idea of its history (which might, for example, tell us about the invention of the offside rule), the offside rule is easy to interpret as a universal, a requirement for social activity, and we will find and endorse scriptural texts that reinforce that belief. Our scientists will devote their time to linking the offside rule with the mysterious rumblings that come from the forest. From this, one might be led to conclude that the offside rule is a Law of Nature, something that extends far beyond the realm of our particular game into all possible games and, beyond those, into the wilderness itself.

Of course, there were powerful social and political forces (the coaches and trainers and owners of the teams) who made sure that people had lots of reasons for believing in the unchanging verity of present arrangements. So it is not surprising that we find plenty of learned books, training manuals, and locker room exhortations urging everyone to remember the offside rule and to castigate as "bad" those who routinely forget that part of the game. We will also worship those who died in defence of the offside rule. Naturally, any new game that did not recognize the offside rule would be a bad game, an immoral way to conduct oneself. So if some group tried to start a game with a different offside rule, that group would be attacked because they had violated a rule of nature and were thus immoral.

However, for contingent historical reasons, Nietzsche argues, that situation of only one game in town did not last. The recreational unity of the area broke apart, and developments in historical scholarship demonstrated all too clearly that the various attempts to show that one particular game was privileged over all others as the true game were false, dogmatic, trivial, self-deceiving, and so on.

For science has revealed that the notion of a necessary connection between the rules of any game and the wider purposes of the wilderness is simply an ungrounded assertion. There is no way we can make connections between the historically derived fictions in the rule book and the mysterious and ultimately unknowable directions of irrational nature. To carry on science, we have to believe in causes and effects, but there is no way we can prove that this is a true belief, and there is a danger for us if we simply ignore that fact. Therefore, we cannot prove a link between the game and anything outside it. History has shown us, just as Darwin's natural history has shown, that all apparently eternal things have a story, a line of development, a genealogy. Thus, notions like species have no permanent reality - they are temporary fictions imposed for the sake of defending a particular arrangement.

So, God is dead. There is no eternal truth any more, no rule book in the sky, no ultimate referee or international Olympic committee chair. Nietzsche did not kill God; history and the new science did. Nietzsche is only the most passionate and irritating messenger, announcing over the PA system to anyone who will listen that anyone like Kant or Descartes or Newton who thinks that what he or she is doing is grounded in the truth of nature has simply been mistaken.

This insight is obvious to Nietzsche, and he is troubled that no one seems worried about it or even to have noticed it. So he is moved to call the matter to our attention as stridently as possible, because he thinks that this realization requires a fundamental shift in how we live our lives.

For Nietzsche, Europe is in crisis. It has a growing power to make life comfortable and an enormous energy. However, people seem to want to channel that energy into arguing about what amounts to competing fictions and into forcing everyone to follow particular fictions.

Why is this insight so worrying? Well, one point is that dogmatists get aggressive. Soccer players and rugby players who forget what Nietzsche is pointing out can start killing each other over questions that admit of no answer, namely, questions about which group has the true game, which group has privileged access to the truth. Nietzsche senses that dogmatism is going to lead to warfare, and he predicts that the twentieth century will see an unparalleled extension of warfare in the name of competing dogmatic truths. Part of his project is to wake up the people who are intelligent enough to respond to what he is talking about, so that they can recognize the stupidity of killing each other for an illusion that they mistake for some "truth."

Besides that, Nietzsche, like Mill (although in a very different way), is seriously concerned about the possibilities for human excellence in a culture where the herd mentality is taking over, where Europe is developing into competing herds - a situation that is either sweeping up the best and the brightest or stifling them entirely. Nietzsche, like Mill and the ancient pre-Socratic Greeks to whom he constantly refers, is an elitist. He wants the potential for individual human excellence to be liberated from the harnesses of conformity, group competition, and conventional morality. Otherwise, human beings are going to become destructive, lazy, conforming herd animals, using technology to divert themselves from the greatest joys in life, which come only from individual striving and creativity, activities that require one to release one's instincts without keeping them eternally subjugated to a controlling historical consciousness or a conventional morality of good and evil.

What makes this particularly a problem for Nietzsche is that he sees a certain form of game gaining popularity: democratic volleyball. In this game, the rule book insists that all players be treated equally, that no natural authority be given to the best players or to those who understand the nature of quality play. Therefore the mass of inferior players is taking over, the quality of the play is deteriorating, and there are fewer and fewer good volleyball players. This process is being encouraged both by the traditional ethic of "help your neighbour," now often in a socialist uniform, and by modern science. As the mass of inferior players takes over the sport, the mindless violence of their desire to attack other players and take over their games increases, as does their hostility to those who are uniquely excellent (who may need a mask to avoid being recognized).

The hopes for any change in this development are not good. In fact, things might be getting worse. For when Nietzsche looks at all these games going on, he notices certain groups of people, and the prospect is not totally reassuring.

First, there remains the overwhelming majority of people: the players and the spectators, those caught up in their particular sport. These people are, for the most part, continuing as before without reflecting or caring about what they do. They may be vaguely troubled by rumours that their game is not the best, they may be bored with the endless repetition in the schedule, and they may have reconciled themselves to the fact that theirs is not the only game going on, but they would rather not think about it. Or else, stupidly confident that what they are doing is what really matters about human life, is true, they preoccupy themselves with tinkering with the rules, using the new technology to get better balls, more comfortable seats, louder whistles, more brightly painted side lines, more trendy uniforms, tastier Gatorade - all in the name of progress.

Increasing numbers of people are moving into the stands or participating through the newspaper or the television set. Most people are thus, in increasing numbers, losing touch with themselves and their potential as instinctual human beings. They are the herd, the last men, preoccupied with the trivial, unreflectingly conformist because they think, to the extent they think at all, that what they do will bring them something called "happiness." Yet they are not happy: they are in a permanent state of narcotized anxiety, seeking new ways to entertain themselves with the steady stream of distractions that the forces of the market produce: technological toys, popular entertainment, college education, Wagner's operas, academic jargon.

This group, of course, includes all the experts in the game: the cheerleaders whose job it is to keep us focussed on the seriousness of the activity, the sports commentators and pundits whose lives are bound up with interpreting, reporting, and classifying players and contests. These sportscasters are, in effect, the academics and government experts, the John Maddens and Larry Kings and Mike Wallaces of society, those demigods of the herd, whose authority derives from the false notion that what they are dealing with is something other than a social fiction.

There is a second group of people, who have accepted the ultimate meaninglessness of the games in which they were engaged. They have moved to the sidelines, not as spectators or fans, but as critics, as cynics or nihilists, dismissing out of hand all the pretensions of the players and fans, but not affirming anything themselves. These are the souls who, having nothing to will (because they have seen through the fiction of the game and therefore have no motive to play any more), prefer to will nothing in a state of paralysed skepticism. Nietzsche has a certain admiration for these people, but maintains that a life like this, the nihilist on the sidelines, is not a human life.

For, Nietzsche insists, to live as a human being is to play a game. Only in playing a game can one affirm one's identity, can one create values, can one truly exist. Games are the expression of our instinctual human energies, our living drives, what Nietzsche calls our "will to power." So the nihilistic stance, though understandable and, in a sense, courageous, is sterile. For we are born to play, and if we do not, then we are not fulfilling a worthy human function. Yet we also have to recognize that all games are equally fictions, invented human constructions without any connection to the reality of things.

So we arrive at the position of needing to affirm a belief (invent a rule book) which we know to have been invented, to be divorced from the truth of things. To play the best game is to live by rules that we invent for ourselves as an assertion of our instinctual drives, while accepting that the rules are fictions: they matter, we accept them as binding, we judge ourselves and others by them, and yet we know they are artificial. Just as in real life a normal soccer player derives a sense of meaning from the game, affirms his or her value in the game, without ever once believing that the universe is organized according to the rules of soccer or that those rules have any universal validity, so we must commit ourselves to epistemological and moral rules that enable us to live our lives as players, while simultaneously recognizing that these rules have no universal validity.

To base one's life on the creative tensions of the artist engaged with creating a game that meets most eloquently and uncompromisingly the demand of one's own irrational nature - one's wish - is to be most fully free, most fully human.

This call to live the self-created life, affirming oneself in a game of one's own devising, necessarily condemns the highest spirits to loneliness, doubt, insecurity, and emotional suffering, because most people will mock the new game or be actively hostile to it or refuse to notice it, and so on; alternatively, they will accept the challenge but misinterpret what it means and settle for some marketed easy game, like floating down the Mississippi smoking a pipe. Nevertheless, a self-generated game also brings with it the most intense joy, the most playful and creative affirmation of what is most important in our human nature.

It is important to note here that one's freedom to create one's own game is limited. In that sense, Nietzsche is no existentialist maintaining that we have a duty and an unlimited freedom to be whatever we want to be. For the resources at our disposal - the parts of the field still available and the recreational material lying around in the club house - are determined by the present state of our culture. Furthermore, the rules I devise and the language in which I frame them will ordinarily owe a good deal to the present state of the rules of other games and of the language in which those rules are expressed. Although in changing the rules of my game I alter the very reference points I began from, what I have to work with is nonetheless given, allotted to me by my moment in history, even as I create something that will transcend the past. The existing diversions are the materials from which I fashion a new and, perhaps, more effective source of entertainment.

Thus, the new philosopher will transcend the limitations of the existing games and will extend the catalogue of games with the invention of new ones, but that new creative spirit faces certain historical limitations. If this is relativistic, it is not totally so.

The value of this endeavour is not to be measured by what other people think of the newly created game; nor does its value lie in fame, material rewards, or service to the group. Its value comes from the way it enables the individual to manifest certain human qualities, especially the will to power. Whether or not the game attracts other people and becomes a permanent fixture on the sporting calendar - something later citizens can derive enjoyment from or even remember - is irrelevant. For only the accidents of history determine whether the game I invent for myself proves attractive to other people, that is, becomes a source of value for them.

Nietzsche claims that the time is right for such a radically individualistic endeavour to create new games, new metaphors for one's life. For, wrongheaded as many traditional games may have been, like Plato's metaphysical soccer or Kant's version of eight ball, or Marx's materialist chess tournament, or Christianity's stoical snakes and ladders, they have splendidly trained us for the much more difficult work of creating values in a spirit of radical uncertainty. These exertions have trained our imaginations and intelligence in useful ways. So, although those dogmatists were unsound, an immersion in their systems has done much to refine those capacities we most need to rise above the nihilists and the herd.

Now, I have put this analogy on the table to help clarify some central points about Nietzsche. However, the metaphor is not so arbitrary as it may appear, because this very notion of systems of meaning as invented games is a central metaphor of twentieth-century thought, and those who insist upon it as often as not point to Nietzsche as their authority.

So, for example, when certain postmodernists insist that the major reason for engaging in artistic creativity or literary criticism or any form of cultural life is to awaken a spirit of creative play - a spirit far more central than any traditional sense of meaning or rationality or even coherence - we can see the spirit of Nietzsche at work.

Earlier in this century, as we will see in the discussions of early modern art, a central concern was the possibility of recovering some sense of meaning or of recreating or discovering a sense of "truth" of the sort we had in earlier centuries, or, as we will see in the poetry of Eliot, lamenting the collapse of traditional systems of value. Marxists were determined to assist history in producing the true meaning toward which we were inexorably heading. To the extent that we can characterize post-modernism at all simply, we might say that it marks a turning away from such responses to the modern condition and an embrace, for better or worse, of Nietzsche's joyful self-affirmation in the face of the irrationality of the world and the fictive qualities of all that we create to deal with life.

After this rapid and, I hope, useful construction and description of an analogy, only one final point remains: How have we responded, and how are we still responding, to all of this? What impact has this powerful challenge to our most confident traditions had? Well, there is not time here to trace the complex influence of Nietzsche's thought in a wide range of areas. That influence has been immense and continues still. However, I would like to sketch a few points about what may be happening right now.

Here I must stress that I am offering a personal overview, which makes no claim to treat this question comprehensively. Still, any general reading in modern studies of culture suggests that responses to Nietzsche are important and diverse. His stock has been very bullish for the past two decades, at least.

One group we can quickly identify is those who have embraced Nietzsche's critique, who appeal to his writing to endorse their view that the search to ground our knowledge and moral claims in Truth is futile, and that we must therefore recognize the imperative Nietzsche laid before us to re-create our own lives, to come up with new self-descriptions affirming the irrational basis of our individual humanity. This position has been loosely termed Antifoundationalism. Two of its most prominent and popular spokespersons in recent years have been Richard Rorty and Camille Paglia. Within Humanities departments the Deconstructionists (with Derrida as their guru) head the Nietzschean charge.

Antifoundationalists link Nietzsche closely with Kuhn and with Dewey (whose essay on Darwin we read) and sometimes with Wittgenstein, and take central aim at anyone who would claim that some form of enquiry, like science, rational ethics, Marxism, or traditional religion, has any form of privileged access to reality or the truth. The political stance of the Antifoundationalists tends to be radically romantic or pragmatic. Since we cannot ground our faith in any public morality or political creed, politics becomes something far less important than personal development, or else we have to conduct our political life simply on a pragmatic basis, following the rules we can agree on, without according those rules any universal status or grounding in eternal principles. If mechanistic science is something we find, for accidental reasons of history, useful, then we will believe it for now. Thus, Galileo's system was adopted, not because it was true or closer to the truth than what it replaced, but simply because the vocabulary he introduced into our descriptions was something we found agreeable and practically helpful. When it ceases to fulfill our pragmatic requirements, we will gradually change to another vocabulary, another metaphor, another version of a game. History shows that such a change will occur, but how and when it will take place or what the new vocabulary might be - these questions will be determined by the accidents of history.

Similarly, human rights are important, not because there is any rational non-circular proof that we ought to act according to these principles, but simply because we have agreed, for accidental historical reasons, that these principles are useful. Such pragmatic agreements are all we have for public life, because, as Nietzsche insists, we cannot justify any moral claims by appeals to the truth. So we can agree about a schedule for the various games and about distributing the budget among them, and we can, as a matter of convenience, set certain rules for our discussions, but only as a practical requirement of our historical situation, not, that is, by appeal to any divine or rational system that grounds the distribution.

A second response is to reject the Antifoundationalist and Nietzschean claim that no language has privileged access to the reality of things, to assert, that is, that Nietzsche is wrong in his critique of the Enlightenment. Plato's project is not dead, as Nietzsche claimed, but alive and well, especially in the scientific enterprise. We are discovering ever more about the nature of reality. There may still be a long way to go, and nature might be turning out to be much more complex than the early theories suggested, but we are making progress. By improving the rule book we will modify our games so that they more closely approximate the truth of the wilderness.

To many scientists, for example, the Antifoundationalist position is either irrelevant or just plain wrong, an indication that social scientists and humanities types do not understand the nature of science or are suffering a bad attack of sour grapes because of the prestige the scientific disciplines enjoy in the academy. The failure of the social scientists (after generations of trying) to come up with anything approaching a reliable law (like, say, Newton's laws of motion) has shown the pseudoscientific basis of these disciplines, and unmasks their turn to Nietzschean Antifoundationalism as a feeble attempt to justify their presence in the modern research university.

Similarly, Marxists would reject Antifoundationalism as a remnant of aristocratic bourgeois capitalism, an ideology designed to take intellectuals' minds off the realities of history, the truth of things. There is a truth grounded in a materialist view of history, and renouncing it simply diverts intellectuals away from social injustice. No wonder the most ardent Nietzscheans in the university have no trouble getting support from the big corporate interests and their bureaucratic subordinates: the Ford Foundation, the Guggenheim Foundation, and the National Endowment for the Humanities. Within the universities and many humanities and legal journals, some of the liveliest debates go on between the Antifoundationalists and their allies the Deconstructionists under the banner of Nietzsche, and the historical materialists and many feminists under the banner of Marx.

Meanwhile, there has been a revival of interest in Aristotle. The neo-Aristotelians agree with Nietzsche's critique of the Enlightenment rational project - that we are never going to be able to derive a sense of human purpose from scientific reason - but assert that sources of value and knowledge are not simply contingent but arise from communities, and that what we need to sort out our moral confusion is a reassertion of Aristotle's emphasis on human beings, not as radically individual agents with an identity prior to their political and social environment, but as political animals, whose purpose and value are deeply and essentially rooted in their community. A leading representative of this position is Alasdair MacIntyre.

Opposing such a communitarian emphasis, a good deal of the modern Liberal tradition points out that such a revival of traditions simply will not work. The breakdown of the traditional communities and the widespread perception of the endemic injustice of inherited ways cannot be reversed (appeals to Hobbes here are common). So we need to place our faith in the rational liberal Enlightenment tradition and look for universal rational principles: human rights, rules of international morality, justice based on an analysis of the social contract, and so on. An important recent example of such a view is Rawls's famous book A Theory of Justice.

Finally, there are those who again agree with Nietzsche's analysis of the Enlightenment and thus reject the optimistic hopes of rational progress, but who deny Nietzsche's proffered solution. To see life as irrational chaos that we must embrace, and to accept such joyous affirmation as the value-generating activity in our human lives while at the same time recognizing its ultimate meaninglessness, seems to many people a prescription for insanity. What we, as human beings, must have to live a fulfilled human life is an image of eternal meaning. This we can derive only from religion, which provides for us, as it always has, a transcendent sense of order, something that answers to our essential human nature far more deeply than either the Enlightenment faith in scientific rationality or Nietzsche's call to a life of constant metaphorical self-definition.

To read the modern debates over literary interpretation, legal theory, human rights issues, education curriculums, feminist issues, ethnic rights, communitarian politics, or a host of other similar issues is to come repeatedly across the clash of these different positions (and others). To use the analogy I started with, activities on the playing fields are going on more energetically than ever. Right in the middle of most of these debates, and generously scattered throughout the footnotes and bibliographies, Nietzsche's writings are alive and well. To that extent, his ideas are still something to be reckoned with. He may have started by shouting over the intercom system in a way to which no one bothered to attend; now, on many playing fields, the participants and fans are considering and reacting to his analysis of their activities. So Nietzsche today is, probably more than ever before in this century, right in the centre of some vital debates over cultural questions.

You may recall how, in Book Ten of the Republic, Plato talks about the "ancient war between poetry and philosophy." What this seems to mean from the argument is an ongoing antagonism between different uses of language: between language that seeks, above all, denotative clarity - the language of exact definitions and precise logical relationships - and language whose major quality is its ambiguous emotional richness; between, that is, the language of geometry and the language of poetry (or, simply put, between Euclid and Homer).

Another way of characterizing this dichotomy is to describe it as the tension between a language appropriate to discovering the truth and one appropriate to creating it - between, that is, a language that presents itself as an exact description of a given order (or as exact as presently possible) and a language that presents itself as an ambiguous poetic vision of, or analogy to, a natural or cosmic order.

Plato, in much of what we studied, seems clearly committed to a language of the former sort. Central to his course of studies that will produce guardian rulers is mathematics, which is based upon the most exact denotative language we know. Hence the famous inscription over the door of the Academy: "Let no one enter here who has not studied geometry." Underlying Plato's remarkable suspicion of a great deal of poetry, and particularly of Homer, is this attitude to language: Poetic language is suspect because, being based on metaphors (figurative comparisons or word pictures), it is a third remove from the truth. In addition, it speaks too strongly to the emotions and thus may unbalance the often tense equilibrium needed to keep the soul in a healthy state.

One needs to remember, however, that Plato's attitude to language is very ambiguous, because, in spite of his obvious endorsement of the language of philosophy and mathematics, in his own style he is often a poet, a creator of metaphor. In other words, there is a conflict between his strictures on metaphor and his adoption of so many metaphors (the central one of the dramatic dialogue itself is only the most obvious). Many famous and influential passages from the Republic, for example, are not arguments but poetic images or fictional narratives: the Allegory of the Cave, the image of the Sun, the Myth of Er.

Plato, in fact, has always struck me as someone who was deeply suspicious about poetry and metaphor because he responded to it so strongly. Underlying his sometimes harsh treatment of Homer may be the imagination of someone who is all too responsive to it (conversely, Aristotle's more lenient view of poetry may stem from the fact that he did not really feel its effects so strongly). If we were inclined to adopt Nietzsche's interpretation of philosophy, we might be tempted to see in Plato's treatment of Homer and his stress on the dangers of poetic language his own "confession" of weakness. His work is, in part, an attempt to fight his own strong inclination to prefer metaphoric language.

If we accept this characterization of the "ancient war" between two different uses of language, then we might want to ask ourselves why they cannot be reconciled. Why must there be a war? This has, in part, to do with the sorts of questions one wants to ask about the nature of things and with the sorts of answers that the enquiring mind requires. For traditionally there have been some important differences between the language of mathematics or geometry, or a vocabulary that seeks to approximate the denotative clarity of these disciplines, and the language of poetry. The central difference I would like to focus on is the matter of ambiguity.

The terminological conventions of mathematics, and especially of Euclidean geometry, are characterized, above all, by denotative clarity: precise definitions, clear axioms, firm logical links between statements, all of which are designed to produce a rationally coherent structure that will compel agreement among those who take the time to work their way through the system. The intellectual and aesthetic pleasures of Euclid, I would maintain, arise in large part from this. People who want this sort of clarity in their understanding of the world will naturally be drawn to define as acceptable those questions and answers which frame themselves in a language that seeks this sort of clarity.

Poetical language, by contrast, is inherently ironic, ambiguous, elusive. When I move from clear definition to metaphor, that is, to a comparison, or to a narrative that requires interpretation (like the Book of Exodus, for example, or the Iliad) then my statement requires interpretation, an understanding that an appeal to exact definitions and clear rules of logic cannot quickly satisfy. To agree about metaphor requires explanation and persuasion of a sort different from what is required to get people to accept the truths of Euclidean geometry.

For example, if I have trouble with the statement "The interior angles of a triangle add up to two right angles," I can find exact definitions of all the terms, I can review the step-by-step logical process that leads from self-evident first principles to this statement, and I can then understand exactly what it means. I am rationally compelled to agree, provided the initial assumptions and the logical adequacy of the process do not disturb me. I am able to explain the claim to someone else, so that he or she arrives at the same understanding of the original statement about the sum of the interior angles (the compelling logic of this form of language is, of course, the point of the central section of Plato's Meno, Socrates's education of Meno's slave in the Pythagorean Theorem).

A claim like "My love is like a red, red rose," however, is of a different order. I can check the dictionary definitions of all the words, but that by itself will not be enough. How do I deal with the comparison? I can go out and check whether my love has thorns on her legs or whether her hair falls off after a few days standing in water, but that is not going to offer much help, because obviously I am not meant to interpret this statement literally: a comparison, a metaphor, is involved. An understanding of the statement requires that I interpret the comparison: What is the range of associations summoned up by the metaphor that compares my beloved, or my feelings for my beloved, to a common flower?

At this point, if we sit down to discuss the matter, we are likely to disagree, or at least to fail to reach the same common rational understanding that we derived from our study of the first statement concerning the interior angles of the triangle. If we want to agree on the metaphor, then we are going to have to persuade each other, and even then our separate understandings may not be congruent.

We have had direct experience of this in Liberal Studies. When we discussed Euclid, we had nothing to argue about. The discussions focussed on whether or not everyone understood the logical steps involved, the definitions and axioms, and possible alternative logical methods. But no one seriously offered, as an interpretative opinion, the view that the interior angles of a triangle might add up to three right angles or one and a half right angles. If someone had claimed that, then we would have maintained that he or she had failed in some fundamental way to follow the steps in the proofs. By contrast, when we discussed, say, King Lear or The Tempest or Jane Eyre or The Red and the Black, we spent most of our time considering alternative interpretations of particular episodes, and we did not reach any precisely defined shared conclusion. Nor could we have, even if we had spent the entire time debating the issue.

It is no doubt a vast oversimplification to present the issue of language solely in terms of these two diametrically opposed ways of using it, but for the sake of discussion it is a useful starting point. We might go on to observe that, again to make a vast oversimplification, people tend to prefer one use of language over another: Some like their verbal understandings of things clear, precise, and logically sound, so that there is the possibility of a universally recognized meaning with minimum ambiguity, or as close as we can get to such a goal. Others prefer the ambiguity and emotional richness of metaphor, although (or because) the price of such a language is an inherent irony, a multiplicity of meanings, the absence of any simple, shared, precise, final meaning.

The question of the language appropriate to a proper understanding of things is particularly important for a comprehension of the history of Christianity, too, because, as we all know, Christianity takes as its central text a book full of poetry, narrative, and imagery. Faith in what this book "means" or what it "reveals" about the nature of the divinity is a central part of being a Christian. Many urgent and contentious disputes in the history of Christianity have arisen out of the metaphorical nature of this holy text: Since metaphors and metaphorical narratives are inherently ambiguous, they need interpretation, and the question of whose interpretation is decisive in any disagreement becomes a vital concern.

Controlling the text and maintaining the authority to determine interpretations of the holy text were always a central imperative of the medieval Catholic Church, which recognized very clearly and correctly that to give people (even parish priests) access to the Bible would result in interpretative anarchy. Hence the Catholic Church's strict control of the book, its refusal to distribute it widely or to translate it into the common language of the people, and its insistence that the basis for popular sermons should be, not the Bible itself, but the clear and unambiguous official interpretations condoned by the Vatican.

The Church's suspicion of the anarchy that would follow upon any general access to the Bible proved correct once Luther's Reformation made the holy text generally available in translation. Suddenly, the enforced interpretative consensus dissolved, and scores of competing sects arose, each claiming a correct version of the truth derived through an interpretation of the metaphorical constructions in the Bible. An extreme (but not altogether uncommon) example was the war between the followers of Zwingli and the followers of Muntzer, two Protestant leaders, over whether the communion wafer was the body of Christ or symbolized the body of Christ, and over the interpretation of baptism. Many thousands died in the quarrel over these interpretative questions.

Today such quarrels, which involved killing others over the ontological status of a biscuit or bathwater, may seem ridiculous, but the underlying issue is not. An authority that derives from a poetical, metaphorical text must rest, not on that text, but on a particular interpretation of it. Whoever is the spokesperson for the official interpretation has official power. Thus, from this point of view, one can interpret the religious wars of the sixteenth and seventeenth centuries as quarrels over interpretation run amok.

Not surprisingly, the conclusion of the religious wars brought with it a demand to clean up language, to be wary of metaphors and especially of writing that was highly metaphorical, and to place our verbal understandings of the world and ourselves on a more rationally clear basis, in a language more appropriate to such a requirement.

It is no accident that the period following the religious wars (the mid-seventeenth century) marks the beginning of an interest in dictionaries (whose major goal is to promote accuracy of shared denotative meanings), a revival of interest in Euclidean geometry, a developing distrust of political and philosophical arguments based upon scripture, a rising criticism of extravagant rhetorical styles (like those of Shakespeare or John Donne or "enthusiastic" preachers), the beginning of a concerted attempt to understand moral and judicial questions mathematically, and a rising demand for a language as empty of ambiguous metaphor as possible.

We witness this in several writers, above all in Hobbes. As we discussed, Hobbes' major concern in Leviathan is to recommend practices that will minimize a return to the civil chaos of the religious wars and the English Civil War. Hobbes is centrally concerned about language. Over half of Leviathan is concerned with religion, above all with the question of interpretation of scripture. For Hobbes is deeply suspicious of literary interpretation and has a clear preference for the language of geometry, the argumentative style of Euclid - not necessarily because that language provides a true description of the nature of the world (although many people claimed and still claim that it does) but because such deductive clarity - based on clear definitions and fundamental principles of deductive logic - can win wide agreement, can, that is, promote the social harmony essential to political peace and "commodious living."

The reason for this preference in Hobbes seems clear enough. Metaphorical language breeds arguments over interpretations; such arguments breed civil quarrels; civil quarrels lead to a breakdown in public order and foster a return to a state of nature. A different language, one based on the precision of geometry, can foster agreement, because we can all arrive at the same understanding if the definitions are exact and the logic correct.

One attraction of the new science (although there was considerable argument about this) was that it offered an understanding of the world delivered in the most unambiguous way, in the language of mathematics rather than of scripture. Newton's equations, for those who could follow the mathematics, did not promote the sorts of arguments that arose from, say, the text about Joshua making the sun stand still or Moses parting the waters of the Red Sea or God's creating the world in a week. What disagreements or ambiguities Newton's explanation contained could be resolved, and were resolved, by a further application of the method he displayed (in the "normal science," as Kuhn calls it, which took place in the generations after Newton).

Throughout the nineteenth century, the rising success of the new science was delivering on the promise of an exact description of the world. The application of this spirit of empirical observation and precise, unambiguous description to an understanding of history and morality, of the sort offered by Karl Marx, set up the hope of a triumph of the language of philosophy (as defined earlier) over the language of poetry (in spite of the objections of the Romantics).

It was an alluring vision, because it promised to lead, as Hannah Arendt points out, to the end of traditional political argument. Since we would all have a full and shared understanding of the way a just state really does work, we wouldn't need to argue about it any more than we argue about the Pythagorean Theorem. Anyone could govern, since governing, traditionally the most challenging task in human affairs, would be simply a matter of applying known and agreed upon rules, something a technician could do. As Lenin observed, governing would be a job for cooks, because the truths of political life would be expressed in a language comprehensible to anyone, a language that did not require interpretation of any sort.

There was an enormously arrogant confidence or, if we think of classical tragedy, hubris about this, especially among some scientists and social scientists, who firmly believed that the various contentious moral, political, and scientific questions would soon be settled for all time. The future of physics, said A. A. Michelson in 1894, would consist of little more than "adding a few decimal places to results already known."

Nietzsche, as we have already seen, sets his sights firmly against any such confidence that language, any language, can provide an accurate description of the Truth. That is, in the nature of things, impossible, because language is inherently metaphorical; it corresponds to some invented fiction, with a history, a genealogy, a contingent character.

For Nietzsche, the belief that the sort of language developed by Euclid or the new science - with its emphasis on precision and logical clarity - is somehow "true to nature" is, like the belief that any system is true, plainly incorrect. All language is essentially poetry, inherently metaphorical, inherently a fabrication. Those who, like so many scientists, claim that their descriptions of the world are true, or even more accurate than alternative languages, are simply ignorant of the metaphorical nature of all language.

In other words, for Nietzsche there is no privileged access to a final definitive version of life, the world, or anything else, and thus no privileged language for achieving such knowledge. Truth is, in Nietzsche's pregnant phrase, "a mobile army of metaphors," a historical succession of fictions, which does not, as Kant and Marx claimed, reveal any emerging higher truth, like progress or the march to a final utopia or a growing insight into how reality really works. In Nietzsche's view of language there is no final text available to us; there is only interpretation, or, more accurately, an unending series of freshly created interpretations, fresh metaphors.

Thus, as Rorty has observed, Nietzsche is announcing the end of the ancient war between poetry and philosophy by indicating that all we have in language is metaphor. We were mistaken in believing that the language of Euclid was anything other than one more fiction. It is not, and therefore it has no special preeminence as the language most appropriate to a description of reality.

Since there is no privileged language, and since accepting any inherited system of metaphors as true confines one to a herd existence, our central purpose becomes the construction of new metaphors, the assertion of new values in a language we have made ourselves. Thus, central to Nietzsche's vision of how the best human beings must live their lives is the insistence that individuals create for themselves a new language: fresh metaphors, original self-descriptions. To escape the illusions of the past, to release the arrow in flight: these activities are linked to the creative ability to construct new metaphors in one's life and language.

Therefore, under the influence of this idea, a major part of the cultural imperative of the twentieth-century artist has been a craze for originality, which has produced a bewildering succession of styles, schools, and experiments. When we explore Hughes's account, one of the first impressions is the almost overwhelming range of subject matters and styles, the pressure, even within a single artist's life, constantly to invent new perspectives, new self-descriptions, new ways of metaphorically presenting one's imaginative assertions: in Nietzsche's phrase, one's will to power.

The same is true in many other arts: in prose style, in poetry, in architecture, in music, and so on. The influence of Nietzsche on this point (which is, as has been argued, an extension of one stream of Romanticism) has been pervasive. This phenomenon has had some curious results.

First, the constant emphasis on individualist self-assertion through new metaphors has made much art increasingly esoteric, experimental, and inaccessible to the public, for the Nietzschean imperative leaves no room for the artist's having to answer to community values, styles, traditions, language, and so on. Hence the strong tendency of much modern art, fiction, and music to have virtually no public following, to be met with large-scale incomprehension or derision.

This, in turn, has led to a widening split between many in the artistic community and the public. Whereas in a great deal of traditional art the chief aim was to hold up for public contemplation what the artist had to reveal about the nature of his vision (e.g., public statues, church paintings, public musical recitals, drama festivals), in the twentieth century the emphasis on avant-garde originality has increasingly meant that much art is produced for a small coterie who think of themselves as advanced in the Nietzschean sense - emancipated from the herd, because only the privileged can understand and produce such "cutting-edge" metaphors. The strong connections between much "radical" modern art and the intellectual elitism characteristic of extreme right-wing anti-democratic ideologies owe much to Nietzsche's views, since the aristocratic elitism of Nietzsche's aesthetic links it easily enough to political systems seeking some defence of "aristocratic" hierarchies (even if their understanding of Nietzsche is often skimpy at best).

Therefore, as Hughes points out, there has been a drastic decline in high-quality public art. To be popular, in fact, becomes a sign that one is not sufficiently original, a sign that one's language is still too much derived from the patois of the last people. There is still much public art, of course, especially in state architecture and market-driven television, but, as Hughes points out, the achievements in these fields are generally not impressive and may not be improving. Thus the art that commands the attention of many artists these days is increasingly private.

In the universities, Nietzsche has, rightly or wrongly, become the patron saint of those who believe that novelty is more important than coherence or commitment to anything outside a rhetorical display of the writer's own originality. To object that this ethos produces much irrational individualistic spouting is, its defenders point out, simply to miss the point. The creative joy of affirmation through new language is the only game in town, and traditional calls for scientific scholarship or for social criticism on Marx's model are simply reassertions of dogmatism. There are some English departments now, for example, where the writings one has to produce for tenure can include confessional autobiography; in effect, to produce an aphoristic self-description, whether or not it is at all interesting, qualifies one as a serious academic scholar and teacher in some places.

Given that most of society, including those who maintain the traditional scientific and economic endeavour launched in the Enlightenment, pays this sort of talk very little attention, finding most of it hard to grasp, there is a widening gap between much of what goes on in our society and many of its leading artists and intellectuals. The legacy of Nietzsche may cheer them up and, in variously watered-down versions, especially on this side of the Atlantic, clearly gives them license to be strident while declaring their own superiority; but just what he offers by way of helping to cure this dichotomy (if it needs to be cured) is a question worth exploring.

The philosophical problem of reflective thought concerns the conditions of mind reflecting on itself, of consciousness observing its own actions and processes. The dilemma posed by Goedel's theorem regarding self-referential systems can, on this view, be overcome by applying a transcendent method of thinking. This higher thought provides complete knowledge of the system, but only if the individual mind is surpassed and merged with a universal mind that makes reflective thought fully legitimate. To reach true objectivity of mind means leaving the subjective mind behind, and with it the subject-object dualism so inveterate in our ordinary thought.

How is it possible that consciousness can observe itself? How is it possible to think reflectively at all? Can we take a stance outside consciousness to observe it? Can we think about thinking per se? Can we observe thought processes, which are generally performed unconsciously? Is it possible to examine consciousness or mind with consciousness or mind itself?

These questions have often led to exaggerated skepticism, or to a negative verdict concerning the limits of our knowledge of our own mind. Some even say that, because we have no means of investigating consciousness other than consciousness itself, we can never arrive at a complete understanding of consciousness. Advocates of this view come mostly from the sciences. Science tries to objectify its subject matter, so that the investigator can take a stance outside the object and look at it. The means of investigation in the experimental sciences are meant to be independent of the object, although this holds only within classical physics; in quantum physics, measurement cannot be described without the observer as a conscious living being. The crucial point can be stated generally: we can have complete knowledge of an object only when we are independent of it, and outside it, at the moment of observation.

The problem of the completeness of knowledge is encountered as soon as we leave the rigid field of the natural sciences. Any attempt to apply this ideal of completeness to the social sciences, such as psychology and sociology, is inevitably doomed, because in those sciences the object of investigation is identical with the investigator. A psychologist, for example, cannot investigate the psychical processes of another individual in the way a natural scientist investigates physical processes.

First of all, psychic events are not describable in terms of physical properties and therefore seem evasive. Second, we deal here with a much more complex structure than any we meet in the physical world, and this complexity entails a necessary incompleteness. The structure we deal with here is not only more complex; it is what we call consciousness or mind. Here we have an identity of the object and its investigator that was absent in the natural sciences. So are we human beings ever able to know what consciousness and mind really are, or are we left forever in the dark, allowed only partial knowledge?

The answer to this question depends on our current understanding of what consciousness or mind is. If we reduce mind to a set of physical properties, or equate it with emergent properties of the brain (the materialist and epiphenomenalist view), we are led to believe that it will one day be possible to know everything about consciousness. More and more, however, scientists are leaving the terrain of a merely materialist or reductionist view of the mind and concluding that mind is more than the sum of the brain's physical properties, more than a complex structure that emerged from the brain during human evolution. There are many arguments against this reduction of mind.

If we are inclined to believe that consciousness and mind are more than physicalism can describe, we are still left with the question whether we can ever resolve this uncertainty concerning the nature of our mind. The ordinary view of consciousness is that it is local to every individual. If we take this as a fact, we will never be able to explain consciousness completely, for we then run into Goedel's theorem of the incompleteness of any self-referential system.

In brief, Goedel's theorem states that for any sufficiently rich formal system there are certain self-referring assertions about the system that cannot be decided, within the system, as either true or false; they remain insoluble for reasoning confined to the system. The kindred paradox is traditionally attributed to the Cretan Epimenides, whose statement "I am lying" is undecidable as to truth or falsity: if it is true that I am lying, then the statement is false; and if it is false that I am lying, then the statement is true.
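The Epimenides deadlock can be made concrete with a brute-force check: treat the sentence "This statement is false" as asserting its own falsity, and test both candidate truth values for consistency. This is only a toy illustration of the liar paradox, not Goedel's actual construction, and the function name is my own.

```python
def consistent_assignments():
    """Return the truth values that can consistently be assigned to
    the liar sentence, 'This statement is false.'"""
    consistent = []
    for assumed in (True, False):
        # The sentence asserts its own falsity:
        asserted = not assumed
        # Consistency requires the assumed value to match what is asserted.
        if assumed == asserted:
            consistent.append(assumed)
    return consistent

print(consistent_assignments())  # -> [] : neither True nor False is consistent
```

Neither assignment survives the check, which is the sense in which the sentence is undecidable from within the two-valued scheme it belongs to.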

This theorem sets a considerable limitation on our reasoning, and thus on our ability to investigate our own consciousness or mind. It says that we cannot make any conclusive assertions about our mind, since it is the mind itself that asserts something about the mind; the mind therefore cannot decide with certainty or finality whether any statements about mind or consciousness are logically and factually true or false. This point holds only if we uphold the position that, in order to acquire complete and consistent knowledge of something, we have to be outside it, independent of it, at least formally. We can observe cells or atoms, though they are part of our body, because we do not watch cells by means of cells, or atoms by means of atoms. To comprehend a system fully, we have to transcend it by objectifying it; only then is it open to analysis. To understand the physical world we need not undertake strenuous efforts to transcend the system, because as complex living organisms we already stand in a relation of transcendence to inanimate systems. The same applies to biological systems insofar as we human beings are furnished with the highly complex functions of consciousness and so are already in a higher state than a merely biological system, even one such as our own body. This is not true of the next higher system after biology: consciousness and mind. Where is the next higher level, from which we could study the mental system as we studied the physical and biological systems from the standpoint of mind? Is there anything higher than mind? Can we enter a supra-consciousness in order to study normal consciousness?

If there is something like a higher consciousness or a supra-individual mind, then the limitation imposed by Goedel's theorem is overcome, since it would then become possible to decide self-referential assertions with certainty. What is more important, we would be enabled, from this higher point of view, to have complete knowledge of our ordinary consciousness or mind. This would be a revolution in modern science comparable to the Copernican Revolution, relativity theory, or quantum physics: I would say, the greatest revolution of humankind until now. There would be an unlimited expansion of consciousness, of the faculties of mind, and with that of our knowledge of the world and of ourselves.

Contemporary theology is unquestionably in a state of crisis, perhaps the most profound crisis that Christian theology has faced since its creation. This crisis shows itself in three areas: (1) in the relation of dogmatic theology to its biblical ground, a crisis posed by the rise of a modern historical understanding; (2) in the relation of theology to the sensibility and Existenz of contemporary man, a crisis created by the death of God; and (3) in the relation of the community of faith to the whole order of social, political, and economic institutions, a crisis generated by the collapse of Christendom. I intend to focus upon the second of these areas, although it can only be artificially isolated from the other two. Furthermore, I will simply assume the truth of Nietzsche's proclamation of the death of God, a truth that contemporary theology has thus far ignored or set aside. This means that we will understand the death of God as a historical event: God has died in our time, in our history, in our existence. The man who chooses to live in our destiny can neither know the reality of God's presence nor understand the world as his creation; or, at least, he can no longer respond either interiorly or cognitively to the classical Christian images of the Creator and the creation. In this situation, an affirmation of the traditional forms of faith becomes a Gnostic escape from the brute realities of history.

Sören Kierkegaard founded modern theology as we will understand it: founded it not simply in response to the collapse of Christendom, but more deeply in response to the arrival of a reality wholly divorced from the world of faith or, as Kierkegaard saw, a reality created by the negation of faith. While employing the Hegelian categories of the "universal" and the "objective" to understand the new reality created by modern man, Kierkegaard came to understand the modern consciousness as the product of a Faustian choice. Modern philosophy is, as Kierkegaard argued in The Sickness Unto Death, simply paganism, its real secret being: cogito ergo sum, to think is to be; whereas the Christian motto, on the contrary, is: "As thou believest, so art thou; to believe is to be." Here cogito and credo are antithetical acts: modern or "objective" knowledge is not religiously neutral, as so many theologians have imagined; it is grounded in a dialectical negation of faith. Again, to know "objectively" is to exist "objectively." Such existence is the antithetical opposite of the "subjectivity" which Kierkegaard identified as faith. With the birth of objective knowledge, reality appeared as an objective order, and God was banished from the "real" world. However, for Kierkegaard, who was living at a moment when Christian existence was still a possibility, it was not only God but also the concretely existing individual who was banished from the world of the "universal." Already in Fear and Trembling the minor theme that ". . . the individual is incommensurable with reality" threatens the major theme of the "knight of faith," that ". . . subjectivity is incommensurable with reality." So radical is this incommensurability that the existing individual and objective reality now exist in a state of dialectical opposition: to know objectively is to cease to exist subjectively; to exist subjectively is to cease to know objectively.
Moreover, it was precisely Kierkegaard's realization of the radically profane ground of modern knowledge that made possible his creation of a modern Christian mode of dialectical understanding. Existence in faith is antithetically related to existence in objective reality; now faith becomes subjective, momentary, and paradoxical; in short, existence in faith is existence by virtue of the absurd. Why the absurd? Because faith is antithetically related to "objectivity": true faith is radical inwardness or subjectivity; it comes into existence by a negation of objectivity, and it can maintain itself only by a continual process, or repetition, of negating objectivity.

Kierkegaard's dialectical method is fully presented in the Postscript, but it was a method destined never to be fully evolved. Quite simply, the reason this method never reached completion is that it never moved beyond negation, despite his initial effort in Fear and Trembling. Although, biographically, it was Kierkegaard's second conversion or "metamorphosis" that hardened his choice of a negative dialectic - a conversion that led to his resolve to attack the established church, and therefore to abandon philosophy - it is also true that he could limit faith to a negative dialectical movement because he could identify faith and "subjectivity." In the Postscript, subjective thinking is "existential," and ". . . passion is the culmination of existence for an existing individual." Moreover, "passion" is radical inwardness, and true inwardness is "eternity" (an identification first established in The Concept of Dread). To be sure, "eternity" is a subjective and not an objective category, and therefore it can only be reached through inwardness. Nevertheless, the crucial point is that Kierkegaard could identify authentic human existence with existence in faith. Kierkegaard knew the death of God only as an objective reality: indubitably, it was "objectivity" that had by its means brought about the death of God. Accordingly, the negation of objectivity makes faith possible, and since "objectivity" and "subjectivity" are antithetical categories, it follows that faith can be identified with "subjectivity." Today we can see that Kierkegaard could dialectically limit "objectivity" and "subjectivity" to the level of antithetical categories because he still lived in a historical time when subjectivity could be known as indubitably Christian. Less than a hundred years later, it would be little less than blasphemy to identify the truly "existential" with existence in faith. In Kierkegaard's time, however, the death of God had not yet become a subjective reality.
So authentic human existence could be understood as culminating in faith, the movement of faith could be limited to the negation of "objectivity," and no occasion need arise for the necessity of a dialectical coincidence of opposites. Yet no dialectical method can be complete until it leads to this final coincidentia oppositorum.

If radical dialectical thinking was reborn in Kierkegaard, it was consummated in Friedrich Nietzsche: the thinker who, in Martin Heidegger's words, brought an end to the metaphysical tradition of the West. Heidegger's own most important work, Sein und Zeit (1927; in English, Being and Time, 1962), clears the space for the quest for Being, yet only a favoured few have any hope of recapturing oneness with Being, or even of escaping from metaphysics and returning to an authentic communion with independent nature. Moreover, since saying anything about Being as such is so difficult, what in effect replaces it is people's own consciousness of their place in the world, of what the world is for them (their Dasein), which then becomes the topic. Before its central themes became the staple topics of "existentialism," they had a more sinister political embodiment: Heidegger became inclined to a kind of historical fatalism, and is sometimes seen as an heir to the tradition of Dilthey. Heidegger's continuing influence is due at least in part to his criticism of modernity and democracy, which he associates with a lack of respect for nature independent of the uses to which human beings put it. However, he has also been hailed (notably by Rorty) as a proponent of "pragmatism," and, even more remarkably, many French intellectuals have taken him as a prophet of the political left, forgetting that when he writes that "from a metaphysical point of view, Russia and America are the same, the same dreary technological frenzy, the same unrestricted organization of the average person" (An Introduction to Metaphysics, 1953), his contempt for the mass culture of the industrial age springs from nationalistic and middle-class élitism, rather than from any left-wing or egalitarian illusions.

Nietzsche's proclamation of the death of God shattered the transcendence of Being. No longer is there a metaphysical hierarchy or order that can give meaning or value to existing beings (Seiendes); as Heidegger points out, there is now no Sein of Seiendes. Nietzsche was, of course, a prophetic thinker, which means that his thought reflected the deepest reality of his time, and of our time as well; for to exist in our time is to exist in what Sartre calls a "hole in Being," a "hole" created by the death of God. However, the proclamation of the death of God - or, more deeply, the willing of the death of God - is dialectical: a No-saying to God (the transcendence of Sein) makes possible a Yes-saying to human existence (Dasein, total existence in the here and now). Absolute transcendence is transformed into absolute immanence: its positive actualization is the occupancy of the particular Here and Now. Only by way of a post-Christian existential "now-ness" are we drawn into ourselves, and into those powers once bestowed upon a beyond. Consequently, Nietzsche's vision of Eternal Recurrence is the dialectical correlate of his proclamation of the death of God. It should be added that, since death is the cessation of life, it cannot be experienced, nor be a harm, nor a proper object of fear; so, at least, have argued many philosophers, notably Epicurus and Lucretius. A prime consideration has been the symmetry between the state of being dead and the state of not yet being in existence. On the other hand, death is feared, and thought of as a harm (even if it is instant: it is not the process of dying that makes the difference). The alternative, immortality, sounds better until the detail is filled in, when it can begin to sound insupportable. The management of death is one of the topics of "bioethics."
All the same, the assertion that God is dead, but that we have still to vanquish his shadow, first occurs in Nietzsche's The Gay Science. Nietzsche tells of the madman who hails it as the greatest achievement of mankind to have killed God and turned the churches into tombs and sepulchers of God. Nevertheless, people do not listen to the madman, for "the deed is still more distant from them than the most distant stars - and yet they have done it themselves."

. . . Everything goes, everything comes back; eternally rolls the wheel of being. Everything dies, everything blossoms again; eternally runs the year of being. Everything breaks, everything is joined anew; eternally the same house of being is built. Everything parts, everything greets every other thing again; eternally the ring of being remains faithful to itself. In every Now, being begins; round every Here rolls the sphere There. The centre is everywhere. Bent is the path of eternity.

Only when God is dead can Being begin in every Now. Eternal Recurrence is neither a cosmology nor a metaphysical idea: it is Nietzsche's symbol of the deepest affirmation of existence, of Yes-saying. Accordingly, Eternal Recurrence is a symbolic portrait of the truly contemporary man, the man who dares to live in our time, in our history, in our existence. "Being" and "history" have hitherto enslaved man in alienation and guilt. Yet now the contemporary Christian can rejoice, because the Jesus whom our time has discovered is the proclaimer of a gospel that makes incarnate a Kingdom reversing the order of "history" and placing in question the very reality of "being." Perhaps we are at last prepared to understand the true uniqueness of the Christian Gospel.

The history of religions teaches us that Christianity stands apart from the other higher religions of the world on three grounds: (1) its proclamation of the Incarnation, (2) its world-reversing form of ethics, and (3) the fact that Christianity is the only one of the world religions to have evolved or, in some decisive sense, to have initiated a radically profane form of Existenz. Christendom imagined that the Incarnation meant a non-dialectical (or partial) union of time and eternity, of flesh and Spirit; thereby it abandoned a world-reversing form of ethics and ushered in the new age of an absolutely autonomous history (profane Existenz). What we know as the traditional image of the Incarnation is precisely the means by which Christendom laid the ground for the fateful willing of the death of God, for this traditional image made possible the sanctification of "time" and "nature," a final sanctification leading to the transformation of eternity into time. If this process led to the collapse of Christendom, it nevertheless is a product of Christendom, and faith must now face the consequences of a non-dialectical union of time and eternity. Is a form of faith possible that will effect a dialectical union between time and eternity, or the sacred and the profane? Already we can see significant parallels between Nietzsche's vision of Eternal Recurrence and Jesus' proclamation of the Kingdom of God. If we accept "Being begins in every Now" as the deepest symbolic expression of contemporary Existenz, we can see that modern profane existence knows a form of the Incarnation. Like its New Testament original, the profane form of the Incarnation isolates authentic existence from the presence of "being" and "history," and it does so dialectically. The Yes-saying of Eternal Recurrence dawns out of the deepest No-saying, and only when man has been surpassed will "Being" begin in every "Now."
Let us also note that modern Existenz has resurrected a world-reversing form of ethics, e.g., in Marx, Freud, Kafka, and in Nietzsche himself. May the Christian greet our Existenz as a paradoxical way through which he may pass to eschatological faith? Surely this is the problem that the crisis of theology poses for us today.

In the foregoing we have attempted to portray Nietzsche's fundamental thought - the eternal return of the same - in its essential import, in its domain, and in the mode of thinking that is expressly proper to the thought itself, that is, the mode demanded by the thought as such. In that way we have laid the foundation for our own efforts to define Nietzsche's fundamental metaphysical position in Western philosophy. The effort to circumscribe Nietzsche's fundamental metaphysical position shows that we are examining his philosophy with respect to the position assigned it by the history of Western philosophy until now. At the same time, this means that we are expressly transposing Nietzsche's philosophy to that sole position in which it can and must unfold the forces of thought most proper to it, and do so out of an inescapable confrontation with prior Western philosophy as a whole. The fact that during our presentation of the doctrine of return we have come to recognize the region of thought that must necessarily and preeminently take precedence in every fruitful reading and appropriation of Nietzschean thought may be an important gain; yet when viewed with respect to the essential task, namely the characterization of Nietzsche's fundamental metaphysical position, such a gain remains merely provisional.

We can define Nietzsche's fundamental metaphysical position in its principal traits if we ponder the answers he gives to the question concerning the constitution of being and being's way to be. Now, we know that Nietzsche offers two answers regarding being as a whole: being as a whole is will to power, and being as a whole is eternal recurrence of the same. Yet philosophical interpretations of Nietzsche have up to now been unable to grasp these two simultaneous answers as answers - as answers, indeed, that necessarily cohere - because they have not recognized the questions to which these answers pertain; that is to say, prior interpretations have not explicitly developed these questions by way of a thoroughgoing articulation of the guiding question. If, on the contrary, we approach the matter in terms of the developed guiding question, the word "is" in these two major statements - being as a whole is will to power, and being as a whole is eternal recurrence of the same - suggests something different in each case. To say that being as a whole "is" eternal recurrence of the same means that being as a whole is, as being, in the manner of eternal recurrence of the same. The determination "will to power" replies to the question of being with respect to the latter's constitution; the determination "eternal recurrence of the same" replies to the question of being with respect to its way to be. Nonetheless, constitution and manner of being cohere as determinations of the beingness of beings.

Accordingly, in Nietzsche's philosophy will to power and eternal recurrence of the same belong together. It is thus right from the start a misunderstanding - better, an outright mistake of metaphysical proportions - when commentators try to play off will to power against eternal recurrence of the same, and especially when they exclude the latter altogether from metaphysical determinations of being. In truth, the coherence of both must be grasped. Such coherence is essentially defined by the fact that the constitution of beings also specifies in each case their way to be, each determination keeping steadfast in its peculiarity while both bear their proper ground in one another.

What fundamental metaphysical position does Nietzsche's philosophy assume by virtue of its response to the guiding question within Western philosophy, that is to say, within metaphysics?

Nietzsche's philosophy is the end of metaphysics, inasmuch as it reverts to the very commencement of Greek thought, taking up that thought in a way peculiar to Nietzsche's philosophy alone. In this way Nietzsche's philosophy closes the ring formed by the very course of inquiry into being as such and as a whole. Yet to what extent does Nietzsche's thinking revert to the commencement? When we raise this question, we must be clear about one point at the very outset: Nietzsche by no means recovers the philosophy of the commencement in its pristine form. What is at issue is rather the reemergence of the essential fundamental positions of the commencement in a transformed configuration, in such a way that these positions interlock.

What are the decisive fundamental positions of the commencement? In other words, what sorts of answers are given to the as yet undeveloped guiding question, the question of what being is?

The one answer - roughly speaking, it is the answer of Parmenides - tells us that being is. An odd sort of answer, no doubt, yet a very deep one, since that very response determines for the first time, and for all thinkers to come, including Nietzsche, the meaning of "is" and of Being: permanence and presence, that is, the eternal present.

The other answer - roughly speaking, that of Heraclitus - tells us that being becomes. Being is in being by virtue of its permanent becoming, its self-unfolding and eventual dissolution.

To what extent is Nietzsche's thinking the end? That is to say, how does it stretch back to both these fundamental determinations of being in such a way that they come to interlock? Precisely to the extent that Nietzsche argues that being is, as the fixated, the permanent, and that it is in perpetual creation and destruction. Yet being is both, not in an extrinsic way, as one beside another; rather, being is in its very ground perpetual creation (Becoming), while as creation it needs what is fixed. Creation needs what is fixed, first, in order to overcome it, and second, in order to have something that has yet to be fixated, something that enables the creative to advance beyond itself and be transfigured. In Nietzsche's sketch, every being, however shortsighted its perspective, repeats on a small scale, as it were, the tendency of the whole: what all life exhibits is to be observed as a reduced formula for the universal tendency, hence a new grip on the idea "life" as will to power. Instead of "cause and effect," the mutual struggle of things that become, often with the absorption of the opponent; the number of things in becoming is not constant. Hence the inefficacy of the old ideals for interpreting the whole of occurrence, once one has recognized their animal origin and utility, all of them contradicting life.

Inefficacy of the mechanistic theory - it gives the impression of meaninglessness. The entire idealism of humanity until now is about to turn into nihilism - into belief in absolute worthlessness, which is to say, senselessness. Annihilation of ideals, the new desert; the new arts by means of which we can endure it, we amphibians. Presupposition: bravery, patience, no "turning back," no hurrying forward. (Zarathustra, always parodying prior values, out of his own abundance.)

What is this receiving, in which whatever becomes comes to be in being? It is the reconfiguration of what becomes toward its supreme possibilities, a reconfiguration in which what becomes is transfigured and attains subsistence in its very dimensions and domains. This receiving is a creating. To create, in the sense of creation out beyond oneself, is most intrinsically this: to stand in the moment of decision, in which what has prevailed hitherto, our endowment, is directed toward a projected task. When it is so directed, the endowment is preserved. The "momentary" character of creation is the essence of actual, actuating eternity, which achieves its greatest breadth and keenest edge as the moment of eternity in the return of the same. The receiving of what becomes into being - will to power in its supreme configuration - is in its most profound essence something that occurs in the "glance of an eye" as eternal recurrence of the same. The will to power, as constitution of being, is as it is solely on the basis of the way to be which Nietzsche projects for being as a whole: will to power, in its essence and according to its inner possibility, is eternal recurrence of the same.

The aptness of our interpretation is demonstrated unequivocally in that very fragment that bears the title "Recapitulation." After the statement we have already cited - "To stamp Becoming with the character of Being - that is the supreme will to power" - we soon read the following sentence: "That everything recurs is the closest approximation of a world of Becoming to one of Being: peak of the meditation." It would scarcely be possible to say more lucidly how and on what basis the stamping of Being on Becoming is to be understood. Even and precisely during the period when the thought of will to power appears to attain preeminence, eternal recurrence remains the thought that Nietzsche's philosophy thinks without cease.

Nevertheless, we ought to pay close attention to the phrases that follow the god's name in these titles: "Philosophy of eternal return," or simply "philosophos."

Such phrases suggest that what the words Dionysos and Dionysian mean to Nietzsche will be heard and understood only if the "eternal return of the same" is thought. In turn, what eternally recurs as the same, and in such wise is - that is, perpetually presences - has the ontological constitution of "will to power." The mythic name Dionysos will become an epithet thought through in the sense intended by Nietzsche the thinker only when we try to think the coherence of "will to power" and "eternal return of the same"; that means, only when we seek those determinations of Being that from the outset of Greek thought guide all thinking about being as such and as a whole. (Two texts that appeared several years ago treat the matters of Dionysos and the Dionysian: Walter F. Otto, Dionysos: Myth and Cult, 1933; Karl Reinhardt, "Nietzsche's 'Plaint of Ariadne,'" in the journal Die Antike, 1935. Heidegger's original manuscript from the summer of 1937 does not contain these paragraphs. Surprisingly, there is no extant Abschrift or typescript of this course; nor is the typescript that went to the printer in 1961 available for inspection. As a result, the date of the passage remains uncertain. My own surmise is that Heidegger added the note not long after the semester ended; the reference to students' questions and to those two works on Dionysos that had recently been published makes it highly unlikely that the note was added as late as 1960-61. The works Heidegger refers us to are of course still available - and are still very much worth reading: Walter F. Otto, Dionysos: Mythos und Kultus (Frankfurt am Main: V. Klostermann, 1933); Reinhardt's "Nietzsches 'Klage der Ariadne'" appears now in Karl Reinhardt, Vermächtnis der Antike: Gesammelte Essays zur Philosophie und Geschichtsschreibung, edited by Carl Becker (Göttingen: Vandenhoeck & Ruprecht, 1960).)

Nietzsche conjoins in one both of the fundamental determinations of being that emerge from the commencement of Western philosophy, to wit, being as becoming and being as permanence. That "one" is his essential thought - the eternal recurrence of the same.

Yet can we designate Nietzsche's way of grappling with the commencement of Western philosophy as an end? Is it not rather a reawakening of the commencement? Is it not therefore itself a commencement, and hence the very opposite of an end? Nonetheless, Nietzsche's fundamental metaphysical position is the end of Western philosophy. For what is decisive is not that the fundamental determinations of the commencement are conjoined and that Nietzsche's thinking stretches back to the commencement; what is metaphysically essential is the way in which these things become known. The question is whether Nietzsche reverts to the incipient commencement, to the commencement as a commencing. Here our answer must be: no, he does not.

Neither Nietzsche nor any thinker before him - even and especially not the one who before Nietzsche first thought the history of philosophy in a philosophical way, namely, Hegel - reverts to the incipient commencement. Rather, they invariably apprehend the commencement in the sole light of a philosophy in decline from it, a philosophy that arrests the commencement - to wit, the philosophy of Plato. Here we cannot demonstrate this matter in any detail. Nietzsche himself quite early characterized his philosophy as inverted Platonism. However, the inversion does not eliminate the fundamentally Platonic position. Rather, precisely because it seems to eliminate the Platonic position, Nietzsche's inversion represents the entrenchment of that position.

What remains essential, however, is the following: when Nietzsche's metaphysical thinking reverts to the commencement, the circle closes. Yet because it is the already terminated commencement and not the incipient one that prevails there, the circle grows inflexible and loses whatever of the commencement it once had. When the circle closes in this way, it no longer releases any possibilities for essential inquiry into the guiding question. Metaphysics - treatment of the guiding question - is at an end. That seems a bootless, comfortless insight, a conclusion that like a dying tone signals ultimate cessation. Yet it is not so.

Because Nietzsche's fundamental metaphysical position is the end of metaphysics in the designated sense, it performs the grandest and most profound gathering - that is, accomplishment - of all the essential fundamental positions in Western philosophy since Plato and in the light of Platonism. A fundamental metaphysical position remains an actual, actuating fundamental metaphysical position only if it in turn is developed in all its essential forces and regions of dominion in the direction of its counterposition. For thinking that looks beyond it, Nietzsche's philosophy, which is inherently a turning against what lies behind it, must itself become a forward-looking counterposition. Yet since Nietzsche's fundamental position in Western metaphysics constitutes the end of that metaphysics, it can be the counterposition for our other commencement only if the latter adopts a questioning stance toward the initial commencement - as one that in its proper originality is only now commencing. After everything we have said, the questioning intended here can only be the unfolding of a more original inquiry. Such questioning must be the unfolding of the prior, all-determining, and commanding question of philosophy, the guiding question, "What is being?" - out of it and out beyond it.

Nietzsche once chose a phrase to designate what we are calling his fundamental metaphysical position, a phrase that is often cited and is readily taken as a way to characterize his philosophy: amor fati, love of necessity. Yet the phrase expresses Nietzsche's fundamental metaphysical position only when we understand the two words amor and fati - and, above all, their conjunction - in terms of Nietzsche's ownmost thinking, only when we avoid mixing our all-too-familiar notions into it. "Often enough I have asked myself whether I am not more profoundly indebted to the most difficult years of my life than to any of the others. What my innermost nature instructs me is that all necessity - viewed from the heights, in the sense of an economy on a grand scale - is also what is inherently useful: one should not merely put up with it, one should love it . . . Amor fati: that is my innermost nature." Nietzsche repeats the formula twice in Ecce Homo, the first time as the ultimate explanation of his "discernment."

"My formula for greatness in a human being is amor fati - love of necessity: that one does not will to have anything different, not forward, not backward, not in all eternity. Not merely to bear necessity, still less to cloak it - all idealism is mendacity in the face of necessity - but to love it." Nietzsche, Ecce Homo.

Nietzsche had first cited the formula six years earlier, at the outset of Book IV of The Gay Science, as the very essence of affirmation: "I want to learn better how to see the necessity in things as what is beautiful - in that way I will become one of those who make things beautiful. Amor fati: let this be my love from now on."

He had written to Franz Overbeck, also in 1882, that he was possessed of "a fatalistic trust in God," which he preferred to call amor fati. He boasted, "I would stick my head down a lion's throat, not to mention. . . . " The fullest statement concerning amor fati, however, appears in a note from spring-summer. Although the note as a whole merits reprinting and rereading, the following extract contains the essential lines. Nietzsche explains that his experimental philosophy aims to advance beyond nihilism to the very opposite of nihilism:

To a Dionysian yes-saying to the world as it is, without reduction, exception, or selection; it wants eternal circulation - the same things, the same logic and illogic of entanglements. Supreme state to which a philosopher may attain: taking a stand in Dionysian fashion on behalf of existence. A formula for this is amor fati.

Amor - love - is to be understood as will, the will that wants whatever it loves to be what it is in its essence. The supreme will of this kind, the most expansive and decisive will, is the will as transfiguration. Such a will builds and exposes what it wills in its essence to the supreme possibilities of its Being.

The thinker ponders being as a whole and as such - the world as such. Thus with his very first step he always thinks out beyond the world, and so at the same time back to it. He thinks in the direction of that sphere within which a world becomes world. Wherever that sphere is not incessantly called by name, called aloud, but is held silently in the most interior questioning, it is thought most purely and profoundly. For what is held in silence is genuinely preserved; as preserved, it is most intimate and actual. What to common sense looks like "atheism," and has to look like it, is at bottom the very opposite. The same holds wherever the matters of death and nothingness are treated: Being, and Being alone, is thought most deeply - whereas those who ostensibly occupy themselves solely with "reality" flounder in nothingness.

Atheism is the denial of, or lack of belief in, the existence of a god or gods. The term atheism comes from the Greek prefix a-, meaning "without," and the Greek word theos, meaning "deity." The denial of god's existence is also known as strong, or positive, atheism, whereas the lack of belief in god is known as negative, or weak, atheism. Although atheism is often contrasted with agnosticism - the view that we cannot know whether a deity exists or not and should therefore suspend belief - negative atheism is in fact compatible with agnosticism.

Atheism has wide-ranging implications for the human condition. In the absence of belief in god, ethical goals must be determined by secular (nonreligious) aims and concerns, human beings must take full responsibility for their destiny, and death marks the end of a person's existence. As of 1994 there were an estimated 240 million atheists around the world, comprising slightly more than 4 percent of the world's population, including those who profess atheism, skepticism, disbelief, or irreligion. The estimate of nonbelievers increases significantly, to about 21 percent of the world's population, if negative atheists are included.

From ancient times, people have at times used atheism as a term of abuse for religious positions they opposed. The first Christians were called atheists because they denied the existence of the Roman deities. Over time, several misunderstandings of atheism have arisen: that atheists are immoral, that morality cannot be justified without belief in God, and that life has no purpose without belief in God. Yet there is no evidence that atheists are any less moral than believers. Many systems of morality have been developed that do not presuppose the existence of a supernatural being. Moreover, the purpose of human life may be based on secular goals, such as the betterment of humankind.

In Western society the term atheism has been used more narrowly to refer to the denial of theism, in particular Judeo-Christian theism, which asserts the existence of an all-powerful, all-knowing, all-good personal being. This being created the universe, took an active interest in human concerns, and guides his creatures through divine disclosure known as revelation. Positive atheists reject this theistic God and the associated beliefs in an afterlife, a cosmic destiny, a supernatural origin of the universe, an immortal soul, the revealed nature of the Bible and the Qur'an (Koran), and a religious foundation for morality.

Theism, however, is not a characteristic of all religions. Some religions reject theism but are not entirely atheistic. Although the theistic tradition is fully developed in the Bhagavad-Gita, the sacred text of Hinduism, earlier Hindu writings known as the Upanishads teach that Brahman (ultimate reality) is impersonal. Positive atheists reject even the pantheistic aspects of Hinduism that equate God with the universe. Several other Eastern religions, including Theravada Buddhism and Jainism, are commonly believed to be atheistic, but this interpretation is not strictly correct. These religions do reject a theistic God believed to have created the universe, but they accept numerous lesser gods. At most, such religions are atheistic in the narrow sense of rejecting theism.

One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche's theory of the Übermensch, a term translated as "Superman" or "Overman." The Superman was an individual who overcame what Nietzsche termed the "slave morality" of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that "God is dead," or that traditional morality was no longer relevant in people's lives. In this passage, the sage Zarathustra came down from the mountain where he had spent the last ten years alone to preach to the people.
In the Western intellectual world, nonbelief in the existence of God is a widespread phenomenon with a long and distinguished history. Philosophers of the ancient world such as Lucretius were nonbelievers. Even in the Middle Ages (5th through 15th century) there were currents of thought that questioned theist assumptions, including skepticism, the doctrine that true knowledge is impossible, and naturalism, the belief that only natural forces control the world. Several leading thinkers of the Enlightenment (1700-1789) were professed atheists, including French philosopher Baron d'Holbach and French encyclopedist Denis Diderot. Expressions of nonbelief also are found in classics of Western literature, including the writings of English poets Percy Shelley and Lord Byron; English novelist Thomas Hardy; French philosophers Voltaire and Jean-Paul Sartre; Russian author Ivan Turgenev; and American writers Mark Twain and Upton Sinclair. In the 19th century the most articulate and best-known atheists and critics of religion were German philosophers Ludwig Feuerbach, Karl Marx, Arthur Schopenhauer, and Friedrich Nietzsche. British philosopher Bertrand Russell, Austrian psychoanalyst Sigmund Freud, and Sartre are among the 20th century's most influential atheists.

Nineteenth-century German philosopher Friedrich Nietzsche was an influential critic of religious systems, especially Christianity, which he felt chained society to a herd morality. By declaring that “God is dead,” Nietzsche signified that traditional religious belief in God no longer played a central role in human experience. Nietzsche believed we would have to find secular justifications for morality to avoid nihilism - the absence of all belief.

Atheists justify their philosophical position in several different ways. Negative atheists attempt to establish their position by refuting typical theist arguments for the existence of God, such as the argument from first cause, the argument from design, the ontological argument, and the argument from religious experience. Other negative atheists assert that any statement about God is meaningless, because attributes such as all-knowing and all-powerful cannot be comprehended by the human mind. Positive atheists, on the other hand, defend their position by arguing that the concept of God is inconsistent. They question, for example, whether a God who is all-knowing can also be all-good and how a God who lacks bodily existence can be all-knowing.

Some positive atheists have maintained that the existence of evil makes the existence of God improbable. In particular, atheists assert that theism commonly defends the existence of evil by claiming that God desires that human beings have the freedom to choose between good and evil, or that the purpose of evil is to build human character, such as the ability to persevere. Positive atheists counter that justifications for evil in terms of human free will leave unexplained why, for example, children suffer because of genetic diseases or abuse from adults. Arguments that God allows pain and suffering to build human character fail, in turn, to explain why there was suffering among animals before human beings evolved and why human character could not be developed with less suffering than occurs in the world. For atheists, a better explanation for the presence of evil in the world is that God does not exist.

In An Enquiry Concerning Human Understanding (first published in 1748 under a different title), Scottish philosopher David Hume offers several criticisms of religious belief, including an argument against belief in miracles. According to Hume, testimony about the occurrence of miracles should be subjected to rational standards of evidence.

Atheists have also criticized the historical evidence used to support belief in the major theistic religions. For example, atheists have argued that a lack of evidence casts doubt on important doctrines of Christianity, such as the virgin birth and the resurrection of Jesus Christ. Because such events are said to represent miracles, atheists assert that extremely strong evidence is necessary to support their occurrence. According to atheists, the available evidence to support these alleged miracles - from Biblical, pagan, and Jewish sources - is weak, and therefore such claims should be rejected.

Atheism is primarily a reaction to, or a rejection of, religious belief, and thus does not determine other philosophical beliefs. Atheism has sometimes been associated with the philosophical ideas of materialism, which holds that only matter exists; communism, which asserts that religion impedes human progress; and rationalism, which emphasizes analytic reasoning over other sources of knowledge. However, there is no necessary connection between atheism and these positions. Some atheists have opposed communism and some have rejected materialism. Although nearly all contemporary materialists are atheists, the ancient Greek materialist Epicurus believed the gods were made of matter in the form of atoms. Rationalists such as French philosopher René Descartes have believed in God, whereas atheists such as Sartre are not considered to be rationalists. Atheism has also been associated with systems of thought that reject authority, such as anarchism, a political theory opposed to all forms of government, and existentialism, a philosophic movement that emphasizes absolute human freedom of choice. There is, however, no necessary connection between atheism and these positions. British analytic philosopher A. J. Ayer was an atheist who opposed existentialism, while Danish philosopher Søren Kierkegaard was an existentialist who accepted God. Marx was an atheist who rejected anarchism, while Russian novelist Leo Tolstoy, a Christian, embraced anarchism. Because atheism in a strict sense is merely a negation, it does not provide a comprehensive worldview. It is therefore not possible to presume other philosophical positions to be outgrowths of atheism.

Intellectual debate over the existence of God continues to be active, especially on college campuses, in religious discussion groups, and in electronic forums on the Internet. In contemporary philosophical thought, atheism has been defended by British philosopher Antony Flew, Australian philosopher John Mackie, and American philosopher Michael Martin, among others.

Supremely thoughtful utterance does not consist simply in growing taciturn when it is a matter of saying what is properly to be said; it consists in saying the matter in such a way that it is named in nonsaying. The utterance of thinking is a telling silence. Such utterance corresponds to the most profound essence of language, which has its origin in silence. As one in touch with telling silence, the thinker, in a way peculiar to him, rises to the rank of a poet; yet he remains eternally distinct from the poet, just as the poet in turn remains eternally distinct from the thinker. Everything in the hero's sphere turns to tragedy; everything in the demigod's sphere turns to play; and everything in God's sphere turns to . . . to what? "World" perhaps? Erschweigen, an active or telling silence, is what Heidegger elsewhere discusses under the rubric of sigetics (from the Greek sigan, to keep silent). For him it is the proper "logic" of thinking.

In the months before his final descent into madness, Friedrich Nietzsche made the following declaration and prediction: "I know my destiny. Someday my name will be associated with the memory of something tremendous - a crisis like no other on earth, the profoundest collision of conscience, a decision conjured up against everything believed, required, and held sacred up to that time. I am not a man; I am dynamite."

So he was. The man who practised and perfected the art of "philosophizing with a hammer," who pronounced that "God is dead," who called on his readers to follow him in exploring regions "beyond good and evil," who gleefully declared himself the Antichrist, who unconditionally denounced human equality and democracy, who claimed that "a good war hallows any cause," who praised the "blond beast" who "might come away from a revolting succession of murder, arson, rape, [and] torture with a sense of exhilaration and emotional equilibrium, as if it were nothing but a student prank" - this man was indeed explosive. One might even say that today, more than one hundred years after European intellectuals discovered his work, Western culture has yet to come to terms with the fallout produced by the detonation of his most volatile ideas.

In the epilogue to his Nietzsche: A Philosophical Biography, Rüdiger Safranski catalogues the philosopher's influence, and it reads like a comprehensive intellectual history of the twentieth century. The irrationalist vitalism that helped to inspire fascism; artistic movements from symbolism and art nouveau to expressionism and Dada; Ernst Jünger's high-spirited militarism and Heideggerian existentialism; the antimodernism of the Counter-Enlightenment and the critical theory of the postwar Frankfurt School; the vicious surrealism of Georges Bataille and, through him, the varying postmodern irrationalisms of Michel Foucault and Jacques Derrida; the neopragmatic conviction that "truth is an illusion that helps us cope with life" - these and many other radical cultural, intellectual, and political movements descend directly from Nietzsche. They are his legacies to our time.

For some - primarily those who take their intellectual bearings from outside the thoroughly Nietzscheanized humanities departments of the modern university, and the handful of conservative dissenters within them - there will be little in this legacy of atheistic immoderation to admire. However we judge the often decadent productions of twentieth-century high culture, though, Nietzsche himself continues to merit the most serious attention, and not merely because of his considerable influence. The fact remains that Nietzsche is one of the most brilliant philosophers and prose stylists in the history of Western letters. His formidable challenge to so much that so many of us continue to hold dear cannot simply be ignored by thoughtful men and women.

Yet how ought we to approach the task of evaluating Nietzsche's work? The answer is far from clear. For Nietzsche is a deeply contradictory thinker, and glancing at the dozens of books devoted to his thought in the philosophy section of any good bookshop, it can seem that there are, in fact, many Nietzsches. Most scholars have assumed that his work amounts to a defence of radical right-wing politics, but many today think him more compatible with the far left. His books contain many misogynistic passages, but that has not discouraged feminists from claiming to find support for their program in his ideas. Some think his teaching is meant to inspire public action, but many others have seen in his writing an aesthetic call to private cultivation and creativity. Competent scholars have declared that his work is hopelessly incoherent, while at least one leading philosopher has claimed that Nietzsche was the "last great metaphysician in the West." Then there are those who think that Nietzsche's texts can and should mean anything their readers want them to mean. This abundance of interpretations makes any attempt to render an informed and comprehensive judgment of his work exceedingly difficult.

Safranski also is a master of what might be called philosophical narration, drawing on just the right amount of detail from Nietzsche's personal background and historical milieu to provide a context for his philosophy while rarely allowing those details to overshadow the ideas that form the core of Nietzsche's life.

The Nietzsche that emerges from Safranski's study is a man who, from his teenage years until his mental collapse at the age of forty-five, tirelessly devoted his formidable intellect to making sense of the world in light of its intrinsic meaninglessness. The case of Nietzsche thus presents us with the peculiar spectacle of a philosopher who began his intellectual life not from a position of openness to an elusive truth not yet grasped, but rather from an unshakable conviction that he had already found it - and that all of human experience and history had to be reconceived in its light.

Friedrich Wilhelm Nietzsche was born on October 15, 1844 in the small village of Röcken, Germany. His father, Pastor Karl Ludwig Nietzsche, died five years later of "softening of the brain," leaving Nietzsche to be raised (along with his sister Elisabeth) by his mother, Franziska, and two unmarried aunts. The young Nietzsche was both intellectually precocious and astonishingly self-absorbed. He wrote his first philosophical essay, "On the Origin of Evil," at the age of twelve. By thirteen, he had written his first autobiography. He would go on to write eight more over the next ten years, each of them concluding that, in Safranski's words, "his life was exemplary."

Despite Nietzsche's early penchant for self-aggrandizement - a tendency that would mark all of his written work - both he and his family believed for some time that he would follow in his father's footsteps and become a pastor. However, at some point between 1859 and 1861, while Nietzsche attended an elite boarding school, he began to break decisively with his faith. Although he asserted in his 1859 autobiography that "God has guided me safely in everything as a father would his weak little child," by May 1861 he had concluded that the idea of God was, in Safranski's words, "unfathomable," because there was simply "too much intense injustice and evil in the world."

These first tentative steps away from Christianity were quickly followed by others. In an essay composed over his Easter vacation in 1862, the seventeen-year-old Nietzsche would wonder "how our view of the world might change if there were no God, immortality, Holy Spirit, or divine inspiration, and if the tenets of millennia were based on delusions." Safranski explains how this thought quickly generated a series of puzzles that would set Nietzsche's philosophical agenda for the rest of his life: "Might we have been 'led astray by a vision' for such a long time? What kinds of reality are left behind once religious phantasms have been taken away?"

Over the next few years, Nietzsche would wrestle with his suspicion that all received truths are illusory. Although he had planned to study theology and classical philology at the University of Bonn when he arrived there in the fall of 1864, he dropped his concentration in theology after a single semester. By the following summer, he would write to his sister that, although continuing to believe in the comforting tales of their youth would be easy, "the truth is not necessarily in a league with the beautiful and the good." On the contrary, he wrote, the truth can be "detestable and ugly in the extreme."

From this point on, Nietzsche would devote his life to breaking free from - and then reflecting on how people might thrive after having left behind - "the first and last things." Early in his university education, Nietzsche thought of himself as continuing the work of the philosopher Arthur Schopenhauer, whom he described as his "liberator" from dogma and tradition. As Safranski writes, Schopenhauer confirmed Nietzsche's youthful intuition that "the inner nature of the world is based not on reason and intellect but on impulses and dark urges, dynamic and senseless." "True life," Schopenhauer claimed, is pure "will," which "roars behind or underneath it." The challenge was learning how to live in the face of the truth that all apparent meaning and purpose in life is in fact an illusion. At first Nietzsche was intrigued by Schopenhauer's own proposal - the negation of the will, culminating in quasi-Buddhistic peace and passivity - but he soon rejected it on the grounds that it amounted to an attitude of defeat in the face of "nothingness." Nietzsche longed to find a way to love and affirm life despite its meaninglessness.

Such concerns preoccupied his thinking as he continued his education in classical philology under the renowned scholar Friedrich Ritschl, first at Bonn, and then at the University of Leipzig. So impressed was Ritschl by his student that in 1869 he recommended Nietzsche for a professorship at the University of Basel before he had completed either his dissertation or postgraduate thesis - an honour as rare in the nineteenth century as it is today. When Nietzsche finally produced a monograph, The Birth of Tragedy (1872), expectations among his colleagues were thus very high. They did not anticipate that Nietzsche would completely forsake the scholarly norms of the philological profession to write a highly speculative, even revolutionary account of ancient Greek culture inspired largely by his own existential fixations.

All of Nietzsche's work begins from the assumption that, viewed in itself, the world is a meaningless and purposeless chaos. As he would write in his notebooks in 1888, less than a year before his mental breakdown, "For a philosopher to say, 'the good and the beautiful are one,' is infamy; if he goes on to add, 'also the true,' one ought to thrash him. Truth is ugly." In The Birth of Tragedy and the shorter essays he wrote in the early and mid-1870s, Nietzsche proposed that human beings "can become healthy, strong, and fruitful" only when they live within an "enveloping atmosphere" that protects them from having to face this ugly truth without mediation. The enveloping atmosphere consists of protective illusions that come to be taken as truths by those who live within its "horizon," which enables them to "endure without being destroyed." Nevertheless, these second-order truths - or "myths" - must not entirely conceal the meaninglessness they cover over. Rather, the myths must grant partial access to the authentic truth. In its translucence to truth, the mythical horizon allows human beings to both face and "forget" the ugliness in just the right proportions.

The Birth of Tragedy is an interpretation of how the ancient Greeks achieved this balance between truth and untruth more perfectly than any other culture in history and why that balance eventually collapsed; it also suggests how German culture might find an analogous state of equilibrium in modern times. Nietzsche associates the impulses or drives that enabled the Greeks to live and thrive in the partial light of the "terror and horror of existence" with the gods Apollo and Dionysus; he claims that in different but complementary ways they made possible the "continuous redemption" of the "eternally suffering and contradictory" character of the world.

The first of these impulses - the Apollonian - responded to the "mysterious ground of our being" by answering our "ardent longing for illusion." It used beauty and artistry, measure and proportion, to conceal from the Greeks, at least partially, the "substratum of suffering and of knowledge," leaving the individual half-conscious "in his tossing bark, amid the waves" of human existence, in a kind of "waking dream." According to Nietzsche, Sophocles' Antigone, with its stark and yet balanced conflicts between competing duties, stands as a particularly vivid example of the Apollonian in action.

Nevertheless, the full accomplishment of Greek tragedy cannot be grasped by conceiving it entirely in terms of Apollonian dreams. The Apollonian had to be complemented by the contrary Dionysian impulse, which pulled in a very different direction. In a frenzy of intoxication, which Nietzsche associates with the orgiastic violence of the ancient world's Bacchic festivals, the Dionysian at once exposed the "mysterious primordial unity" from which all things spring and produced "complete self-forgetfulness" in individuals. This "mystic feeling of oneness" culminated in a transfiguring experience in which man "feels himself a god [and] walks about enchanted, in ecstasy, like the gods he saw walking in his [Apollonian] dreams."

According to Nietzsche, the Greeks achieved greatness by synthesizing their Apollonian and Dionysian drives in the tragic dramas of Aeschylus and Sophocles. In the greatest of their plays, the Greeks were exposed to the ideal quantities of truth and illusion. In a play such as Oedipus Rex, they were granted a glimpse of the abyss, and yet that glimpse was so artfully presented in "an Apollonian world of images" that their "nausea" was transformed into "notions with which one can live."

Nonetheless, the tragic balance was extremely difficult to maintain. Nietzsche claims that the democratic character, heightened self-consciousness, and "cheerfulness" of Euripides' plays signalled that the tragic age of Greece was ending. Yet the deepest cause of its demise could be found elsewhere, in a "newborn demon" whose approach to life so opposed the Dionysian element in Aeschylean tragedy that it was subsequently banished from the Greek stage, and thereafter from the history of the West. That demon was none other than Socrates.

The middle chapters of The Birth of Tragedy contain what might be the most forceful critique of Socrates since Aristophanes lampooned him in The Clouds during the ancient philosopher's own lifetime. Nietzsche contends that Socrates stood in profound opposition to the "drunken revelry" of tragedy, falsely teaching human beings that "using the thread of causality, [they could] penetrate the deepest abysses of being." Even worse, he taught that "to be beautiful" something must be "intelligible," and that "knowledge is a virtue." The Socratic "theoretical man" lives to uncover the truth at all costs, certain that doing so will be an unambiguous benefit to mankind. While the tragedians had understood the importance of the surface of things, the Socratic philosopher, stubbornly and naively convinced of the goodness of truth, pursues it without restraint - and the results are catastrophic.

In the first formulation of an argument he would greatly refine in his later work, Nietzsche claims that the philosopher's headlong lunge toward the truth ends up exposing the "lies concealed in the essence of logic." When this happens - when the philosopher uncovers the fact that logic is a human construction imposed on the chaos of reality - logic effectively "bites its own tail" and refutes itself. In Nietzsche's view, this is exactly what has happened in the hyperlogical culture of the modern world: The theoretical optimism first defended by Socrates has reached a kind of end point at which human beings begin to sense the awful truth that its most fundamental premises are fictions. They have thus also begun to grasp (in Nietzsche's own work) the wisdom of the pre-Socratic tragedians, who understood, if only half-consciously, that man "needs art as a protection and a remedy" for truth.

That modern man confronts an unprecedented crisis of meaninglessness is a view that Nietzsche would hold throughout his career. What changed was his account of how it came about and his proposal for how we should respond to it. In his early work, he believed that modern man requires a new "beautiful illusion" to replace the crumbling Socratic culture of the West. This new mythology would serve the same function that the plays of Aeschylus and Sophocles did for the Greeks. When it comes to specifying where we might find a new mythology to accomplish this much needed "rebirth of tragedy," Nietzsche announces with considerable bombast that it will arise from the neopagan, mythopoetic operas of Richard Wagner.

Nietzsche had met Wagner in 1868 and quickly developed an intense friendship with the composer and his wife, Cosima von Bülow. Over the next few years, the three shared their innermost cultural and philosophical hopes with one another - so much so, in fact, that by the time of the publication of his first book, Nietzsche could write to a friend that "I have formed an alliance with Wagner. You cannot imagine how close we are now and how fully our plans mesh." Those plans, unveiled in the final third of The Birth of Tragedy, involved nothing less than the satiation of modern man's spiritual "hunger" by giving him a neotragic horizon within which the "significance of life" could be "redeemed" just as it had been for the pre-Socratic Greeks.

It is hardly surprising that Nietzsche's colleagues greeted his book with a mixture of incomprehension and disdain. Expecting the philological prodigy to produce an exercise in meticulous scholarship, they were shocked to discover that he had chosen instead to issue a rallying cry to cultural revolution. What Safranski fittingly describes as Nietzsche's academic "excommunication" began almost immediately. Over the next few years, he divided his time between convalescing from a series of illnesses, teaching a handful of students he deemed "incompetent," and writing brilliant but decidedly nonacademic essays on Schopenhauer, Wagner, David Friedrich Strauss, and "The Benefits and Drawbacks of History for Life." His alienation from academic life finally culminated in his resignation from the University of Basel in 1879. He would spend the next ten years as a nomad travelling throughout Germany, Switzerland, and Italy while devoting himself almost entirely to philosophical reflection and writing.

Although Nietzsche's work continued to show signs of Wagner's influence for several years after the publication of The Birth of Tragedy, the two men gradually drifted apart during the 1870s. As Safranski suggests, Nietzsche eventually became disillusioned with his own early proposals to cure modern disillusionment. While Nietzsche once hoped that Wagner could inspire a renewal of meaning and purpose in modernity, by the end of the decade he had come to consider the composer a purveyor of kitsch who embodied the most decadent aspects of modern culture. It is even possible to say that Nietzsche wrote his next major work, Human, All Too Human (1878), to inure himself against the kinds of hopes that Wagner's music had inspired in him.

If Nietzsche began his earliest philosophical reflections from the assumption that "truth is ugly" - and that all meaning arises out of a creative attempt to cope with this ugliness - the post-Wagner Nietzsche was, if anything, more radical in his refusal to accept any "metaphysical solace." As before, modern man had fallen into meaninglessness, but now there was no possible redemption from it - and this we were supposed to accept as good news. In Human, All Too Human and Daybreak (1881), the tone is cheerful, even Voltairean, as Nietzsche exulted in his own capacity to endure with a smile what Pascal had described as the "horror at the infinite immensity of spaces." Not until 1882's The Joyful Science did Nietzsche begin to display the profundity that characterizes his mature and most justly admired work.

Like its immediate predecessors, The Joyful Science is a collection of numbered aphorisms ranging in length from a few words to several pages. This style, which Nietzsche employs in most of his later works, enables him to shift topics in unpredictable ways. An aphorism on politics might be followed by one on art, science, religion, psychology, German Idealism, newspapers, ancient philosophy, Renaissance history, or modern literature. Sometimes one aphorism builds on another, producing a sustained argument or interpretation; at other times the jarring juxtaposition between them produces a deliberate disorientation. It is amid this chaotic stream of brilliantly disjointed insights and observations that the reader of The Joyful Science comes upon an aphorism entitled "The Madman."

Nietzsche begins this one-and-a-half-page masterpiece of modern disenchantment by describing a madman who "lit a lantern in the bright morning hours, ran to the marketplace, and cried incessantly: 'I seek God! I seek God!'" Then, as those in the square gawk and laugh at the lunatic with embarrassed disapproval, he cries out: "Where is God? . . . I will tell you. We have killed him - you and I. All of us are his murderers. . . . God is dead. God remains dead, and we have killed him."

Nietzsche was hardly the first modern figure to espouse atheism. The most radical writers of the Enlightenment suspected that God was a fiction created by the human mind. G.W.F. Hegel famously declared that modernity is "Good Friday without Easter Sunday." Throughout the nineteenth century, a series of authors, from Ludwig Feuerbach and Karl Marx to Charles Darwin, claimed that religion is a human projection onto a spiritually lifeless world. Nietzsche agreed with this tradition in every respect but one. Whereas most modern atheists viewed their lack of piety as an unambiguous good - as a mark of their liberation from the dead weight of authority and tradition - Nietzsche responded to his insight into the amoral chaos at the heart of the world with considerable pathos. If in Human, All Too Human and Daybreak he flirted with the facile cheerfulness so common to his fellow atheists, beginning with this aphorism of The Joyful Science Nietzsche showed that he now understood with greater depth that the passing of God has potentially devastating consequences for Western civilization. Hence the madman's anguished questions: "But how did we do this? How could we drink up the sea? Who gave us the sponge to wipe away the entire horizon? What were we doing when we unchained this earth from its sun? Where is it moving now? Where are we moving? Away from all suns? Are we not plunging continually? Backward, sideward, forward, in all directions? Is there still any up or down? Are we not straying as through an infinite nothing? Do we not feel the breath of empty space? Has it not become colder? Is not night continually closing in on us?"

If God is dead, then man has completely lost his orientation. There is no human dignity, no equality, no rights, no democracy, no liberalism, and no good and evil. In the light of Nietzsche's insight, a thinker such as Marx looks extraordinarily superficial, railing against religion on the one hand while remaining firmly attached to ideals of justice and equality on the other. He has failed to grasp the simple truth that if God is dead, then nothing at all can be taken for granted - and absolutely everything is permitted.

Still, how could God be dead? The very idea is permeated by paradox. If God is who he claims to be, then it is obviously impossible for him to have "bled to death under our knives," as the madman declares. (Of course Christians believe that, as the Son, God did die at our hands, but Nietzsche intends the madman's statements to apply to the triune God in his monotheistic unity.) God may come to be ignored by a world too fixated on earthly goods to notice him, but clearly he is not vulnerable to human malice or indifference. Unless, of course, he never existed in the first place. Perhaps then it would make a kind of poetic sense to speak of God "dying" once people have ceased to believe in him. Here, man would not simply be responsible for killing God, but also for having given birth to him in the first place. Much of Nietzsche's late work defends just such an interpretation, arguing that Western man is equally responsible for creating and destroying God. The most thorough statement of this view can be found in The Genealogy of Morals (1887), which purports to tell the hidden history of morality from its origins to its collapse in the modern age.

At first, there was chaos. All of Nietzsche's books begin from this assumption. The Genealogy departs from those works in asserting that this primordial anarchy consisted of an unfocused, undifferentiated, and purposeless "will to power" that permeated all things. (Whether the will to power merely animates living creatures or acts as a metaphysical force that pervades all of nature remains unclear.) The pointless, anarchistic violence that characterized the prehistoric world ended when certain individuals began to focus their will to power on the goal of decisively triumphing over others. When they finally succeeded, these victorious individuals, whom Nietzsche dubs "the strong," foisted the first "moral valuation" onto mankind.

In the strong (or "noble") valuation, the good is nothing other than an expression of what the members of the victorious class do and what they affirm. What they do is triumph ruthlessly over the weak by violence. Likewise, the opposite of the good - the bad - is defined by the victorious as weakness, the inability to conquer as the strong do. Nietzsche illustrates the dynamics of the strong valuation with an infamous image of birds of prey devouring defenceless lambs. The birds of prey do not choose to eat the lambs; there is thus no free will involved and nothing blameworthy about their viciousness. It is simply what they do; what they do is the essence of who they are; and who they are serves as the measure of good and bad.

Once the meaning of good and bad has been established, a theory of justice grows up on its basis. Justice for the strong amounted to a simple sense of proportionality: when an individual incurs a debt, he must discharge it, either by repaying it or by submitting to retributive punishment. Nietzsche implies that, for the strong, facing wrongdoing and accepting punishment was largely a matter of honour, so in societies governed by the noble valuation justice was usually meted out quickly and brutally.

The preconditions were now in place for the birth of the gods. In Nietzsche's view, polytheistic religions emerged out of the stories that the strong told themselves about their long-forgotten, prehistoric origins. First, they imagined that the founders of their community were just like them, only stronger - and they developed rituals of sacrifice that enabled them to express gratitude and discharge imagined debts to these founders. Then, as their community grew in power and extent over time, the founders that the strong projected onto the past became even stronger. Eventually, the founders came to be thought of as gods, who served as noble ideals for the strong to emulate as they sought to cultivate their power and cruelty.

According to Nietzsche, it was within this context of divinely sanctioned oppression that an epochal "transvaluation of values" took place. This "slave revolt in morality" began when the weak - out of what Nietzsche calls their ressentiment and their "spirit of revenge" against the strong - started to teach a series of radically new and ingenious ideas. To begin with, they claimed for the first time that there is such a thing as free will, so that the brutal actions of the strong, far from being simply "what they do," came to be understood as the result of a choice. The weak then likewise asserted that their own failure to triumph over the strong was a result of the choice to refrain from such actions, rather than an inability to do so. For the slavish revolutionaries, "sin" tempts all human beings to engage in "evil," and the strong are noteworthy above all else for their decision to embrace and even encourage such behaviour, while the weak define their lives by the struggle to resist it. Thus it comes to be that what was formerly considered bad - namely, weakness - is christened as the highest good, while the formerly good - namely, strength - is transformed into evil.

In this way, the slaves (obviously the Jews and their Christian descendants) fashioned a life-denying "ascetic ideal" to replace the life-affirming valuation of the strong. Along with it comes the notion of a new kind of deity - a God above all other gods, to whom each of us owes a debt - an "original sin" - so great that we are powerless to discharge it on our own, without his gratuitous gift of redeeming grace. Unlike the gods of the strong, who behaved like outsized brutes whose cruelty served as an attainable ideal for the strong to emulate, the God of the slaves is so transcendently good that all attempts to approximate his holiness inevitably fall short. Far from serving as a healthy ideal, then, the ascetic God ends up negating the world and everything in it, including human beings, by his very existence.

The ascetic ideal that gives birth to God is thus much more complicated than the valuation that preceded it. Whereas the noble valuation grew out of and enhanced the self-affirmation of the strong, the slaves embrace an ideal that denigrates pride and seeks to diminish and humiliate the self - and yet it, like all valuations, arises out of life and its will to power. As Nietzsche writes in Human, All Too Human, "Man takes positive pleasure in violating himself with excessive demands and afterwards idolizing this tyrannically demanding something in his soul. In every ascetic morality, man worships one part of himself as a god and in doing so demonizes the other part." In the Genealogy, Nietzsche describes this violent "self-splitting" as an example of how "life" can turn "against life" and, in turn, actually enhance life in new and interesting ways. In seeking to attain the impossible - to become "worthy" of a God whose goodness transcends the world - the ascetic slave directs his own will against itself, and thus creates a wholly new form of cultural life founded on guilt and bad conscience. It is a culture of psychological depravity, as individuals, tutored by a new ruling class of priests, come to despise themselves - never more so than when they begin to experience the least bit of happiness or success.

The priest helps to relieve or avoid the depression caused by the helplessness and hopelessness of those unable to express their will to power more directly. For the herd, turning aggression against the self in the context of the ascetic ideal, rather than leading to depression, actually relieves or helps to avoid that depression. For Nietzsche, "every sufferer instinctively seeks a cause for his suffering; more exactly, an agent; still more specifically, a guilty agent who is susceptible to suffering - in short, some living thing upon which he can, on some pretext or other, vent his affects, actually or in effigy." This, he writes, "constitutes the actual physiological cause of ressentiment, vengefulness, and the like: a desire to deaden pain by means of affects."

While Nietzsche is aware of the course in which the suffering individual avoids looking into himself and finds an enemy on which to vent his affects, the ascetic priest also helps the sufferer to seek the cause of his suffering 'in himself, in some guilt, in a piece of the past'; he 'must understand his suffering as a punishment'. The resulting 'orgy of feeling' (which would include self-pity) is 'the most effective means of deadening dull, paralysing, protracted pain'. Aggressive drives are also satisfied, for the priest and the herd alike, in fantasies and beliefs about the fate of unbelievers and others who oppose them. Nietzsche refers to Aquinas' words: 'The blessed in the kingdom of heaven will see the punishments of the damned, in order that their bliss be more delightful for them'. There is also the more earthbound project of infecting the nobler types with bad conscience.

Nietzsche goes on to discuss how philosophers themselves have taken the ascetic priest as a model, with the ascetic ideal providing a form through which to think, when they posit timeless, changeless, perfect realms of being in relation to which absolute truth is attained and valued absolutely, animal nature is transcended for pure spirit, and death is avoided. Nietzsche regards much of modern scholarship and science (in the broadest sense of the word, as pertaining to the various disciplines of contemporary scholarship) as 'the latest and noblest form' of the ascetic ideal. (The ascetic priest's form of the ascetic ideal is not characterized as noble - perhaps, as White suggests, because of its slave morality and condemnation of sensuality.) Nietzsche writes of philosophers that 'they all pose as if they had discovered and reached their real opinions through the self-development of a cold, pure, divinely unconcerned dialectic'.

According to Nietzsche, the faith in truth - in the absolute value of truth as a metaphysical value - stands or falls with the ascetic ideal. Such faith, with its 'unconscious imperative', involves 'the desire to keep something hidden from oneself': 'Science as a means of self-narcosis: do you have experience of that?' (Of course this does not mean that science must function as self-narcosis.) The 'rigid and unconditional' faith in truth commits one to 'that venerable philosophers' abstinence . . . that desire to halt before the factual, the factum brutum . . . that general renunciation of all interpretation'. Among the things thus kept hidden is that what we regard as knowledge involves interpretation. However, from the moment faith in the God of the ascetic ideal is denied, a new problem arises: that of the value of truth [not the possibility of truth] - 'the value of truth must for once be experimentally called into question'.

In the Genealogy Nietzsche writes of the origin of morality, and of the origins and maintenance of civilization, as inextricably linked with the suppression and regression of the instincts, the turning of these instincts inward, and the 'internalization of man' - with particular emphasis on internalized guilt, including its use for power and control by the likes of the ascetic priest. One way bad conscience develops on the historical plane is through the 'masters' or 'blond beasts of prey' violently imposing form upon the 'slaves', with the result that the 'instinct for freedom [is] forcibly made latent . . . this instinct for freedom pushed back and repressed, incarcerated within and finally able to discharge and vent itself only on itself: that, and that alone, is what the bad conscience is in its beginnings'. In this same passage, Nietzsche even goes so far as to state that this initial 'disaster . . . precluded all struggle and even all ressentiment'.

Both Jung and Freud were well aware of Nietzsche's analyses of the ascetic ideal and the ascetic priest (who differs in some ways from the more secluded anchorite), and of his conception of sublimation, including sublimated sexuality and sublimated will to power. Nietzsche writes both of attempts at the extirpation of the drives and of how, for example, 'in Paul the priest wanted power' and used concepts and symbols to tyrannize; power is sought over oneself as well as over others. Nietzsche also specifically wrote of the 'men and women of sublimated sexuality [who] have made their find in Christianity'. Freud, for his part, points out that the anchorite is not necessarily one who has withdrawn his libido into himself, though pathology (the loss of contact with reality) may result from such an introversion of the libido.

Paul is a 'great man' in Nietzsche's eyes, and there may even be an identification with him, as Nietzsche refers to both Paul's idea and his own eternal recurrence with the phrase 'idea of ideas'. Nietzsche considers Paul a type of ascetic priest who, in the words of Salaquarda, is strong enough 'to channel the "will to nothingness" of the decadents for a time into another direction'. But he also had a hatred of Paul - a hatred of what he felt was Paul's life-negating attitude toward the things of this earth, particularly his attitude toward the 'flesh'. (Or should one say that Paul was not life-affirming in a manner Nietzsche could regard as creative, even though his channelling of the will to nothingness was provisionally life-affirming?)

Nietzsche's account of how the ascetic ideal gives birth to God is ingenious. No less so is his narrative of how it leads to God's death, and to its own self-destruction. Nietzsche's narrative derives much of its shock effect from the fact that it so profoundly contradicts the dominant story of the rise of modern science, in Nietzsche's time and ours. While modern intellectuals typically argue that science arose in opposition to the Church, Nietzsche considers science to represent the "perfection" of the same ascetic ideal that originally gave birth to Christianity.

In Nietzsche's view, science is marked by an unwavering belief in the goodness of truth - and the conviction that one reaches this truth by negating the world in a way that is similar to, but much more radical than, the method employed by Christianity. Christianity claims, for example, that human life is stained by sin, and then negates the former by calling on the righteous to overcome the latter. Science, however, goes much further in its negation of the world, denying the distinction - or at least stressing the similarities - between man and "lower" entities. Biology reduces us to the level of other organisms, chemistry tells us that we consist of the same elements as inanimate objects, and physics underlines the continuity between human beings and all the matter in the universe. In the light of modern science, the differentiation of the human world into kinds of things lacks a foundation in the natural world. Science thus dissolves the very distinctions that make the generation of meaning possible.

Of course most professional scientists do not follow through so rigorously on the implications of their approach to understanding the world, but that is irrelevant to Nietzsche. What matters to him is that an ethic of ascetic reductionism permeates modern Western culture, seeking to tear down all existing cultural structures. One need not work in a laboratory to further the ascetic ideal. On the contrary, as we learn toward the end of the Genealogy, Nietzsche understands his own thought to represent the ultimate consummation of the ascetic ideal - the moment at which "science" unmasks itself as the perfection of the ascetic ideal and, in turn, discovers that this ideal is an arbitrary valuation projected onto reality to derive a sense of purpose in the face of chaos. It is in this way that the ascetic ideal manages both to give birth to and then to kill the Christian God.

Nietzsche thus concludes the Genealogy as he began The Birth of Tragedy, by asserting that, when faced with the ugly truth of things, humans respond by producing illusions that come to be taken as true - until they are eventually exposed for the lies that they are. The Genealogy adds the twist that this very process is driven by the character of the lies in which Western man has believed. That is, the ascetic ideal is a lie that eventually demands its own exposure as a lie. As Nietzsche writes in the penultimate aphorism of the Genealogy, 'unconditional honest atheism . . . is the awe-inspiring catastrophe of two thousand years of training in truthfulness that finally forbids itself the lie involved in belief in God.'

How are we to respond to the complete collapse of the moral valuation that has reigned for two millennia? Nietzsche offers no answer in the Genealogy, which ends as it began - with meaningless chaos. Other works are somewhat more helpful, however. The speech of the "madman" from The Joyful Science, for example, provides a hint. Shortly after declaring that we have killed God, the madman asks a series of rhetorical questions: How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? What water is there for us to clean ourselves? What festivals of atonement, what sacred games shall we have to invent? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it?

Here Nietzsche shows that the death of God requires that we take his place by becoming a race of gods. The meaning of this extraordinary suggestion is elaborated most fully in Thus Spake Zarathustra (1883-1885), easily the most difficult book in Nietzsche's corpus.

In one of the most fascinating passages of his biography, Safranski recounts how Nietzsche first came to the idea of writing Zarathustra by way of a quasi-revelatory experience of inspiration near the Surlej boulder in the Upper Engadine mountains of Switzerland on August 6, 1881. There, on the shores of an alpine lake, Nietzsche felt as though he were "a mere incarnation, a mere mouthpiece, a mere medium of overpowering forces." The religious character of his experience is fitting, for the book he was inspired to write stands as Nietzsche's answer to the Bible. It tells the story of a man named Zarathustra, who, at the age of thirty, "left his home . . . and went into the mountains" for a life of complete solitude. Then, ten years later, he resolves to return to civilization, to share his incomparable wisdom with humanity.

Upon his return he discovers that, although his fellow human beings are oblivious to the fact that "God is dead," His passing has begun to have significant detrimental effects on people. Among the most memorable passages in Zarathustra is the account of the "last man," who, in God's absence, believes he has "invented happiness." This last man no longer strives for anything great, he is too cautious to stand out from the "herd," he consumes various "poisons" to ensure an "agreeable sleep" and an "agreeable death," and he looks back on all of human history with a smug sense of his own superiority. Such a man is one step away from becoming so "poor and domesticated" that he will no longer "shoot the arrow of his longing beyond man." Without a God to look up to, man is on the verge of becoming less than human.

Yet ours is not an age for despair. As Nietzsche's Zarathustra declares as he gazes in disgust at the last man, "The time has come for man to set himself a goal. The time has come for man to plant the seed of his highest hope." The death of God therefore presents, in addition to great dangers, an extraordinary opportunity. While we may very well become subhuman, we may also transform ourselves into something superhuman. Thus does Zarathustra describe his purpose: "I teach you the Overman." Combining the Social Darwinism so common in the late nineteenth century with his own unique brand of anthropo-theological speculation, Nietzsche's Zarathustra announces that "man is something that shall be overcome."

What is the ape to man? A laughing stock or a painful embarrassment. And man shall be just that for the Overman: a laughing stock or a painful embarrassment. You have made your way from worm to man, and much in you is still worm . . . Man is a rope tied between beast and Overman-a rope over an abyss. A dangerous across, a dangerous on-the-way, a dangerous looking-back, a dangerous shuddering and stopping. What is great in man is that he is a bridge and not an end.

Man, then, is poised to evolve into a god through his own efforts. Still, what will make possible such a monumental transformation? The answer lies in the most peculiar doctrine of Nietzsche's philosophy: the "eternal recurrence of the same," which he first (and most lucidly) presented in an allegorical aphorism of The Joyful Science titled "The Greatest Weight." It is worth quoting in its entirety: What if some day or night a demon were to steal after you into your loneliest loneliness and say to you, "This life as you now live it and have lived it, you will have to live once more and innumerable times more. There will be nothing new in it, but every pain and every joy and every thought and sigh and everything immeasurably small or great in your life must return to you, all in the same succession and sequence - even this spider and this moonlight between the trees, and even this moment and I myself. The eternal hourglass of existence is turned upside down again and again, and you with it, speck of dust!" Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus? Or have you once experienced a tremendous moment when you would have answered him, "You are a god, and never have I heard anything more divine." If this thought gained possession of you, it would change you as you are, or perhaps crush you. The question in each and every thing, "Do you desire this once more and innumerable times more?" would lie upon your actions as the greatest weight. Or how well disposed would you have to become to yourself and to life to crave nothing more fervently than this ultimate eternal confirmation and seal?

While this passage makes it sound as if the doctrine of the eternal recurrence serves as a quasi-mythical Kantian postulate-proclaiming that we should act as if it were true despite knowing that it is not - Safranski shows that Nietzsche experienced a kind of euphoria upon discovering what he thought was definitive scientific evidence for its reality and truth. Apparently Nietzsche believed that the finite amount of matter and energy in the universe, combined with its temporal infinity, implied (in Safranski's words) that "all possible events concerning both the animate and the inanimate realms have already taken place, and . . . will recur without end."

No matter whether Nietzsche considered the doctrine to be scientifically verifiable or merely a substitute for the neopagan Wagnerian myths he embraced in his youth, there can be no doubt that he thought of it as the key to man's absolute affirmation of himself and the world - and even (what may amount to the same thing) his own divinization. As the allegory of the demon makes clear, Nietzsche believed that if human beings could come to incorporate the eternal recurrence into their view of the world-to view every second of their lives as a moment worthy of being repeated infinite times, rather than as a prelude to a truer or better world to come-they would, in effect, confer the dignity of the eternal onto this world. As Safranski writes, "All the ecstasy, all the bliss, all the ascensions of feeling, all the hunger for intensity previously projected into the beyond would now be concentrated in the immediate life of the here and now." The doctrine of the eternal recurrence was designed to preserve the powers of transcendence for immanence or, as Zarathustra proclaimed, for remaining "faithful to the earth."

Still, what about the past? Even assuming that we could come to believe in the truth of the eternal recurrence, would we not face the dilemma that, as Martin Heidegger put it, each of us is "thrown" into a world we did not create? Whereas our present and future emerge, at least to some extent, out of our choices, our past is simply given to us. Nevertheless, Nietzsche appears to have believed that once we had affirmed our present and future, affirmation of our past would follow in its wake. After all, if the person I am today is worthy of affirming for all eternity, then the person I once was must be equally worthy, since my past made my present possible. When I begin to think in this way, I not only accept the necessity of my fate and its role in making me who I am, but I also come to love that fate (amor fati). In fact, my affirmation of my own past can expand to such an extent that I begin to act as if I could will it. When that happens, my will comes to fill the entire meaningful universe-past, present, and future. In such a world, man has definitively replaced God. Or, as Nietzsche's Zarathustra puts it in a cryptic but crucially important passage: . . . as creator, guesser of riddles, and redeemer of accidents, I taught them to work on the future and to redeem with their creation all that has been. To redeem what is past in man and to recreate all "it was" until the will says, "Thus I willed it. Thus I shall will it" - this I called redemption, and this alone I taught them to call redemption.

Nietzsche wanted nothing less than to make us totally at home in the world, and he understood that this monumental task could be accomplished only by convincing us that we possess the power to redeem the world all by ourselves, without God.

Nietzsche devoted the final years of his sanity to thinking through the conundrums generated by his antitheological rage. For some time he hoped to present a systematic summary of the views he first sketched in Thus Spake Zarathustra. However, the book he envisioned, tentatively titled The Will to Power: Attempt at a Revaluation of All Values, was not to be. Although he produced a flood of aphoristic and increasingly hyperbolic books between 1886 and 1888-Beyond Good and Evil, The Genealogy of Morals, Twilight of the Idols, The Antichrist, the autobiographical Ecce Homo, and hundreds of pages of notebook entries that have subsequently (and somewhat deceptively) been published as The Will to Power-his greatest work never became a reality.

Yet we have reason to think that Nietzsche came to believe, in his madness, that he had attained the divinization for which he longed. In January 1889, just after his hysterical collapse in the streets of Turin at the sight of a carriage driver beating a horse, and a few weeks before being institutionalized in a psychiatric clinic, Nietzsche wrote a letter to the esteemed historian Jacob Burckhardt in which he declared: "In the end I would much rather be a Basel professor than God; but I have not dared push my private egoism so far as to desist for its sake from the creation of the world." Then there was the letter to a friend, Peter Gast, containing a single sentence: "To my maëstro Pietro: Sing me a new song: the world is transfigured and all the heavens are full of joy. The Crucified." Nietzsche went on to live eleven years in a semicatatonic state, dying in 1900, on the threshold of a century that he had predicted would be one of worldwide war and unprecedented violence.

Ever since he slipped into psychosis, it has been a commonplace for romantic interpreters of Nietzsche's life and thought to conclude that he, like Novalis, Friedrich Hölderlin, and many other modern philosophers, poets, and artists, was driven mad by his own heroic efforts to grasp the truth in all of its horror. For these admirers, Nietzsche deserves to be considered nothing less than a martyr to thinking in its purest form. Besides simply dismissing the theory accepted by most scholars - namely, that an advanced case of syphilis caused Nietzsche's breakdown - such an interpretation accepts without question Nietzsche's conviction that the truth stands radically opposed to the beautiful and the good. Since nearly every word he ever wrote flows from this assumption, any attempt to evaluate Nietzsche's work as a whole must first confront it head on.

Unfortunately, Safranski contributes little to such a confrontation. At some points he offers the banal observation that Nietzsche's books are driven by the will to an unceasing adventure in thinking. At others, he ventures the more creative, but no less unhelpful, suggestion that Nietzsche should have consistently advocated a "bicameral system of culture." Building on an image Nietzsche employed in Human, All Too Human, Safranski suggests that a culture is conceivable on Nietzschean grounds in which "one chamber [is] heated up by the passions of genius while the other [is] cooled off with principles of common sense and balanced out with collective pragmatism." Safranski believes that if Nietzsche had endorsed such a twofold conception of truth - one for radical artist-philosophers, another for moderate practical men - he could have pursued his adventure in thinking without "abandoning the idea of democracy and justice."

As appealing as Safranski's proposal might sound - it would enable us to have, as it were, the best of both worlds - it has many problems. To begin with, as Safranski himself points out, Nietzsche would have judged the attempt to hold on to any form of democratic morality an example of the "feeble compromise [and] indecisiveness" that he associated with the nihilistic "last men." Then there is the more fundamental difficulty that in Nietzsche's thought everything flows from his conviction that the truth is meaningless chaos and flux. For Nietzsche, there simply cannot be two equally valid truths; there can only be the ugly truth and the noble lies that mask it to one degree or another. Although in places Nietzsche does suggest an aristocratic arrangement in which an elite of philosophic geniuses pursues the truth while their slaves go about their lives immersed in illusions, one assumes that this is not what Safranski has in mind.

However, if Safranski's explicitly critical suggestions do not help us to assess Nietzsche's ideas, he does prepare the way for a more philosophically serious reckoning with them by showing so clearly that atheistic meaninglessness is the premise, rather than the conclusion, of Nietzsche's thought. How can we begin to evaluate this Nietzschean antifaith? We find a compelling suggestion in the thought of Nietzsche's great ancient opponent, Socrates. In two of Plato's dialogues, Socrates confronts characters who espouse proto-Nietzschean views. For both Thrasymachus in the Republic and Callicles in the Gorgias, morality has no foundation in the order of things, which is utterly indifferent to human concerns, and justice is nothing other than "the rule of the stronger." The parallels to Nietzsche's view, especially as he articulates it in the Genealogy, are uncanny.

It is instructive that in examining the opinions of these sophistical antimoralists, Socrates does not attempt to refute them using logic or empirical evidence of one kind or another. Rather, he takes what might be called a psychological approach. He attempts to show them that they are less consistently opposed to the good than they profess themselves to be. In the case of Thrasymachus, for example, Socrates' dialectical questioning reveals a fundamental tension in his soul. On the one hand, Thrasymachus believes that "might makes right"-that the victor in a struggle for power demonstrates that he deserves his victory in the very act of winning it. On the other hand, he admires the intelligence and cunning that enable certain individuals to triumph over others-so much so, in fact, that he finds the thought of an unintelligent man winning power to be deeply distasteful. Such a brute would not, in other words, deserve his victory. Thrasymachus, it seems, looks up to something besides mere power. Although he claims to orient his life toward nothing but force and violence, some part of him believes in a greater good.

Might not Nietzsche be vulnerable to a similar refutation? In his case, the tension arises from his reaction to the triumph of the weak over the strong in the slave revolt. On the theory sketched in the Genealogy, there is no basis for opposing their victory. As it was for Thrasymachus, the very act of victory demonstrates that the triumphant party deserves to rule. One might even say that in the act of overpowering the strong, the weak effectively become the strong and thus by that very fact deserving of power.

Yet, Nietzsche reacts to the overthrow of the noble valuation with anything but equanimity. Not only are his works suffused with grand schemes to bring about a rebirth of a brutal aristocratic order in the modern period, but Safranski helpfully notes that, when it came to the public policy debates of his day, Nietzsche invariably sided against the vulnerable. He rejected "shortening the length of the workday from twelve hours a day to eleven in Basel." He was "a proponent of child labour, noting with approval that Basel permitted children over the age of twelve to work up to eleven hours a day." He opposed the education of workers and thought that the only consideration in their treatment should be whether (in Nietzsche's words) their "descendants also work well for our descendants." Nietzsche was a consistent partisan of the strong against the weak in every aspect of life.

The reason Nietzsche took such a brutal position becomes apparent in a passage of Twilight of the Idols (1888) in which he rails against the French Revolution and Jean-Jacques Rousseau's defence of the average person: What I hate [about the French Revolution] is its Rousseauean morality - the so-called "truths" of the Revolution through which it still works and attracts everything shallow and mediocre. The doctrine of equality! There is no more poisonous poison anywhere: for it seems to be preached by justice itself, whereas it really is the end of justice. "Equal to the equal, unequal to the unequal" - that would be the true slogan of justice - and its corollary: "Never make equal what is unequal."

What is astonishing about this passage is not so much what it says about justice; virtually every political philosopher in Western history would have agreed that justice demands "equal to the equal, unequal to the unequal." What is remarkable about the statement is that Nietzsche endorses its truth and concludes on its basis that human equality is fundamentally contrary to justice. One cannot help but conclude that Nietzsche - the man who gleefully proclaimed in a book titled Beyond Good and Evil that it was his goal to "sail right over morality" - was a perverse kind of moralist, concerned above all about the injustice of shallowness and mediocrity. It is even possible to speculate that Nietzsche's visceral hostility to democracy, compassion, peace, equal human dignity, and perhaps even God Himself may have been motivated by a love for a particularly one-sided, profoundly distorted vision of justice. (Our best guide to the half-hidden moral dimension of Nietzsche's thought is Peter Berkowitz's masterful study, Nietzsche: The Ethics of an Immoralist [1995].)

At the very least, despite his incessant denial of any possible foundation for justice in the order of things, Nietzsche could not help but presuppose a standard of justice that the rise of social and political equality had violated. The presence of a similar psychological dynamic in Thrasymachus and several of Socrates' other interlocutors eventually led Plato to conclude that the Idea of the Good exceeds all things - even being itself - "in dignity and power." Aristotle likewise chose to begin the Nicomachean Ethics with the declaration that "every art and inquiry, and similarly every human action and deliberate choice, . . . aims at some good." Of course neither philosopher meant that every human action or idea truly is good; indeed, philosophizing consists in ascending from wrong opinions about the good to knowledge of what it truly is. However, they did mean to suggest that, even when we choose or contemplate evil, we do so at least in part because, somewhere in our souls, we mistake it for the good. For the ancient philosophers, love of the good is coeval with the human condition.

For such a statement, as for so many others, Nietzsche would have nothing but contempt. No doubt he would describe it as yet another example of unwarranted Socratic "optimism." Perhaps it is. Nothing in the texts of the philosophers can prove that the good as they conceived it truly exists-that it is not merely a beautiful illusion we project onto the void. Yet there it is, there it has always been, and there it will remain-our lodestar and magnetic north, determining the shape of human reflection even among those who devote their lives to cutting themselves off from it.

Psychoanalysis is the name applied to a specific method of investigating unconscious mental processes and to a form of psychotherapy. The term refers, as well, to the systematic structure of psychoanalytic theory, which is based on the relation of conscious and unconscious psychological processes.

In 1909 pioneers of the growing psychoanalytic movement assembled at Clark University in Worcester, Massachusetts, to hear lectures by Sigmund Freud, the founder of psychoanalysis. The group included A. A. Brill, Ernest Jones, and Sándor Ferenczi, along with Freud, Clark University President G. Stanley Hall, and the Swiss psychiatrist Carl G. Jung. Freud's visit, the only one he made to the United States, broadened the influence and popularity of psychoanalysis.

In the late 19th century the Viennese neurologist Sigmund Freud developed a theory of personality and a system of psychotherapy known as psychoanalysis. According to this theory, people are strongly influenced by unconscious forces, including innate sexual and aggressive drives.

The technique of psychoanalysis and much of the psychoanalytic theory based on its application was developed by Sigmund Freud. His work concerning the structure and the functioning of the human mind had far-reaching significance, both practically and scientifically, and it continues to influence contemporary thought.

Freud, the founder of psychoanalysis, compared the human mind with an iceberg. The tip above the water represents consciousness, and the vast region below the surface symbolizes the unconscious mind. Of Freud’s three basic personality structures - id, ego, and superego - only the id is totally unconscious.

The first of Freud's innovations was his recognition of unconscious psychic processes that follow laws different from those that govern conscious experience. Under the influence of the unconscious, thoughts and feelings that belong together may be shifted or displaced out of context; two disparate ideas or images may be condensed into one; thoughts may be dramatized in the form of images rather than expressed as abstract concepts. Certain objects may be represented symbolically by images of other objects, although the resemblance between the symbol and the original object may be vague or farfetched. The laws of logic, indispensable for conscious thinking, do not apply to these unconscious mental productions.

Recognition of these modes of operation in unconscious mental processes made possible the understanding of such previously incomprehensible psychological phenomena as dreaming. Through analysis of unconscious processes, Freud saw dreams as serving to protect sleep against disturbing impulses arising from within and related to early life experiences. Thus, unacceptable impulses and thoughts, called the latent dream content, are transformed into a conscious, although no longer immediately comprehensible, experience called the manifest dream. Knowledge of these unconscious mechanisms permits the analyst to reverse the so-called dream work, that is, the process by which the latent dream is transformed into the manifest dream, and through dream interpretation, to recognize its underlying meaning.

A basic assumption of Freudian theory is that unconscious conflicts involve instinctual impulses, or drives, that originate in childhood. As these unconscious conflicts are recognized by the patient through analysis, his or her adult mind can find solutions that were unattainable to the immature mind of the child. This depiction of the role of instinctual drives in human life is a unique feature of Freudian theory.

According to Freud's doctrine of infantile sexuality, adult sexuality is a product of a complex process of development, beginning in childhood, involving a variety of body functions or areas (oral, anal, and genital zones), and corresponding to various stages in the relation of the child to adults, especially to parents. Of crucial importance is the so-called Oedipal period, occurring at about four to six years of age, because at this stage of development the child for the first time becomes capable of an emotional attachment to the parent of the opposite sex that is similar to the adult's relationship to a mate; the child simultaneously reacts as a rival to the parent of the same sex. Physical immaturity dooms the child's desires to frustration and his or her first step toward adulthood to failure. Intellectual immaturity further complicates the situation because it makes children afraid of their own fantasies.

The conflicts occurring in the earlier developmental stages are no less significant as a formative influence, because these problems represent the earliest prototypes of such basic human situations as dependency on others and relationship to authority. Also basic in moulding the personality of the individual is the behaviour of the parents toward the child during these stages of development. The fact that the child reacts not only to objective reality but also to fantasy distortions of reality, however, greatly complicates even the best-intentioned educational efforts.

The effort to clarify the bewildering number of interrelated observations uncovered by psychoanalytic exploration led to the development of a model of the structure of the psychic system. Three functional systems are distinguished that are conveniently designated as the id, ego, and superego.

The first system refers to the sexual and aggressive tendencies that arise from the body, as distinguished from the mind. Freud called these tendencies Triebe, which literally means “drives,” but which is often inaccurately translated as “instincts” to indicate their innate character. These inherent drives claim immediate satisfaction, which is experienced as pleasurable; the id thus is dominated by the pleasure principle. In his later writings, Freud tended more toward psychological rather than biological conceptualization of the drives.

How the conditions for satisfaction are to be brought about is the task of the second system, the ego, which is the domain of such functions as perception, thinking, and motor control that can accurately assess environmental conditions. In order to fulfill its function of adaptation, or reality testing, the ego must be capable of enforcing the postponement of satisfaction of the instinctual impulses originating in the id. To defend itself against unacceptable impulses, the ego develops specific psychic means, known as defence mechanisms. These include repression, the exclusion of impulses from conscious awareness; projection, the process of ascribing to others one's own unacknowledged desires; and reaction formation, the establishment of a pattern of behaviour directly opposed to a strong unconscious need. Such defence mechanisms are put into operation whenever anxiety signals a danger that the original unacceptable impulses may reemerge.

An id impulse becomes unacceptable, not only as a result of a temporary need for postponing its satisfaction until suitable reality conditions can be found, but more often because of a prohibition imposed on the individual by others, originally the parents. The totality of these demands and prohibitions constitutes the major content of the third system, the superego, the function of which is to control the ego in accordance with the internalized standards of parental figures. If the demands of the superego are not fulfilled, the person may feel shame or guilt. Because the superego, in Freudian theory, originates in the struggle to overcome the Oedipal conflict, it has a power akin to an instinctual drive, is in part unconscious, and can give rise to feelings of guilt not justified by any conscious transgression. The ego, having to mediate among the demands of the id, the superego, and the outside world, may not be strong enough to reconcile these conflicting forces. The more the ego is impeded in its development because of being enmeshed in its earlier conflicts, called fixations or complexes, or the more it reverts to earlier satisfactions and archaic modes of functioning, known as regression, the greater is the likelihood of succumbing to these pressures. Unable to function normally, it can maintain its limited control and integrity only at the price of symptom formation, in which the tensions are expressed in neurotic symptoms.

A cornerstone of modern psychoanalytic theory and practice is the concept of anxiety, which institutes appropriate mechanisms of defence against certain danger situations. These danger situations, as described by Freud, are the fear of abandonment by or the loss of the loved one (the object), the risk of losing the object's love, the danger of retaliation and punishment, and, finally, the hazard of reproach by the superego. Thus, symptom formation, character and impulse disorders, and perversions, as well as sublimations, represent compromise formations-different forms of an adaptive integration that the ego tries to achieve through more or less successfully reconciling the different conflicting forces in the mind.

So we are faced with a choice. We can follow Nietzsche in refusing to take our philosophical bearings from prephilosophical intimations of the good. Or we can place our trust in those intimations, allowing the good reflected in common opinion and experience to serve as an indication-however tentative, ambiguous, or elusive-of what is likely to be true. Attempt to break from the good or accept that, in the end, it is the only orientation we have: those are the options. After a very long century of delusional and bloody experiments against the good, we do not lack for reasons to turn our backs on Nietzsche's truth.

Friedrich Nietzsche (1844-1900) is a writer whom professional philosophers have often discounted because he is too literary, and whom professors of literature have passed over because he is too much of an abstract thinker. Nietzsche's work, in other words, defies the usual academic division of labour. Yet Nietzsche has played a prominent role in Western thought. He was one of the most brilliant and profound forerunners of such movements as psychoanalysis and existentialism, and a radical critic of Western philosophy and culture. His observations and ideas inspired scores of twentieth-century intellectuals - including those who misconstrued his work as a proto-fascist doctrine.

Nietzsche explicitly refused to develop a philosophical system, suggesting that individual, seemingly disconnected analyses, expressed in short, well-written aphorisms, are more honest and insightful than lengthy, scholarly treatises that tend to bend everything to fit a comprehensive theory. Thus, his writings may sometimes be contradictory. The way to read Nietzsche is not to figure out how the many things he wrote can be fitted into one abstract formula, a procedure that would be more appropriate for such philosophers as Plato or Kant, but to consider every one of his pieces as a thought experiment that succeeds or fails on its own.

The Victorian conventionalism and complacency of Nietzsche's cultural environment made any success during his relatively short lifetime impossible. Nietzsche even had to pay for the publication of some of his books. He did not become truly famous until the reigning pretenses of European culture were headed for their massive breakdown at the time of World War I. Not until the mechanized brutality of the "Great War" had shattered the vain image that Europeans had of themselves as stalwarts of an advanced civilization of their own making did practised readers begin to gauge the seriousness of Nietzsche's critical analysis of the Western mind. As a boy, Nietzsche showed a precocious facility with edifying speech and was nicknamed "the little pastor." As an adolescent he attended Pforta, one of Germany's elite schools, where he received a solid classical education. His subsequent university training was in classical languages and ancient culture, and he became a professor of Greek at the exceptionally young age of twenty-four. For about ten years he taught Greek at the University of Basel in Switzerland, during which time he developed a profound admiration for and friendship with the composer Richard Wagner (a friendship that in later years turned into passionate enmity).

Around 1879 Nietzsche became chronically ill, and he retired from teaching on a moderate pension. During the following ten years he wrote in rapid succession all the books that were to make him posthumously famous - Human, All Too Human; The Joyful (or Gay) Science; Thus Spoke Zarathustra; Beyond Good and Evil; The Case of Wagner; Twilight of the Idols; The Antichrist; and more. During most of this time he was physically in miserable condition. He had no permanent residence, preferring to take up temporary lodgings in various places in the Swiss Alps or on the Mediterranean coast. He grew increasingly critical, and even contemptuous, of Germany - at a time when Germany was trying to rival such world powers as England and France by way of aggressive military and industrial expansion.

Because of his near-blindness his doctors advised him to abstain from reading, but he kept reading and writing at a furious pace as best as he could. He fought his insomnia with opiates and Veronal, drugs that upset his delicate stomach. He frequently suffered from migraine headaches that prompted him to experiment with further drugs. He endured, partly by choice, a loneliness that included both social isolation and a general misunderstanding of his philosophical ideas even among friends. At the beginning of 1889 he suffered a major collapse that resulted in permanent insanity - possibly the consequence of untreated syphilis. His sister, as his guardian during the last years of his life and as his self-appointed literary executor, seems to have destroyed and falsified part of Nietzsche's unpublished writings, thereby furthering the dubious interpretation of her brother's work that made the philosopher look like a forerunner of Nazism.

The predominant view in Western philosophy is that human beings have a twofold nature - a nature composed of a mind and a body - and that there is a constant struggle between the two components, a struggle that ideally results in the dominance of the mind over the body. It is this dualistic view of human nature that Nietzsche combats throughout his philosophy; he calls this dualism "childish." The mature view, according to him, consists in recognizing that mind and body are one, and that what is called the mind or the soul is nothing but one aspect of the basically physical nature of human beings - one of the many organs that the body needs to survive, and one that is thus under the overall control of the physical organism as a whole. In the chapter called "Of the Despisers of the Body" in Thus Spoke Zarathustra Nietzsche writes: "I am body and soul" - that is what a child would say. Why shouldn't one talk like a child? Still, the adult, the knowledgeable person, says: "I am body thoroughly, and nothing beside it. Soul is nothing but a word for something belonging to the body."

The body is one great reason, a plurality with one sense, a war and a peace, a herd and a herder. A tool of your body is also your little reason, my brother, which you call "spirit" - a little tool and toy of your great reason.

In your body he resides; he is your body. There is more reason in your body than in your best wisdom. And who knows to what end your body needs your best wisdom?

The body, in other words, is not the external tool of an inner sovereign mental ego, but an organism within which the ego, or mind, plays a merely subordinate role. To think that the mind is, or can even be, in control of the body is one of the most preposterous illusions that Western civilization has produced, according to Nietzsche, and one of the most damaging as well. It is one of the crucial assumptions that would have to be overcome in a future, healthier civilization.

By saying that the true self is the body, Nietzsche does not, of course, deny that people have feelings, inner experiences, and ideas, or that they can be very intelligent or thoughtful. He also does not deny that people can overcome such things as physical cowardice, laziness, or fatigue by an exertion of their wills, or that they can achieve impressive feats even if their physical condition happens to be more of an obstacle than a help. Such self-mastery is, indeed, one of the most fruitful manifestations of what Nietzsche elsewhere calls "the will to power." Nevertheless, what superficially looks like a mind operating on its own, or like a victory of the mind over the body, is ultimately nothing but a demonstration of the power of the body as a whole - the temporary strength of one part of the organism over another part. (The body is, after all, a complex, multi-faceted organism, a herd and a herder, a war and a peace.) For if one asks for the ultimate source of such things as will power, determination, or whatever else goes into the cause of extraordinary achievements, one will have to explore those aspects of a person that are sometimes called the unconscious - aspects that are intricately connected with the physiological and neurological functions of the organism. Will power, keen intelligence, or any other mental phenomenon is not the emanation of some nonphysical entity "inside" the body, but the expression of a dynamic and multifaceted physical being.

Nietzsche had been brought up within a Christian tradition according to which the body was something base, filthy, or evil, and in many theological analyses the very centre of depravity and sin. Throughout his adult years Nietzsche was in revolt against this tradition, and the rehabilitation of the body as something wonderful and as a source of great achievements can be described as one of the principal aims of Nietzsche's entire philosophy. Nietzsche therefore eagerly embraced much of the scientific materialism that developed during the 19th century. During the previous two centuries scientific progress had primarily been made in the area of physics, the science of inanimate bodies. The 19th century, by contrast, was the period of rapid advances in chemistry and biology. Darwin's publication of The Origin of Species (1859) and The Descent of Man (1871) was only one of the significant scientific developments that took place during Nietzsche's lifetime, although it turned out to be a particularly spectacular and controversial one.

Among the reading public philosophical materialism became something like a popular movement that at times found expressions that were rather pithy and polemical. Ludwig Büchner, for example, submitted that the brain produces thoughts in the way kidneys produce urine, and he coined the famous ditty "Man is what he eats" (which in the original German is a pun: "Der Mensch ist was er isst"). Nietzsche's materialism was generally far more sophisticated than that, and he was also rather critical of Darwin. His thinking, however, fit into and was part of a broad trend that characterized much of 19th century culture. Impressed by what modern biologists and physiologists kept finding out about the intricate workings of the body, Nietzsche observed:

Whoever has even an idea of the body - of its many simultaneously working systems, of its many cooperative and conflicting activities, of the delicacy of its balances, etc. - will judge that all consciousness is, by comparison, something poor and narrow; he will judge that no mind will even remotely be adequate for what the mind would have to do here, and perhaps that the wisest teacher of morality and legislator would have to feel clumsy and amateurish in the midst of this turmoil of duties and rights. How little becomes conscious to us! How often does this little lead to error and confusion! Consciousness is a tool, after all, and considering how much and what great things are accomplished without it, one cannot call it the most necessary or the most admirable of tools. On the contrary, there is, perhaps, no organ that is so poorly developed, or one that works with so many flaws. It is just the youngest organ, still in its infancy - let's pardon its childish pranks. (To these pranks belong, among many other things, our morality, the sum of all past value judgments about the actions and attitudes of humanity.)

The discovery of the body that took place during the 19th century scandalized many conservatives, and it offended the moral sensibilities of what then was still the cultural mainstream. In 1857, for example, two of the most important literary works of that century were published in Paris: Charles Baudelaire's collection of poems called The Flowers of Evil, and Gustave Flaubert's novel Madame Bovary. Both books were immediately banned by the French courts because of their alleged "indecency," and outside France most publishers would not even think about publishing such material. Baudelaire's poems were considered offensive because they so frequently dwelled on the pleasures of the flesh, and Flaubert outraged his critics by describing in some detail the pleasant feelings of a woman's orgasm. Much of the official public was simply not ready to acknowledge the reality and importance of the physical aspects of human existence openly; the definition of the human as mind or spirit still prevented people from acknowledging such things as the pervasive power of sexuality or the determining force of physical conditions in human history. Yet, for a significant minority the discovery of the richness of the physical universe, and of the human body in particular, was both revelation and liberation. Walt Whitman's "I Sing the Body Electric" (published in 1855 in the first edition of Leaves of Grass) testifies to this new enthusiasm about the physical nature of human beings. Like Nietzsche, Whitman postulates the basic identity of body and soul: "I sing the body electric, / The armies of those I love engirth me and I engirth them. ... And if those who defile the living are as bad as they who defile the dead? And if the body does not do fully as much as the soul? And if the body were not the soul, what is the soul?"

To conceive of the body, and not the rational mind, as the true self is part of a change in perspective that has far-reaching implications. One implication for Nietzsche was a deep appreciation of the many non-rational faculties that emanate from or are connected with the drives and passions of the body, and the darker and more unconscious regions of the soul. In his first major work, The Birth of Tragedy (1871), Nietzsche developed a theory of art that highlights the importance of visionary dreams and inspiring intoxication, while debunking the role of reason and rational calculation in the creative process - a theory centering on his discussion of Apollinian dream visions and Dionysian intoxication. In his later works Nietzsche continues to emphasize the power and fruitfulness of all the faculties connected with the physical nature of human beings, and he continues to expose the allegedly delusional character of self-conceptions that are based on the idea of a disembodied mind.

By insisting that the mental or spiritual can ultimately not be separated from physical matter, Nietzsche rejected the metaphysical thinking that had dominated most of the traditional philosophies until then. The best-known division of reality into a physical and a nonphysical realm is, of course, Plato's separation of the imperfect and changing world of the senses from the timeless and perfect world of ideas (or "forms"). With this separation Plato provided the basic model of a twofold reality that subsequently spawned several variations in Western thought. The most popular of these variations is the metaphysical system of Christian theology, which Nietzsche dubbed "Platonism for the people," with its sharp division of reality into the temporal world here and now and an eternal hereafter. Still later variations of the same basic model were the philosophical systems of Descartes, Kant, and many of the Idealist thinkers. What most of these dualistic conceptions of reality have in common is the additional notion that the physical world is inherently inferior to the spiritual world, and that enlightened individuals will therefore not attach their allegiance to this less valuable part of reality - to the deficient and corrupting world of the body and the senses. Ever since Socrates and Plato, according to Nietzsche, the West has been on the road of degeneracy because of this misguided devaluation of matter and the corresponding over-valuation of a seemingly supernatural spirit or mind. For Nietzsche this wrongheaded valuation of things amounts to nothing less than a wholesale betrayal of the earth - with all the consequences that such a betrayal of the natural cosmos implies.

One reason that people devalue the physical world, according to Nietzsche, is their fear of life - of life's innumerable uncertainties, sufferings, and its inescapable finality. It is because of these deep-seated anxieties that people seek refuge in an ideal and imaginary world where they seem to find everlasting peace and relief from all the ailments that besiege them on earth. People do this naively, by imagining "another world" in which they somehow continue to exist in the way they do in this world, only more perfectly, or they do it in more sophisticated ways, the ways philosophers like Plato or other teachers of a spiritual life recommend. Nevertheless, in whatever way people try to escape the imperfections and ailments of the physical world, their retreat is always a manifestation of weakness, an inability to face reality in the way strong individuals would. Stronger persons would not only take suffering and other adversities in stride, they would in a sense even welcome them as inevitable aspects of the very nature of life. As there is no life without death (eventual death being part of the very definition of what it is to be alive), there is also no experience of health without sickness, no enjoyment of wealth without poverty, and no appreciation of happiness without a real knowledge of pain. "Live dangerously" is one of Nietzsche's well-known pieces of advice. It is his reminder that the most exuberant and ecstatic experiences of life do not grow out of a well-protected existence where risks and extremes are anxiously kept at bay, but out of a courageous exposure to the forces and conditions of life that bring out the best of a person's powers. A good horseback rider will not beat a spirited horse into submission to have an easy ride, but rather learn how to handle a difficult mount.
Similarly, a strong and healthy person will not shun the dark and often dangerous sides of the world by retreating to some metaphysical realm of comfortable peace, but rather embrace life in its totality, its hardships and terrors as well as its splendours and joys.

It is, incidentally, in this light that one has to read Nietzsche's notorious reflections on "master" and "slave" moralities in his Beyond Good and Evil. According to Nietzsche, human beings will naturally tend to cultivate either of two moralities. "Master moralities" are developed and embraced by naturally strong and self-confident people. They value most highly such things as strength, intelligence, courage, strife, and an inclination to rule over things and other people. Pride, for such people, is not a sin. They generally despise traits like meekness, timidity, simple-mindedness, and fear. In their eyes humble people are "bad."

"Slave moralities" are developed by just such weak or timid people. They tend to flourish among downtrodden populations. "Slave moralities" value most highly such things as sympathy, pity, kindness, humility, patience, self-effacement, and charity. The worst features in their estimate are aggressiveness and being dangerous to others. People who embody such aggressiveness are shunned or denounced as "evil" (as opposed to "bad").

Nietzsche's prime example of a "master morality" is the ethos of pre-Socratic Greece - embodied in the attitudes and deeds of the tribal heroes that Homer described in the Iliad and the Odyssey. His prime example of a "slave morality" is the ethical teaching of Christianity. Although Nietzsche claims that, in analysing these two kinds of morality, he does nothing more than impartially describe certain psychological and anthropological facts, he clearly considers only variations of the "master morality" as suitable designs for a hopeful future. Only individuals who feel at ease among strong and daring people would be ready to face the darkness and dangers of the real world with confidence and an enterprising spirit. Only they could live without comforting metaphysical myths and imaginary hopes. They would live their lives intensively here and now, cheerfully or otherwise, and be content with being gone once their chosen tasks are accomplished.

Although Nietzsche thought of all metaphysical systems as so many forms of illusion, he was not blind to the great importance that these systems have had for the shaping of Western civilization. In a sense he saw them as necessary illusions, illusions that indirectly taught people self-discipline and propelled them forward to heroic undertakings and significant accomplishments. Nietzsche was keenly aware of how much in Western civilization depended on the beliefs and attitudes that Christianity had imposed on people in the course of many centuries, and in his own way he took the modern decline of Christianity as a cultural organizing force much more seriously than most ordinary Christians.

Nietzsche discusses the cultural significance of Christianity in connection with his often quoted remark "God is dead." By coining this phrase Nietzsche did not, of course, make any statement about the existence or nonexistence of God. What he offered, rather, is an observation concerning the idea of the deity, and the idea's crucial role as a foundation of the general culture. In a nutshell Nietzsche's reasoning was this: In a universe conceived in strictly scientific terms God has no intelligible place anymore, no meaningful role in the explanation of the workings of the world. In a culture that depends as much on sober scientific research and thinking as ours, talk about God has become peculiarly vacuous and oddly inappropriate.

Ancient Greeks thought of the awesome power of thunderstorms in terms of Zeus and his greatly feared thunderbolts. People familiar with the theory and various manifestations of electricity, by contrast, will hardly have any but a poetic use for the Olympian god and his bolts; as an explanation of natural phenomena Zeus has been rendered irrelevant by the discoveries of science. The same, in the context of modern technological civilization, has happened to the deities of all traditional cultures. People who think in scientific terms do not refer to divine powers when exploring or discussing earthquakes, volcanoes, droughts, or the atomic bomb. Some scientists may continue to talk about God, but there is no real opportunity anymore to demonstrate any provable effects of a divine existence or power. Where people used to assume heaven, they now measure intergalactic space; where once they experienced the wrath of God, they now pinpoint viruses that spread in populations without immunity. Mention of God in laboratory reports or professional conferences would dumbfound the scientific community.

The very concept of God becomes difficult to grasp when people are used to the discipline of logic, and when the furnishing of evidence in support of important contentions has become standard practice in everyday life. What kind of being could God possibly be? How could one recognize God if one encountered him (or her) or heard "his" voice? Can we have any trust at all in our hopelessly anthropomorphic notions of God? How exactly would a God whose existence cannot be noticed differ from a God that does not exist? Is there anything left of our belief in God except dubious talk and vague desires?

Because of such difficulties and uncertainties, God has become less and less of a palpable factor in modern life; the scientific-technological world has grown used to functioning without any theological basis. Today science alone provides the decisive standards of what is true and what works. Whenever there is a conflict between science and religious doctrine, science will not accommodate religion anymore; rather, religion will adjust itself to scientific conclusions. It is this cultural situation that prompted Nietzsche to talk about the "death" of God.

Nietzsche did not present the statement "God is dead" as his own, but rather as that of a "madman" whom he describes in a sort of parable in The Joyful Science. This madman, talking to an unsympathetic crowd in the marketplace, raises some noteworthy questions concerning God's death: Where has God gone? I will tell you. We have killed him - you and I. We are all his murderers! But how did we manage to do so? How were we able to drink up the ocean? Who gave us the sponge with which we wiped away the horizon? What did we do when we loosened the earth from its sun? Where is it headed now? Where are we headed? Away from all suns? Aren't we in a free fall? Plunging backward, sideways, forward - in all directions? Is there still an above and below? Are we not straying as through an infinite nothing? Isn't empty space breathing on us? Hasn't it become colder? Isn't night coming on all the time, and more of the night? - God is dead! God remains dead! We have killed him! How shall we console ourselves - the most murderous of all murderers? The holiest and mightiest that the world has ever had has bled to death under our knives. Is not the magnitude of this deed too great for us? Shall we not have to become gods ourselves to seem worthy of it?

The madman in Nietzsche's story is not mad because he talks nonsense, for his speech, when looked at closely, makes a good deal of sense. The speaker only appears crazy because he is excited about something the crowd has not yet become aware of - because he is too far ahead of his time. The fact that "God is dead" is in itself no news to the crowd; many of them have been faithless for some time. What is news to them is that it is they who have killed God, that it was their own doing (by developing a modern civilization of scientific thought and sophisticated technology) that has led to the demise of the Supreme Being in their world. What the crowd also fails to realize is the enormity of the consequences that are bound to follow from their deed. For so far most people have continued living as if nothing had happened, as if the world in which God's authority had once been supreme were still intact. But that stable, well-ordered, and comfortable world, as the madman insists, does not exist anymore. Unnoticed by the crowd, the world as a whole has become a dark, cold, and frighteningly confusing place.

The mention of the "wiping away of the horizon" is a reminder that the comfortable narrowness of traditional views of the world has irremediably vanished: Everything has opened up to infinities that render the familiar world utterly strange. In a narrow world persons can find their bearings; in an infinite universe people will feel at a loss. A comforting conception of the universe where everybody and everything has its proper function and place - a universe designed and ruled over by God - is not tenable anymore in the light of advanced modern knowledge. Science has increasingly depicted the universe as a puzzling riddle, not as a place that we know and where we can feel comfortably at home.

The madman's talk of "the earth loosened from its sun" indicates humanity's loss of a centre - of a God and divine order that could give orientation and meaning to human lives and endeavours. That the earth is in "free fall" implies that humanity has lost all control over its destiny, and that no new "suns" are in sight. There is no "above" and "below" anymore: Everything has become equally important or unimportant, equally valuable or valueless. Solid orientation has become impossible where there are no absolutes and no firm guide posts. Anyone who cares to think honestly about the modern condition is bound to encounter a measure of this nothingness - a pervasive senselessness and emptiness.

"God remains dead," the madman contends. The frightening vision of the modern world may prompt many to go back to the past, to escape the modern "wasteland" by seeking refuge in old cosmologies and faiths. Still, there is no plausible going back. Once the rational and critical thinking which is the basis of science, technology, and our actual survival has taken hold of a culture, people cannot simply become childlike believers again. Once scientific skepticism, reliance on solid evidence, and precise analytic thinking have become an integral and necessary part of a society's life and survival, returning to any naive faith without incurring the reproach of intellectual dishonesty or lack of integrity is impossible. Once God has been "murdered" there is nothing left but to acknowledge the great darkness and to move forward under radically new conditions.

One particularly prominent aspect of the general loss of orientation and meaning invoked by the madman is the felt absence of absolute standards and values. If there is no list of moral principles or rules like the Ten Commandments, and if there is no divine authority to back them up, all people are left with is a number of competing moralities - and no impartial criteria by which they could tell which of these competing systems might be valid or best. People find themselves in a situation of complete moral relativism, a relativism that may easily and logically lead to a denial of morality altogether, to total moral nihilism. "If God did not exist, everything would be permitted," we can read in Dostoyevsky's The Brothers Karamazov, and that is how Nietzsche's madman sees the matter as well. "Is there still an above and a below?" he asks, and the answer is, of course, that without a God and a divine order of the world there is not. To help others in need and to share one's wealth may be a high priority for some, but for others such a principle may be of little importance - or even reprehensible. Without the absolute authority of God there is no telling who is right and who is wrong. Killing for political ends, abortion, eating meat, adultery, censorship, capital punishment, pre-emptive war - dozens of principles and practices are accepted or rejected on no other basis than local traditions, entrenched authorities, unexamined habits, or just "how people feel" at any one time. Without God there is only a multitude of cultural prejudices and personal biases - void of any authoritative validation.

Since science was instrumental in the "murdering" of God, some theoreticians were inclined to think that science can also help to create a new value system, a system that would have both the authority and assumed impartiality of the God of the past. Nothing came of this idea, however. On the contrary, the reigning consensus among scientists and most philosophers of science has been that a thoroughly scientific view of the world is inherently amoral. For the sciences make it their business to recognize only facts, and facts in themselves, according to that consensus, are neither good nor bad. All facts or states of affairs are equally valuable or valueless, and science, for this reason, has to remain value-neutral. From a strictly scientific point of view one could not say whether it is better to help other people, to leave them alone, or even to recklessly exploit or "liquidate" them. The scientific investigation of any conceivable course of action would produce just so many more facts, but absolutely no value conclusions. Scientists can only say what is, not what ought to be; science implies a "fact-value gap" as part of its methodology; facts by themselves can offer no moral guidance. Science, in other words, not only failed to establish a new value system, but vigorously reinforced the moral disorientation of modernity by emphasizing its principled incompetence with regard to matters of ethics.

The proclaimed value-neutrality of the sciences is an integral part of the grim scenario painted by the madman. One may remember the scruples that some of the Manhattan Project physicists had when they wondered whether they should unleash the ominous powers that went into the atomic bomb. Once genies like nuclear fission or fusion are out of the bottle, and without a solid moral framework in place within which such powers could be managed, it is no mad exaggeration to speak of the earth or humanity as being in some sort of free fall.

The people in the marketplace do not see any of this. They all have their personal concerns and short-term goals, and they routinely go about their mundane businesses, including the business of making everyday moral decisions. It is only the "madman" who sees the ultimate implications of the death of God, and who is alarmed by the great moral and existential void in which they all live. "Europe has yet to face the reality of Nihilism," Nietzsche once remarked. The entirety of Western civilization still functions within a mind-set that thousands of years of theistic training and practice have created. At the time of his writing Nietzsche thought that it might yet take some two hundred years until the truth of their situation would dawn on the majority of people. Accordingly the madman concludes his lament with the words: I come too early. My time is not yet. This enormous event [the death of God] is still on its way; it is travelling. It has not yet reached the ears of the crowd. Lightning and thunder need time, the light of the stars needs time, deeds need time - even after they are done - to be seen and heard. This deed is as yet farther from them than the farthest stars - and yet they have done it!

It was not until the 20th century that philosophers began to reflect systematically on the situation outlined by Nietzsche's madman. Jean-Paul Sartre and other Existentialists understood themselves to be thinkers who have finally fully realized the implications of the death of God (which is one reason that they considered Nietzsche one of their most important forerunners). Sartre, in his essay "Existentialism is a Humanism" of 1946, quotes with approval Dostoyevsky's contention that everything would be permitted if God did not exist. Sartre derides the traditional secular humanists for thinking that the absence of God is not much of a problem for ethics. "Nothing will be changed if God does not exist," he describes these humanists as saying. "We will rediscover the same norms of honesty, progress and humanity, and we will have disposed of God as an out-of-date hypothesis that will die away quietly of itself." Existentialist humanists see things quite differently:

The existentialist, on the contrary, finds it extremely embarrassing that God does not exist, for there disappears with him all possibility of finding values in an intelligible heaven. There can no longer be any good a priori, since there is no infinite and perfect consciousness to think it. It is nowhere written that "the good" exists, that one must be honest or must not lie, since we are now on a plane where there are only men.

Existentialists, in other words, take very seriously what Nietzsche's madman says, and their description of the human condition as one without any preordained moral system or orientation, without, indeed, any authoritative way of making sense of the world and human life, is exactly the scenario that Nietzsche invokes in The Joyful Science. Existentialists explicitly define human existence as an undetermined being in a meaningless universe, and as an anguished freedom that has to create all values and purposes out of itself. As Existentialists had witnessed such events as two ferocious world wars, the Holocaust, the atomic incineration of whole cities, and the continuing death by malnutrition of millions of children every year (together with the worldwide production of an entertainment industry that can plausibly be described as organized idiocy on a massive scale), the absence of any authoritative ethics or established moral framework had become a particularly urgent problem for them. It was in the Existentialists' famous expressions of absurdity, loss, abandonment, and despair that Nietzsche's dark vision of things found its final manifestation.

The cognitive situation that Nietzsche lays down is best conveyed by the passage itself, in which the madman announces to an uncomprehending audience a deed that they have committed without yet realizing it: Have you not heard of that madman who lit a lantern in the bright morning hours, ran to the marketplace, and cried incessantly: "I seek God! I seek God!"? As many of those who did not believe in God were standing around just then, he provoked much laughter. Has he got lost? asked one. Did he lose his way like a child? asked another. Or is he hiding? Is he afraid of us? Has he gone on a voyage? Emigrated? Thus they yelled and laughed. The madman jumped into their midst and pierced them with his eyes. "Where is God?" he cried. "I will tell you. We have killed him-you and I! All of us are his murderers! Yet how did we do this? How could we drink up the sea? Who gave us the sponge to wipe away the entire horizon? What were we doing when we unchained this earth from its sun? Where is it moving now? Where are we moving? Away from all suns? Are we not plunging continually? Backward, sideward, forward, in all directions? Is there still any up or down? Are we not straying as through an infinite nothing? Do we not feel the breath of empty space? Has it not become colder? Is not night continually closing in on us? Do we not need to light lanterns in the morning? Do we hear nothing as yet of the noise of the gravediggers who are burying God? Do we smell nothing as yet of the divine decomposition? Gods, too, decompose! God is dead! God remains dead! And we have killed him! How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives-who will wipe this blood off us? What water is there for us to clean ourselves? What festivals of atonement, what sacred games shall we have to invent?
Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it? There has never been a greater deed-and whoever is born after us, for the sake of this deed he will belong to a higher history than all history hitherto"- Here the madman fell silent and looked again at his listeners: they, too, were silent and stared at him in astonishment. At last he threw his lantern to the ground, and it broke into pieces and went out. "I have come too early," he said then; "my time is not yet. This tremendous event is still on its way, still wandering-it has not yet reached the ears of men. Lightning and thunder require time; the light of the stars requires time; deeds, though done, still require time to be seen and heard. This deed is still more distant from them than the most distant stars-and yet they have done it themselves"-It has been related further that on the same day the madman forced his way into several churches and there struck up his requiem aeternam deo. Led out and called to account, he is said always to have replied nothing but: "What after all are these churches now if they are not the tombs and sepulchers of God?"

Friedrich Nietzsche's vehement attacks upon Christianity, encapsulated in his famous dictum that "God is dead," pose a problem for the reader who agrees with Nietzsche and yet does not wish to give up a certain basic Christian belief. However, careful analyses of both Nietzsche and the synoptic Gospels (Matthew, Mark and Luke) reveal an interesting pattern: the elements that Nietzsche opposes do not appear in the teachings of Jesus himself, but rather in John and the writings of the Church fathers. In the synoptic Gospels, the earliest extant writings we possess, Jesus and Nietzsche often parallel each other, teaching similar doctrines.

Jesus did not teach the will to death and the ascetic ideal, but rather a strong individualism compatible with Nietzsche's philosophy. If this is the case, God need not die, even if the Church preaches dogma that appears to make that necessary for the free spirit to liberate itself from the yoke of the herd and its guilt. An extensively modified, but still religious, Christianity can complement and reinforce the Nietzschean world-view. Using the Gospels to find the true message is difficult, for they are evolving documents that have been modified by the Church over nearly two millennia. However, enough support can be found, even with the warping of the originals, to support the view that Jesus originally taught something very different from the Christian religion as we know it.

The worst thing about Christian belief, according to Nietzsche, is that it encourages, indeed requires, what he terms afterworldliness: "a poor ignorant weariness that does not want to want anymore." Here the believer despises this life and this world in favour of some promised world, accessible only after death, which is the truly good one. Nietzsche contends that we should live in this world, and that a yearning for another world is symptomatic of an unhealthy hatred of life: "It was the sick and decaying who invented the heavenly realm." Thus, the afterlife is an artificial creation used by the unhealthy to justify their hatred of life. The healthy soul lives and rejoices in this world, no longer willing to "bury one's head in the sand of heavenly things, but [willing instead] to bear it freely, an earthly head, which creates meaning for the earth."

While Jesus does promise an afterlife, he never suggests that his followers should despise this life or be in a hurry to get elsewhere. Indeed, the parable of the talents clearly shows that we are supposed to make the best of this life and the abilities we are given: The servants who increased the money their master had given them were rewarded, while the servant who simply hid his money and waited for his master's return was punished. The lovers of death here clearly contradict the teachings of their supposed master, who teaches that life is a gift of God and is not to be wasted. More support for the dictum that we should not hurry toward death is in the Sermon on the Mount, where Jesus says "Do not be anxious about tomorrow, for tomorrow will be anxious for itself. Let the day's own trouble be sufficient for the day." This comes at the end of a speech on not becoming attached to materiality. Thus, Nietzsche and Jesus are compatible in affirming this life and warning against concentrating on the next.

Nietzsche also criticizes the ascetic ideal for being opposed to life. Asceticism results from afterworldliness: "Once the soul looked contemptuously upon the body, and then this contempt was the highest: She wanted the body meagre, ghastly, and starved. Thus, she hoped to escape it and the earth." Suicide is the goal, and asceticism the only form of suicide allowed by the Church. This is clearly antithetical to the love of life that Nietzsche claims as characteristic of the free spirit; Nietzsche sees this hatred of life, expressed through the ascetic ideal, as so entwined with Christian belief that only the death of God can eliminate its effects and allow man to love life. In other words, conventional Christian belief so thoroughly poisons the believer that only its extirpation can give him a chance to be free.

However, we have already seen that Jesus did not preach afterworldliness; could asceticism be yet another apocryphal addition to his message? Jesus, we are told, went into the wilderness to fast for forty days, which is clearly an ascetic act. However, this does not mean that he subscribed to the ascetic ideal as it would later be defined. First of all, Nietzsche agrees that asceticism is favourable for the philosopher: "We have seen that a certain asceticism, which is to say a strict yet high-spirited continence, is among the necessary conditions of strenuous intellectual activity as well as one of its natural consequences." So Jesus was not seeking death but rather the optimum environment for thought and creativity before embarking upon his ministry, even as Nietzsche has Zarathustra do on more than one occasion. The need for a materially simple lifestyle to be creative also accounts for Jesus's repeated injunctions against worldly wealth; if one wishes to develop spiritually, one's energy must be directed to that end, not to the acquisition of material goods: "It is easier for a camel to go through the eye of a needle than for a rich man to enter the Kingdom of God." There is another reason that Jesus would spend time in the wilderness, related to the time in which he lived. Two thousand years ago (and still today in some cultures), time spent alone in contemplation was considered a prerequisite for holiness and wisdom, a sort of credentialing. People at that time would not have taken Jesus seriously if he had not been out fasting; it is noteworthy that no gospel ever mentions him fasting again.

Not only was Jesus not an ascetic himself, he did not encourage his followers to abuse their bodies. The closest Jesus comes to approving of fasting (the primary ascetic act in his time) is when he tells his followers to show no outward signs if they fast, for that makes a vain display out of what is supposed to be a mystic act. He was criticized for not making his disciples fast, but he answered "Can the wedding guests mourn as long as the bridegroom is with them?" Even his death was not to be a permanent reason for self-abuse, for after having said the above, he stated after Easter that "I am with you always, to the close of the age." Thus, the true Christian has no excuse for fasting or other asceticism on religious grounds. The lack of asceticism in Jesus's teaching makes perfect sense once one accepts that he did not teach afterworldliness.

When confronted with the idea that Jesus did not preach afterworldliness, the conventional believer is likely to ask, "But what of the Kingdom of God?" Indeed, the Gospels are full of references to the Kingdom of God, but these are not necessarily (or even primarily) references to life after death. The Kingdom of God is something that a person can achieve in this life: "For behold, the Kingdom of God is within you." This concept can be better understood as a different mode of existence, of a person who is no longer like he was before, which corresponds to Nietzsche's idea of the overman. Like the overman, the Kingdom of God cannot be reached through the application of reason, intelligence, or wisdom: "Whoever does not receive the Kingdom of God like a child will not enter it." In Thus Spoke Zarathustra, the coming of the overman cannot be known, even by Zarathustra himself, until it happens. Jesus says the same about the Kingdom of God, in that "Watch therefore, for you do not know on which day your Lord is coming." Entering the Kingdom of God, like becoming the overman, is a leap, not a gradual process that can be rationally understood; once again, Nietzsche and Jesus converge and coincide. Both the Kingdom of God and the overman are described in terms that make it absolutely clear that these states represent a transcending of ordinary humanity, a step beyond what we are capable of imagining today: Nietzsche says of the overman that "He is this lightning, he is [the] frenzy" while Jesus says "The kingdom of heaven is like leaven that a woman took and hid in three measures of flour, till it was all leavened." Although the imagery is different, both are describing a state of transformation, of great change, which is the object of life.

Jesus, like Nietzsche, had very little regard for priests and their rules. The gospels are full of taunts at and criticisms of the Pharisees, the priests of Judaism. Jesus and his disciples constantly violated the laws of the Pharisees where it would be known. Jesus healed on the Sabbath, and when the Pharisees asked him why, he answered "The Sabbath was made for man, not man for the Sabbath." In other words, a law, or morality, is to be followed only as long as obeying it is beneficial; this teaching is antithetical to the rules of any priestly caste. He rejected the priestly notion that external signs are indicative of inner health; after violating the Mosaic dietary laws, Jesus stated that "Not what goes into the mouth defiles a man, but what comes out of the mouth, this defiles a man." Jesus is preaching an independence from the law that constitutes the first step toward the Kingdom of God. This attitude is crucial: The Mosaic law was the foundation of the morality of the society Jesus moved in, and therefore by rejecting it he was rejecting the morality of his society. One of the central tenets of Nietzsche's philosophy is that the overman requires independence from the old morality, as the very title of Beyond Good and Evil confirms. Jesus and Nietzsche continue to walk the same path.

The two teachers also coincide in asserting that their teachings cannot be adopted by more than a few of those who hear them. Zarathustra finds that he must "speak not to the people but to companions," companions who like him have left the herd and are thus ready to hear what he has to say. One of the leitmotifs of Nietzsche's work is the crushing influence of the herd and therefore the necessity to reject it, as painful as this may be, in order to develop. Similarly, although Jesus spoke to the masses, he was under no illusions as to their ability to hear him: in the parable of the wheat and the tares, only a very few of the seeds sown bear any fruit. He only bothered explaining his parables to the apostles, his companions. Jesus also preaches the need to free oneself from the bonds of society, and warns of its hatred for those who do so: "Beware of men, for they will deliver you up to councils, and flog you in their synagogues. You will be dragged before governors and kings for my sake." Nietzsche also warns of the wrath of the herd: Since I do not join their dances, tied to their old rope, I am followed by their glances, sweetly poisoned envy without hope.

Both Nietzsche and Jesus realize that a man must separate himself from the herd in order to live, but that the inevitable corollary of this act is that he will be despised, feared, and envied by those still within the herd.

One of Nietzsche's central tenets is that man is "that which must always overcome itself." One must always strive to overcome oneself, with no thought of a time when overcoming will no longer be necessary; as long as something is, there is always something to be overcome. Interestingly, there is a similar message in the teachings of Jesus, who exhorted his listeners to "Be perfect, even as your heavenly Father is perfect." Attempting to achieve perfection would be a process identical to self-overcoming, when one considers Jesus's contempt for the Mosaic law, his society's expression of morality. The believer who took Jesus's words to heart would have continually to reexamine himself, change himself, improve himself without a firm guide. In other words, he would have continually to overcome himself in the pursuit of perfection. Jesus and Nietzsche teach the same thing, although in different languages.

On the theme of self-judgement, an even greater difference in method obscures a similarity in aim. Nietzsche proclaims the doctrine of the eternal recurrence, where we must believe that we will live our lives again and again, with no changes. Thinking about this forces one to come to grips with what one really thinks about one's life; if one has accepted one's life, then the idea of repeating it is appealing, but if not, then it is terrifying. Jesus achieves the same goal by postulating judgement by an omnipotent being who can see through all one's lies, even the ones one tells oneself. In either case, faced with a postulated eventuality, one must take honest stock of how one has lived. Here the difference in method stems from Nietzsche's rejection of and Jesus's acceptance of the idea of an afterlife; their intentions are identical: to require their listeners to judge themselves far more harshly than they would ordinarily.

One crucial issue remains to be dealt with, that of humility. Humbleness appears again and again in the message of Jesus. "If anyone would be first, he must be last of all and servant of all," Jesus tells his disciples. Here Jesus and Nietzsche appear to be irreconcilably at odds, since the last thing Nietzsche taught was humility. Yet the apparent divide is not as great as it would seem. One must always strive to overcome oneself, and to defy the herd requires a lot of pride; ought not this childish, immature pride be the first thing to be overcome? Only with a harsh self-appraisal can one come to know oneself, and pride would prevent this. Thus pride must be overcome in order to know oneself and thus be wisely proud. The humility Jesus teaches need not be the grovelling self-abasement the Churches have said it is. Could this humility not be the inevitable humility of one who has looked at himself clearly, realistically, warts and all? This humility would lead not to weakness but to greater strength and better overcoming. Jesus did not intend for us to be weak, but to be strong and sure of ourselves; that is why he said to "turn to him [the other cheek] also," for he who is truly strong is in control of himself and will respond, not on impulse, but at the proper time, under perfect control. This interpretation is not merely compatible with Nietzsche's philosophy, but complements and expands it.

In sum, Jesus and Nietzsche do not have to be at war with one another, but can supplement and fulfill each other, if one only has the insight and originality to strip away the accretions that the lovers of death have placed upon the teachings of Jesus. Both preached the overcoming individual, independent of the herd, who strives to evolve in the hope of reaching a transcendent state within this world, even if that state cannot be reached by any means other than a leap. Both have been grossly misinterpreted, for their message is not one the herd is willing to tolerate, and both are in need of clear understanding.

What is that, you ask? You say that I have left out the most important act of Jesus on this earth, the one that has given a religion its primary symbol? What of the Resurrection? Well, if one accepts that life does not end in death, then returning to this world, after the event that separates us from whatever comes after, for the love of one's companions would be the ultimate act of will, of power, of striving; indeed it would be the act . . . of an overman.

In some respects the story of Friedrich Nietzsche's Zarathustra is an epos in the way the stories of Odysseus or Jesus or Don Quijote are. It describes a man with a distinct character, who faces an important task, who in the pursuit of this task has significant encounters with friends and adversaries, who experiences deep crises and changes of heart, and who in the end comes to a resolution that represents a meaningful possibility of human existence.

In contrast to most other epic poems, however, Thus Spoke Zarathustra is less a series of external adventures than a spiritual journey. The ratio of external events to inner developments is heavily weighted in favour of the latter. More than half of the entire text consists of Zarathustra's philosophical lectures and thoughts, although these thoughts are conveyed by archetypal myths and poetic language rather than analyses. The plot of Zarathustra's story is important, however. Zarathustra's philosophical pronouncements cannot be fully appreciated without being seen in the context of specific external events. To understand Thus Spoke Zarathustra one has to follow both the story's line of action and its line of thought.

In the Prologue the reader is told that at the age of thirty, Zarathustra "left his home and the lake of his home and went into the mountains. Here he enjoyed his spirit and his solitude, and for ten years did not tire of it." After this time, however, Zarathustra decides to leave his mountain retreat to share his slowly accumulated wisdom with the rest of humanity. His goal is to proclaim the "overman," a type of human being that is to be as superior to today's human beings as today's humanity is to the higher apes. The state of modern humanity seems to Zarathustra to be such that a new guiding ideal is urgently called for - an invigorating inspiration that would give new energy and meaning to people who, tired and disillusioned, are mired in a cultural wasteland.

Much of the reigning spiritual malaise is due to what Zarathustra refers to as the "Death of God." Not that Zarathustra thought that God had ever existed, but he knew that the idea of God was once a most important inspiration without which most of Western (as well as much of Non-Western) culture would not have been created. As the Modern Age with its secularizing tendencies developed, however, the idea of an all-powerful God progressively lost its plausibility and organizing force, and by the time scientific rationality had become the dominant mode of thought, believing in an anthropomorphic deity - something like a law-giving lord of an orderly and meaningful world - seemed hopelessly naïve and anachronistic. The universe as described by modern science became too vast to be comprehended in its entirety at all, and for educated people it became increasingly difficult to find any valid basis for a genuine moral order, or for more than an arbitrary meaning of life. Nihilism had become a haunting problem for modern humanity, and it is this problem that Zarathustra's philosophy is meant to solve. The "overman" is Zarathustra's answer to the modern wasteland.

Once among the people, Zarathustra loses no time in advocating his vision: Humanity as a whole is to overcome its present mediocrity and bankrupt civilization in order to create the overman: "Man is something that is to be overcome. What have you done to overcome him?" The reception that Zarathustra's philosophy receives, however, is none too encouraging. First the crowd mistakes the new prophet for part of a circus act. Once the people understand what Zarathustra is up to, they let him know in no uncertain terms that they have absolutely no use for something like the overman, and that what they are really interested in is a nice and comfortable life. "You can have the overman," they laugh. If life has no higher meaning, that is not something over which they will lose any sleep. Happiness in the form of pleasure is their highest goal: "The greatest happiness for the greatest number of people," as the Utilitarians put it. (There is no philosophy to which Zarathustra's thought is more directly opposed than Utilitarianism. Nietzsche rarely talks about the "flathead" J. S. Mill, the principal theoretician of Utilitarianism, with anything but derision.)

From now on Zarathustra has nothing but contempt for the masses, although he is repeatedly tempted to pity and help them. His contempt extends not only to those social classes that have traditionally been excluded from the privilege of higher education, but also to all people who limit their lives and aspirations to the pursuit of trivia and convenience. That includes the majority of artists and writers, of students and professors, of journalists and politicians-the majority, that is, of what is sometimes called the "cultural elite." They all fall far short of seriously developing their personal or their human potential. Instead Zarathustra starts looking for a few outstanding individuals, persons who are genuinely hungry for something more in life than the fulfilment of mediocre and philistine desires. Zarathustra searches for the seekers, and he has no trouble finding and attracting such individuals. At this point his career as a teacher begins in earnest.

Part one of Thus Spoke Zarathustra consists almost entirely of the twenty-two speeches that Zarathustra delivers to his disciples and followers. The speeches elaborate the philosophy of the overman. Their main line of thought can be summarized in the following six points:

(1) Zarathustra's most basic contention is the sweeping rejection of all metaphysics-of the idea that there is a "real" world "behind" the physical world, a transcendent world beyond the world of the senses. For Zarathustra there is only one world, and that world is essentially physical. Zarathustra is a materialist monist; in other words, he rejects dualism in its philosophical as well as in its religious forms. Plato, Descartes, and Kant are as unacceptable to him as Christianity or any other metaphysical religion. "Be faithful to the earth!" he admonishes his followers time and again.

In several speeches Zarathustra spells out implications of this basic contention. Priests of metaphysical religions, for example, he calls "Preachers of Death," because in their teachings they imply that there is something better than the earth and its life forms. They kill true reverence for life. They do so because they are afraid of life, or because they have failed to come to terms with it.

(2) Corresponding to Zarathustra's materialist monism is his rejection of the traditional dualism of body and mind: People do not have bodies, but they are bodies. Human beings are not composites of a physical and a nonphysical substance, but whole organisms, although these organisms are often very intelligent, and capable of deep feelings. Human behaviour is much more intelligible if it is understood as the behaviour of bodies, and not as behaviour that originates in pure minds. People are generally much more physical than most individuals - under the influence of metaphysical teachings - are inclined to admit to themselves or to others.

In speeches on a variety of topics Zarathustra encourages his followers to acknowledge their physical nature, and to live out of its power and resources. Books that are "written with blood," for example, are better than the seemingly detached and purely cerebral works of most academics, and works of art that draw on the pre-rational powers of the unconscious mind are deeper and far more powerful than those that are created by the rational mind. The instinctual passions that grow out of our physical constitution are truer to life than most of the constructions of the intellect. (It is worth remembering here what Nietzsche writes about the origin of art in his The Birth of Tragedy: Greek tragedy was powerful as long as it grew out of Dionysian intoxication and Apollinian dream visions. It deteriorated, at the time of Socrates's teachings, when playwrights became calculating craftspeople instead of inspired visionaries.)

(3) Zarathustra advocates a self-asserting individualism that by most standards would be considered reckless and immoral. Zarathustra has no interest in virtues that promote social peace, or in a culture in which people place a high value on not upsetting or offending each other. Peace of mind is suspicious because it may come about at the price of muffling the real forces of life. Individuals whose thoughts and deeds are to reach great heights have to go into real depths: "With a person it is as with a tree. The more he aspires to the height and light, the more strongly do his roots strive earthward, downward, into the dark, into the deep-into evil." Outstanding spirits need to disregard the moral rules and sensibilities of the "herd." "Beware of the good and the just! They like to crucify those who invent their own virtue for themselves-they hate the lonely one." The more uncompromisingly people dare to follow their own individual inspiration, the more significant will be the results. A true view and appreciation of life are not "clouded" by moral categories at all: Life in its purest and highest manifestations exists "beyond good and evil."

(4) The price for this sort of individualism is a pervasive antagonism of forces and people, perhaps even "a war of all against all" (to use Hobbes's phrase). Nevertheless, that is nothing bad in Zarathustra's eyes. Every living being is motivated by a "will to power," by a will to assert itself, and struggle is an inevitable expression of being alive. "War is the father of all things," Heraclitus once wrote, and in agreement with this Zarathustra thought that nothing worthwhile would ever come about without strife. "Live dangerously!" is the advice that he gives to his friends. Even in love relationships risks must be taken. Getting hurt in a love relationship is nothing to be afraid of or bitter about, but rather an opportunity to grow and to respond creatively. "War" is not only an acceptable means, but also an important end in itself: "You say it is the good cause that hallows even war? I say: It is the good war that hallows every cause." To live a warrior's life is a supreme way of being.

This must not be misunderstood, however, as an advocacy of the sort of militarism and nationalistic expansionism that began to run rampant toward the end of the 19th century. The "warriors" that Zarathustra praises are not men in uniform, and not part of the mechanized fighting machinery that has become the hallmark of modern warfare. In his speech "On the New Idol" Zarathustra explicitly repudiates such things as patriotism or identification with a particular nation state as a vulgar form of self-alienation: "Only where the state ends, there begins the human being who is not superfluous."

(5) Self-determination is crucial at all levels of Zarathustra's philosophy. Self-determination has been an important ideal in other philosophies as well, to be sure, particularly in the philosophy of the Enlightenment, a movement that is in several ways incompatible with the thought of Zarathustra. What the Enlightenment and Zarathustra have in common is the idea that a moral order cannot be imposed on human beings from the outside-by authorities, social institutions, or traditions, for example. However, in Zarathustra's philosophy self-determination becomes a much more radical concept than it is in the writings of Kant or other Enlightenment thinkers. For in Kant's ethics the goal is still to find moral rules and guidelines that are ‘objectively’ valid, rules that are binding for all rational beings because they are grounded in the very nature of rationality. For Zarathustra there is neither a divine authority that could impose binding values, nor a recognizable cosmic order on which objective values could be based, nor a rationality that is common to all human beings. Thus human beings are not only independently responsible for living up to moral standards, but also for creating such standards in the first place. For Zarathustra nothing is ‘given’, neither a moral order, nor a preestablished meaning of life or of the universe. Any such thing has to be brought about by the creative will of individuals who are capable of such feats, such as Moses or similar lawgivers. Self-determination, in other words, is not just a matter of exercising autonomy in a structured and established world, but almost something like creating a world out of chaos.

A sign of such far-reaching self-determination is free death. A truly autonomous being will not wait until death "sneaks in like a thief," but freely decides when it is time to go-which should be neither too early nor too late. The time of one's death ought to be connected to one's meaningful tasks, to the things that one has chosen to accomplish. When these goals have been reached, and when nothing significant can be done anymore, then a sovereign person will say farewell to his people and life, and not wait until his or her life degenerates into nothingness. The important point is to be active where formerly people have been passive. Fewer things are given than had always been presumed. A future humanity would be in command of itself to a degree that had never been imagined in the past.

(6) Life is a process, not a state. A person is a process, too, not a static entity. To conceive of oneself as an entity, as a substance, is a mistake. To live life as if one were a being, rather than a becoming, is a falsification of one's existence that is connected with the illusion of an everlasting life in a "transcendent" world. Living life is not accomplished by holding on, by accumulating things or knowledge, but by always overcoming oneself, and by transforming or passing on everything that one acquires.

At the end of Part One Zarathustra departs from his followers to return to his mountain cave. His main reason for doing so is the necessity for his disciples to find themselves-to cease being followers. Part of the idea of the overman is, after all, the idea of living radically out of one's own self, and not out of any doctrine or consensus of a community. To be true to his teaching Zarathustra has to stop being a teacher. All he can do as the prophet of the overman is to sow the seed of his idea, and then see what will develop.

In Part Two, years later, Zarathustra has a dream: A child holds a mirror up to him. In this mirror Zarathustra does not see himself, but a derisively laughing devil. Zarathustra is deeply disturbed by that vision. He interprets it as meaning that his teaching is being distorted. He eagerly decides to return to his followers and to speak to them once again-and to his enemies as well. He feels he is full of wisdom that he wants to impart. "Too long I have belonged to solitude; thus I have forgotten to be silent." The reader gets the impression that Zarathustra is just a bit too eager to resume his teaching career. Zarathustra may, in fact, have given a wrong interpretation to his dream, and his eagerness to give more lectures to his followers may cover up something that tried to make itself manifest in the vision of the mirror.

Zarathustra descends to the Blessed Isles, the place where his followers live, and where he is welcome to develop the ramifications of his philosophy further. A major new strand of his thoughts is the concept of the "Will to Power," the concept that dominates all of Part Two. Zarathustra sees the Will to Power as the most basic motive force in all living beings, more fundamental even than the drive for self-preservation or the will to live. It manifests itself in innumerable ways-in the way certain people assert themselves in society, as well as in the power of an ascetic priest over his own appetites or an artist’s mastery over the elements of his or her work. Even science is seen not so much as a disinterested reflection of what is the case, but as a forceful construction of data along the lines of certain preconceived concepts (such as the unified structure of Newtonian physics).

Halfway through Part Two, however, in the "Nightsong," Zarathustra changes his tune, so to speak. Instead of lecturing he begins to sing. In his song he laments being too much a carrier of light, too much a giver of wisdom. Something important is missing in his life. Zarathustra is craving darkness-presumably the instinctual or unconscious side of human existence. He conducts himself too much like Apollo, and too little like Dionysus. Instead of being the teacher of a new civilization he needs to experience the ecstasies and agonies that come with the intoxicated submersion into the primal spheres of life.

In the following "Dancing Song" Zarathustra deepens his doubts. While admiring and encouraging the dance of a group of young women, he asks himself whether he really understands life. Implicitly he calls into question the validity of his strident teaching. In the "Tomb Song" he tentatively acknowledges that the truth of life will not reveal itself to him through philosophizing and teaching, but in such instinctual expressions as singing and dancing.

After this crisis experience Zarathustra resumes his usual lecturing for a while, but in the section on "The Soothsayer" he encounters his doubts once more. The Soothsayer is a persuasive spokesperson for the nihilism that besieges modern humanity. His message is that ultimately everything is futile and vain. He represents a pervasive weariness and a state of disillusionment that Zarathustra cannot escape: What sense is there, indeed, in working so hard to bring about the overman? Is his advocacy really different from all the other cultural efforts that now constitute a dead past?

In a lugubrious dream Zarathustra sees himself as the warden of the remnants of past cultures in "the mountain-castle of death." In this dream a sudden storm tears open the gate of the castle, overturning a black coffin from which escape grimacing "children, angels, owls, fools, and huge butterflies." Terrorized, Zarathustra awakens. He wonders what the dream may mean. A disciple suggests that the storm symbolizes the work of Zarathustra-the destruction of a dead culture, and the release of new energies. Zarathustra is doubtful, however. He is not sure whether he may not rather be part of "the castle of death." Even as the teacher of the overman he may be more part of the old civilization than part of the liberating forces of the future.

Continuing his journey with his followers, Zarathustra has occasion to converse with a rather observant hunchback. This hunchback tells Zarathustra to his face that "Zarathustra talks differently to his disciples than he talks to himself." This finally brings home to him that something is seriously wrong. There is something that he does not tell his followers, something that he does not even admit to himself, even though he seems to have an inkling of it. The days of Zarathustra as a teacher are clearly numbered.

In "The Stillest Hour," the last section of Part Two, Zarathustra is arguing with a "voiceless voice," a voice that brings him to the realization that "Zarathustra's fruits are ripe, but that Zarathustra is not ripe for his fruits." There is a discrepancy between his teachings and his being, and this discrepancy clearly means that he has to change. In a deeply depressed state he decides to leave his followers once more.

In Part Three, from now on, Zarathustra is by himself. He is a "wanderer" who tries to get ready to meet the most difficult task that he has to face in his life. "Before my highest mountain I stand and before my longest wandering; to that end I must first go down deeper than I ever descended-deeper into pain than I ever descended, down into its blackest flood." Although Zarathustra never describes it that way, he is, in fact, readying himself to die to his old self as the teacher of the overman and to become that new kind of being. "If you now lack all ladders, then you must know how to climb on your own head; how else could you want to climb upward? On your own head and away over your own heart - up until even your stars are under you.”

Zarathustra does not return to the solitude of his mountain cave right away, but rather embarks on a long journey across the sea and through the big cities. While crossing a mountain range to reach the next seaport, he begins to deal with the "Spirit of Gravity" that keeps weighing him down: "my devil and archenemy, half dwarf, half mole, lame, making lame, dripping lead into my ear, leaden thoughts into my brain." What the spirit exemplifies at this point is the thought of the futility of Zarathustra's project, the futility that the Soothsayer had already hinted at earlier: "You philosopher's stone," the Spirit of Gravity whispers mockingly, "you threw yourself very high, but every stone that is thrown must fall."

Zarathustra gets the dwarf off his back by confronting him with the thought that he had been so reluctant to think, but which seemed to have been on his mind for some time - the thought of the eternal recurrence of everything. That thought and its unsettling implications are the predominant concern of Part Three of Thus Spoke Zarathustra. According to this philosophical concept everything in the universe is bound to repeat itself endlessly because time is endless, while the amount of matter that exists in time is finite. The number of possible configurations of the constantly changing elements of matter may be enormous, but eventually they will have to repeat themselves. Everything that exists must have existed before; the future is like the past: On a cosmic scale there can be no progress. Time is not linear, but forever moves in circles. "All that is straight lies," the dwarf agrees. "All truth is crooked. Time itself is a circle."

The thought is profoundly disturbing to Zarathustra, for it means that even a successfully created culture of overmen is not something like a new plateau from which ever new heights of human accomplishment can be reached, but only a phase in a recurring cycle that in time will bring back even the lowest stages of human development. The thought that everything recurs seems to take away any incentive for effort. Why work toward the overman if after that nothing but the old degeneracy looms?

Zarathustra's profound disgust with the prospect of the eternal recurrence of low forms of humanity finds expression in his vision of a young shepherd who is gagging on a black serpent that has crawled into his throat. Attempts to dislodge the serpent are futile. "Bite off its head!" Zarathustra finally yells, and the Shepherd does as he is told. Spitting out the head, the Shepherd is a new man, a man who laughs a tremendous laugh of liberation. From the moment of this vision on Zarathustra has one over-arching desire: to achieve this laughter of liberation, and thus to free himself for good from the Spirit of Gravity.

Zarathustra continues his travel-a journey through the wasteland of modern civilization. In the end he finds the shallow and escapist culture of his contemporaries not even worthy of critique or rebuttal: Neither scholars nor the literati (let alone the journalists) come even close to dealing with the really important questions of life. Passing everything over in silence seems to him to be the most adequate response. He returns to the mountains to resume work on himself. While becoming a hermit again, Zarathustra is careful not to turn his back on life. Instead of subscribing to the traditional virtues of ascetic monks-poverty, chastity, and obedience-he continues to advocate the vigorous living of life with everything that may imply. Zarathustra is still in agreement with what he had said in Part Two: "I do not permit the sight of evil to be spoiled for me by your timidity. I am delighted to see the wonders hatched by the hot sun-tigers, and palms and rattlesnakes. Among men, too, a hot sun hatches a beautiful breed, and there are many wonderful things in those who are evil." Zarathustra still aims at the goal of the overman.

Part Three ends with Zarathustra's recovery from his crisis. The way he overcomes the debilitating implication of eternal recurrence is by emphatically living in the present. If time is a circle, it does not really matter in which part of the circle one exists, or in which phase of its development humanity finds itself. "Being begins at every moment . . . the centre is everywhere," Zarathustra's archetypal animals, the snake and the eagle, sing, and Zarathustra agrees. Most people live in the past, i.e., under the constraints of traditions, inherited moralities, etc. Zarathustra used to live in the future, i.e., in expectation of a culture that has never existed before, and which would be part of a never-ending progress. By now, however, the teacher of the overman knows that ultimately past and future are irrelevant, that living one's life is something that has to happen now, and not at any other time. It is now that the struggle takes place, and now that life manifests itself in the intensity of one's efforts. The concept of eternal recurrence is not a paralysing thought anymore, but the joyful vision of a new secular eternity.

An important sign of Zarathustra's recovery is the fact that he has learned to sing and to dance. Singing and dancing, compared with speaking, are ecstatic modes of expression. Speaking tends to be a disembodied mode of communication, while singing and dancing involve not only the intellect, but the body and its passions as well. A person who is capable of singing and dancing is whole, and life is more present in such a person than in a lecturing teacher. It is in the light of this newly found wholeness that one can see why Zarathustra felt at one point that in spite of his upbeat teachings he was part of "the castle of death."

The first part of Thus Spoke Zarathustra is dominated by Zarathustra's vision of the overman, the vision of a bright and heroic future. It can be called Apollinian, as it aims at the building of a civilization out of the chaos of cultural entropy. No civilization is eternal, however. The dark and chaotic underside of every order cannot be ignored, and it will eventually assert itself. The day of Apollo does not exist without the night of Dionysus. The night, in fact, is darker and more powerful than day-consciousness cares to think. Because the dark forces of life are so frightening, people have a tendency to shun life, to look at it as something painful or even evil-something to overcome. It is part of Zarathustra's teaching to affirm life in spite of its frequent darkness and potential terrors. The transformation of the protagonist that dominates the last part of Thus Spoke Zarathustra demonstrates a love of life that encompasses not only its dark sides, but even its ultimate purposelessness. It is a love that is achieved by living life-after a long period of merely thinking and teaching about it. It is a seeing love, a love that feels and knows at the same time.

Scholars are debating whether Part Four should be seen as an integral component of Thus Spoke Zarathustra, or rather as the beginning of a longer continuation that Nietzsche never got around to writing. The first three parts evidently constitute a beginning, middle, and an end, to which the fourth part is in some ways something like an appendix. The first three parts could easily stand by themselves. The fourth part is interesting, however, in that it shows Zarathustra as an old man who is still intent on teaching the overman. Throughout Part Four he never leaves the mountains: He has adopted the strategy of letting interested people find him, and they come. The cultural situation in the lowlands has become so bad that seekers are desperate to find a way out. Zarathustra converses with a number of "higher men" who have begun to look at him as a spiritual authority. Zarathustra gives advice to these figures, and in the process further analyses the general situation of modern humanity, but in the end he concludes that even these leading intellectuals are hopeless: "These are not my proper companions. It is not for them that I wait here in my mountains."
