a bifold model of freewill

Paper that appeared in the Journal of Consciousness Studies, August 1999

ABSTRACT: The folk psychology view of the faculty of freewill is that it is innate, unitary, structureless and, of course, free. A bifold[1] approach to the mind, as taken by Vygotsky, Mead, Luria and others, argues that like all the other higher mental abilities of humans, freewill is in fact largely a socially-constructed and language-enabled habit of thought. There is a neurology for this habit to latch on to - after all, the 'raw' animal brain is built for acting rather than contemplating. But it is the social superstructure - the habit of monitoring and even directing our planning behaviour - which creates much of the traditional mystery. Indeed, ironically, it is actually central to the socially-constructed Western 'script' of freewill that we deny the social origins of this ability to take charge of our own brains.

introduction

Freewill seems a straightforward enough business. I feel the mental effort of making a choice and anyone who tells me my choices are predetermined can quickly be proved wrong - I will simply do the opposite of what's expected. Of course there are a few mysteries. When I crook a finger or raise a hand, it is hard to be sure how I really made these simple actions happen. Alternatively, when I want to get out of a cosy bed on a cold winter's morning, willing the equally simple act of throwing back the covers becomes curiously problematic (James, 1981). So there are some complexities to the story. But the ancient tripartite division of the mind into thought, feeling and will seems indisputable (Ryle, 1949). Nestled somewhere in the humid folds of our brains must be a moving soul-stuff or at least some clever neural machinery of volition.

Such is the folk psychology (Morton, 1980) view of volition and already many of the standard suppositions about the nature of freewill are apparent. It is seen as a unitary faculty - any differences are of degree rather than kind. It is innate - all humans are born with the power, although its shoots may need nurturing to grow healthy. It is dimensionless - the willing of an act is a point-like event, clearly separate from deliberations that may have preceded it. And free means free - don't you dare call it an illusion.

As will be seen, this prickly response to the threat of deterministic-sounding explanations of freewill is in fact a big clue to its true nature. But first, how should science tackle the problem?

Many would favour the normal reductionist route. Focus on the smaller components out of which the complex apparatus must be built. Thus we see research which looks for the brain bump that becomes particularly active when a person shifts a joy-stick in one of four possible directions (Spence et al, 1997) and attempts to track the emergence of a 'spontaneous' impulse into bright awareness (Libet, 1985).

But an alternative is to look to the big picture and work back from a more general model of the human mind and its various 'mental faculties'. The argument made here is that the human mind is bifold[1], with socially-constructed habits of thought being the software that exploits resources to be found in the biological 'hardware' of brains. So, far from being innate and unitary, the faculty of freewill turns out to be as much a social idea as a neurological process.

Of course, going for the big picture is to risk seeming sketchy on the details. Particularly when talking about the convoluted history of the Western concept of psychological faculties (Danziger, 1997) or the vexed question of how language may interact with the brain to produce thought (Carruthers and Boucher, 1998), the account given here will seem overly generalised. But the intent is really to demonstrate how different an old problem might look once placed in a new, more broadly-based, setting.

the general argument for a bifold strategy

It is a commonplace that the human mind is shaped by culture (Jahoda, 1992). But the normal assumption is that the effect is merely 'horticultural' (Vygotsky and Luria, 1994; McCrone, 1993). The mental faculties of humans are believed to be innate, being genetically-formed and so present in seed form at birth. They need only a modicum of watering and perhaps some later judicious pruning to achieve their glorious flower. Therefore the impact of cultural factors on some essential ability like freewill would be superficial.

But a more radical line has been taken by a series of authors (Müller, 1888; Vygotsky, 1978, 1986; Luria, 1976, 1982; Mead, 1934) and has found modern expression within psychology and sociology in such camps of thought as the socio-cultural school of cognitive development (Zivin, 1979; Wertsch, 1991; Diaz and Berk, 1992), the social constructionist movement (Coulter, 1979; Berger and Luckmann, 1979; Gergen and Davis, 1985; Burr, 1995), and most recently in the philosophy of language and thought (Dennett, 1991). Their argument is that we are born with naked animal brains, then through the learning of language, we are able to internalise an apparatus of socially-formed thought habits. These language-enabled skills give us all our distinctively human abilities such as self-awareness, recollective memory, socialised emotions and structured thought. We become loaded with a mental software that can drive our brains to places they would not normally go.

In this view, the 'raw' animal brain has consciousness, but it is a consciousness locked into the present tense (Walker, 1983; McCrone, 1999). Even a casual acquaintance with animals tells us that they do not lack for intelligence or awareness. They can associate, anticipate, recognise, choose, inhibit, show emotion - the full range of mental responses. But these responses are to the events of the moment.

Few people would claim that a cat lazing on the lawn is likely to be reminiscing about the mice it used to catch in its youth or making plans for the evening about how to get its own back on the tom down the road. A cat lives in the moment and something has to happen - the rustle of a mouse or an upswelling pang of hunger - for its mind to be stirred into a response. The philosopher Ludwig Wittgenstein summed up the situation neatly when he asked: 'We say a dog is afraid his master will beat him, but not that he is afraid his master will beat him tomorrow. Why not?' (para. 650, 1976)

Admittedly this argument is much easier to sustain for a creature like a cat than a more active animal like a chimpanzee. But while the complex social lifestyle of chimps demands a more constant level of mental alertness and response (de Waal, 1982), there is still no behavioural evidence that they can rifle freely through their memory banks to recall childhood events or pursue contextually-displaced lines of thought, such as worry about who will care for them in their old age (Walker, 1983). Language, when it evolved in humans, made a difference because it could be used to lead the brain out of the current moment and turn its responding powers onto displaced or even entirely imaginary contexts.

Words obviously have the power to summon knowledge from a suitably-trained brain. Hearing a word like rhinoceros or oil-well instantly begins to evoke images and unlock associations in the mind. And combinations of words can be used to rouse highly specific contexts - compare saying simply rhinoceros to 'a rhinoceros dancing in a pink tutu' or 'that rhinoceros image which sprang to your mind just a moment ago'. And because words are just puffs of air - vocal acts of standard dimensions - it is as easy to use a word to open a door into thoughts about something vague or abstract, like love, life, or the Universe, as something concrete, like rhinos or oil-wells (de Saussure, 1959; Aitchison, 1994).

The other part of the machinery of language is grammar - the ability to take a complex situation and break it down into a sequential tale of cause and effect. The common feature of all human grammars is that they demand people talk in sentences - speech units which are only complete if they contain a subject, a verb and an object (Chomsky, 1965). The order of these components may differ across cultures (Greenberg, 1963) and sometimes a component may be implied rather than uttered (Bloom, 1970). But the aim remains to tell an agent-based tale of who did what to whom - to take the analog complexity of real life and express it in a serial, digital, stream of words.

The importance of grammar is that it acts as an engine to thought, giving it an in-built momentum. We do not simply murmur rhinoceros and leave it at that. We feel obliged to extract a sentence from the situation - such as 'my, that's an angry looking rhino' or 'hasn't he got a dusty skin?'. We pull out new words - like angry and dusty - that can then become the stimulus for further sentence forming (Jackendoff, 1996). Rousing the idea of dustiness might bring with it associated images of animals rolling in dust-bowls or knowledge about skin parasites. As we comment on this unlocked knowledge in turn, our minds will be moved even further away from the events of the current moment along some independent train of thought (McCrone, 1993, 1999).

It should be noted that in this view, there is not the traditional difficulty about which comes first, the word or the thought (MacNamara, 1977), as the argument is that words provoke ideas and ideas provoke more words in a self-fuelling cycle of mental activity. Language only leads in the sense that without its formal structure, it would be impossible to scaffold the brain's natural thinking abilities and direct them away from a response to the current moment to a response to imagined or remembered moments (Clark, 1998; Dennett, 1998).

The questions of when language evolved and how much of its structure is genetically-determined are obviously contentious. Suffice it to say that there is reasonable evidence that only Homo sapiens had the necessary vocal equipment for articulate, grammatically-complex, speech (Lieberman, 1991). And the genetic component probably has more to do with the fact that the human brain is generally good at sequenced motor acts (Kimura, 1979) and that the infant brain finds it almost impossible to avoid extracting grammatical rules from the most rudimentary speech input (Elman et al, 1996; Saffran et al, 1996), rather than because grammar is, in some Chomskian sense, wired in[2].

But what really matters here is the use to which language ability is put in driving minds. And it is crucial to note that the effect of language can be both overt and covert. It is overt when we speak to rouse specific contexts in the minds of others or use self-addressed sub-vocal speech to rouse new contexts in our own. But the words we learn, and the networks of meaning which become attached to them, also texture our brain circuitry. In simple terms, they carve out memory paths that would not otherwise be present in our heads.

One of the great surprises of recent neuro-imaging has been the discovery that the human brain responds to words with highly distributed patterns of activity - words are not decoded within a single module such as Wernicke's area but rouse activity in far-flung corners of the cortex. So for example, when subjects are shown a black and white drawing of an object like a pencil and asked to associate either an action or a colour with it, saying yellow stirs the colour processing areas of the visual pathways while saying write stirs motion perception areas (Martin et al, 1995).

The precise way the brain represents different classes of words is still in some doubt (see Damasio et al, 1996; Pulvermüller, 1999). But the general message is that the structure of words and grammar penetrates deep into the organisation of the brain. Categories and schemas that might have evolved culturally can become part of the brain's apparent biology. And indeed, some have argued that the greatly delayed maturation of the human brain (Lenneberg, 1967) - which makes human infants so dangerously helpless for the first few years of their lives - may be a positive adaptation to allow each individual brain to become imprinted with the language rhythms and thought habits of the culture into which it finds itself born.

Thus while the bifold model claims that mental behaviour has two sources - one genetic, the other cultural - in practice the boundary becomes blurred as the software comes to texture the hardware. Ways of organising knowledge that may have evolved within society can come to form associative networks in our brains and then do hidden work in driving our thoughts in particular directions or making them leap to certain ideas. So even if ideas to do with selfhood and willing might have a social, word-encoded, origin, they too could come physically to mould the brain's neural landscape in ways that conceivably would show up in suitably deft neuro-imaging experiments[3].

unravelling the faculties

What does this then say about the mental faculties of humans? The bifold model claims that behind every apparently unitary and innate faculty there will be a set of raw hardware powers - the abilities every animal brain can bring to bear on the processing of a moment - and then the language-based habits that humans create to extend these powers in desired directions.

Taking memory as an example, the argument would be that the brain has evolved to recognise and associate - to apply memory in finding meaning in the moment. But to recollect - to reconstruct earlier moments of awareness or recall knowledge independently of context - we must prod our memory banks (or neural landscapes to use a more appropriate metaphor) with the organised use of words (Whitten and Leonard, 1981). Quite simply, to remember what we ate for breakfast, we must ask ourselves the question and wait to see if the words jog a set of impressions to life. Through association, the word breakfast should rouse typical breakfast-like experiences like toast or cornflakes. As such possibilities are roused, the brain can then recognise the correct choice. It may even generate a 'snapshot' memory of ourselves raising a slice of marmalade-smeared toast to our mouths. Or to be more accurate, the brain may generate a vivid anticipatory image (Neisser, 1976) of what it would be like to be back in that moment - an image that would be more a best-guess reconstruction than a faithful mental replay of prior events (Bartlett, 1932; Neisser, 1967; Conway, 1990).

So we humans can roam our internal memory landscapes 'at will' because we have words to rouse contexts - to make guesses and suggestions - and brains that then will respond to those contexts with a click of recognition and a flow of associations or anticipatory images. Voluntary remembering feels like an effort because we have to pay attention to the process and keep prodding our brains until they give up the desired information. But we also have involuntary remembering when it is our current environment that triggers the association. We might meet an old schoolfriend one day and a host of long forgotten memories about schoolday escapades will come flooding back.

All the mental faculties can be broken down in this fashion - and in so doing, come to be seen as different ways of looking at the same basic bifold interaction (McCrone, 1993, 1999). Imagination divides into a natural brain ability - the ability to anticipate or stir up a state of quasi-perceptual priming (Rao and Ballard, 1999) - and the use of words to provoke the brain into generating such anticipatory states independent of any current context. Reason divides into the natural brain ability to associate and anticipate, and the use of words to direct such powers in the pursuit of goals not dictated by the immediate environment. Even consciousness itself breaks down into raw awareness - the subjective experience of responding to the demands of the moment - and the learnt ability of humans to introspect - to step back from the flow of each moment and use the power of words over thought and memory to contemplate the fact of having subjective experiences (Mead, 1934).

It has to be admitted that despite the best efforts of psychologists like Vygotsky and Luria, the evidence for such an understanding of the human mind remains more circumstantial than direct. While there may be some 'natural' experiments to consider (Singh and Zingg, 1941; Lane, 1976; Luria and Yudovich, 1956; for review see McCrone, 1993), it is plainly impossible to carry out the required controlled test in which a sample of children are deprived of exposure to language and culture in order to discover the effect on their mentality. However the study of the development of self-regulation in children (Diaz and Berk, 1992), across cultures (Luria, 1976), and particularly in the field of memory (Vygotsky, 1978), coupled with a careful consideration of the mentality of animals (Walker, 1983) and the implausibility that the genome could code for the level of mental structure claimed for humans (Mueller, 1996), does allow a strong case to be made.

As the philosopher Andy Clark (p. 182, 1998) put it: 'Language [is] the ultimate upgrade: so ubiquitous it is almost invisible; so intimate, it is not clear whether it is a kind of tool or an added dimension of the user. But whatever the boundaries, we confront a complex coalition in which the basic biological brain is fantastically empowered by some of its strangest and most recent creations: words in the air, symbols on the printed page.'

the social construction of mental habits

The bifold approach goes beyond saying that the higher mental abilities of humans are merely language-enabled. It says the kinds of concepts and mental habits we are taught are socially constructed - they evolve socially and exist primarily for the furtherance of society. The software is loaded into the brains of each new generation not for the benefit of the individual but because minds primed to think in certain ways have proved to be successful for a particular social group, in a particular habitat, in the classical Darwinian struggle of life.

The Social Constructionist school in psychology and anthropology (Burr, 1995) gives a way of studying this fact by focusing on the vocabulary used by a society - particularly words to do with emotions and personal qualities - and showing how these words encode socially-useful ways of thinking that can then be transmitted efficiently across generations.

This role of vocabulary is easiest to appreciate when the language of another culture or time is being considered. For instance, Japanese culture has many words that have a surprisingly alien ring to Western ears. There is the emotion of amae - the sweet feeling of being helplessly dependent (Morsbach and Tyler, 1986). Amae exists naturally between son and mother, but is also valued between worker and manager or pupil and teacher. For those reared in the emotional economy of Western culture, with its emphasis on individualism and self-reliance, the idea of helpless dependence sounds far from sweet. And the behaviour implied by 'feeling' amae towards a social superior - the putting on of a show of fawning babyish ineptitude so as to bring out a motherly response - sounds positively distasteful.

The Japanese have many such terms for describing fine nuances of emotion which relate to situations that are socially-valued in their culture. There is ijirashi - the feeling that comes from witnessing a worthy person overcoming a difficult obstacle. Or on - the sense of debt that every individual gladly feels towards family and ancestors. By contrast, Western words that describe passive, dependent, social relationships (like to fawn or to presume) tend mostly to carry negative overtones - they spell out ways not to behave. In their stead, Western culture is rich in words that glorify the feelings of being an individual (with an individual will) such as independent, strong-minded, extrovert, leader, dominant.

By the same token, with the Westernisation of Japanese youth has come the adoption of this Western terminology. So, for example, the importation of the Western idea of love as a magnetic attraction between two free spirits has necessitated the importation of Western words as well. Japanese couples now talk about being happee with their partners and living in the cosy glow of a romanchiku moodo (Buruma, 1984).

The effect of such role-describing words on the new minds being born into a particular society is subtle yet strong. As the sociologist Charles Cooley wrote (p. 69, 1912): 'Such words for instance as good, right, truth, love, home, justice, beauty, freedom, are powerful makers of what they stand for. "This way," says the word, "is an interesting thought: come and find it." And so we are led on to rediscover old knowledge.' The social constructionist argument is that a child learns the words and through the words gets a handle on the social role and kinds of behaviour the words imply.

the general value of 'willed' individuals

Already suspicions should have been raised about the nature of freewill - about the reality that lies behind the deceptive simplicity of a single emotionally-charged word. But to make sense of the cultural side of the bifold story, we need to dig a little deeper and ask what is the evolutionary value in creating 'willed' individuals? Why foster an apparent power for choice, and so deviance, when it might seem that blind social obedience would have a greater pay-off?

Paleoanthropologists have often noted that the great apes seemed to be heading down an evolutionary blind alley. The big brains of apes might seem a good thing, but the metabolic cost of feeding such a hungry organ (Kety and Schmidt, 1946), and the years needed to train it, means that species like gorillas and chimpanzees produce very few offspring. A chimp mother normally manages one infant every three to five years or barely half a dozen in a lifetime (Sluckin and Herbert, 1986). Such a low replacement rate puts the apes on a knife-edge of survival. And early hominid species, despite bipedal walking, opposable thumbs and their other physical adaptations, would presumably have suffered from the same reproductive mathematics.

The way out of this bind would have been the emergence of some level of self-control. Comparing ape and human lifestyles, it has been argued (Johanson and Edey, 1982; McCrone, 1990) that the key to early hominid success was the development of a new social style - one based on food-sharing, a division of labour, reasonably monogamous pair bonding, and co-operative child care. If the modern-day hunter-gatherer society stands as a good model (Service, 1962), then the solution to the hominid problem of raising large numbers of helpless and hungry-brained infants was to mix high-risk hunting with a low-risk foraging strategy, then ensure the systematic sharing of the proceeds.

Sharing obviously implies self-control. There is some evidence of group hunting in chimpanzees with a sharing of the spoils (Boesch, 1990). But quite another level of forbearance is required for a human hunter to travel a day alone in search of game or honey then return with a heavy load to base camp to feed the greater group. Or indeed for those who stayed foraging close to home to save some food for a luckless and hungry hunter.

Food sharing is the most immediately convincing example. But the same principle applies to any other activity in which individual forbearance or willingness to tolerate risk is necessary to produce a pay-off in terms of group survival. Despite their considerable intelligence, chimpanzees cannot help but be largely selfish animals, seeking always to satisfy their current wants. And to the extent that they show deliberately co-operative behaviour, such as hunting or grooming, it is as a response to the demands and possibilities of the moment. However humans, once they had the language to pin down abstract concepts such as fairness and duty, could internalise the needs of their societies. They could make the individual mind a supervised place. As Mead (1934) argued, the 'self' that runs the show is really a social idea - the generalised other. We build up a picture of what is thought to be standard behaviour and standard goals within our culture then attempt to filter our actions through that internalised schema.

In this view, somewhat ironically, we are born without the habit of introspection and then become sharply aware of ourselves as individual consciousnesses because society places a duty of care in our heads. Using the structure of self-addressed speech to organise our thinking - to turn our attention on ourselves - we begin to reflect and deliberate, passing a social rule over our every impulse towards action (or inaction). And the more complex the social schemas - the more complex the inner negotiations we must mediate and outer mask we must present (Goffman, 1969) - the more conscious of being an agent with individual responsibilities and possibilities we become.

The reason why this active sense of self would be better than a simple programmed obedience is that a culture could not code for what would be the right course of action in every situation. So by setting broad goals and leaving individuals to negotiate the detailed answers, society would be ensuring the most creative solutions. The kind of freedom a society would be aiming to give its members would be the freedom to work as hard as they could to do general good for the society - to be good team players, in other words.

the Ifaluk view of self-regulation

The idea that self awareness (or the habit of introspecting and thus the forging of a sense of personal identity) is a social construct designed to make us creatively self-policing may be hard to stomach. But the social origins of 'innate' character traits such as forbearance, conscience or honesty are again perhaps easier to appreciate in cultures other than our own.

One culture whose self-regulation has been studied in depth by anthropologists (Lutz, 1986, 1988) is the Ifaluk, a group of 400 Micronesians on a half-mile square, hurricane-lashed, coral atoll in the Western Pacific. In good times, the Ifaluk can grow more than enough taro, breadfruit and coconut to survive. But hurricanes can bring unpredictable years of hardship. To cope, the Ifaluk have a tight-knit social code that promotes the sharing of food, the sharing of labour in the fields and even the sharing of children - the Ifaluk have an unusual policy of adoption where nearly half their children are fostered out to relations.

The general success of this social style can be judged by the fact that murder is unknown on the island - the most aggressive act witnessed by Lutz during her year there was when one islander seized the shoulder of another (an act for which he was immediately fined). And when she awoke one night to find a male intruder in her hut, her terrified screams had the islanders laughing for days afterwards. The man had simply lost his way in the dark while attempting to rendezvous with a lover. That Lutz might have any cause for alarm seemed quite outlandish (Lutz, 1988).

But it was the reason why the Ifaluk felt they were well-behaved that was significant. Lutz reported that when asked to explain emotion words - words that signified mental causation - the islanders always talked in terms of external situations and actions rather than internal feelings.

For example, one islander explaining four different types of unhappiness, said (p. 271, Lutz, 1986): 'If someone goes away on a trip, you feel livemam (longing) and lalomweiu (loneliness/sadness), and if you had nothing to give then [as a going away gift] you feel tang (frustration/grief) and filengaw (incapable/uncomfortable).' As Lutz noted (p. 283, 1986): 'While [Westerners] define emotions primarily as internal feeling states, the Ifaluk see the emotions as evoked in, and inseparable from, social activity.'

The Ifaluk were equally straightforward about the reason for their good behaviour. The same word metagu (anxiety/fear) was used for the fear of physical threats like storms and sharks and for the fear of knowingly committing social transgressions. Rather than appealing to inner qualities, the language of the Ifaluk acknowledged that they acted out of a sense of social pressure - and the thought of being socially-controlled carried no negative connotations within their culture.

A flip-side to the feeling of metagu was the feeling of song - a sense of justified anger when witnessing another doing wrong. If an Ifaluk male were to walk past a seated group of elders without bending respectfully, there would be shocked comments aimed at the individual (Lutz, 1988). Or if a woman's cousin failed to help in the fields as custom demanded, then the woman would denounce her cousin bitterly in front of the rest of the village. In modern Western society, the belief that goodness must come from within means that attempts to exert peer pressure are rarely so direct. To be scolded in public is to be treated like a child who does not know any better - an affront to our innate reason. But in tribal societies, public mocking and shaming are seen as entirely natural, not a fall-back mechanism to be employed when some individual's inner instincts have gone awry.

the Western model of freewill

By now we should be alert to the fact that the Western idea of volition or freewill might be a highly particular one[4]. And if there are philosophical confusions about the concept of freewill, they may be confusions actually written into the script provided by our culture. So what is the history of that script?

The Western folk psychology model of the mind begins with Greek philosophy, particularly the tripartite model of Plato (Dodds, 1963; Simon, 1978) and the similar, though not identical, views of Aristotle (Danziger, 1997). Plato introduced the idea of a particular psychic structure. The lowest part of the mind (epithumetikon) consisted of base appetites such as lust and hunger which inhabited the lower body. The nobler passions (thumoeides) beat in the breast while the highest part of the mind, reason (logistikon), occupied the head.

Although the Greeks divided the mind into these still familiar partitions, their view on the source of right-thinking action was perhaps rather different (Sorabji, 1993). For Aristotle, satisfying desire was basically good unless carried to excess. And the way to counteract any wrongful tendencies was simply clear thinking. Inspired by their success with mathematics, the Greeks believed the universe to be a lawful place and that the exercise of reason put humans in touch with those natural laws. The Greeks had no need for the concept of motivating willpower because it was clear or confused thought that automatically led them to their actions (Danziger, 1997).

Plato's tripartite model was eventually welded into place in Western culture with its adoption by the Catholic Church (Haren, 1985). The Christian faith - in contrast to others such as Judaism and Islam (Harré, 1983) - had always been distinctive in stressing the idea of an inner battle between spirituality and animality. For other faiths, as with the Ifaluk islanders, it was enough to observe social mores. But Christianity saw evil as something personal, something to be rooted out of the individual soul.

The reasons for this view are complex. One of the core beliefs of Christianity was that the soul departed the body at death, leaving behind the body's sensuous appetites and passions (whereas in many other mythologies, bodily form went with the deceased to ensure they could still have a good time on the other side!). So as scholars such as Plotinus, St Augustine and Aquinas reworked the Platonic model into a full-blown theology, the idea of a contest between good and bad - a battle of higher self over base impulse - became the abiding moral issue. The call was for a separation of spirit and flesh in life as well as in death. Salvation depended on an inner victory - achieving true purity of mind - rather than merely making an outward, or social, show of piety. So what for the Greeks had been a rather undemanding quest for a reasonable balance in behaviour (which found its expression in a literature of 'self-care' - Foucault, 1990), became the Catholic Church's tortured inner quest for sanctity of thought and deed, scaffolded by a social framework of confessionals, penances, parables and threats of eternal damnation.

However this very particular model of mind fostered by Christianity may have been more than just a fluke of a belief system. Taking the memetic view of religion as a self-propagating thought virus (Dawkins, 1993), an obvious key to the success of early Christianity was its portability across different cultures. By casting moral issues as a choice - do you continue to follow the pagan ways of your local culture, which are so obviously just fleetingly-held social constructions, or do you tap into the true order through the private exercise of your reason? - Christian theology acted to detach individuals from their existing social contexts and convince them to forge an allegiance to a new abstract moral context[5]. So the rapid spread of Christianity even in Roman times would have had much to do with the efficiency of a conversion mechanism based on the idea of a personal relationship with God and therefore personal responsibility for all acts - with the mirror image demand that individuals resist the coercive forces that might exist within their own now alien local culture. Out of this potent mix emerged a clear theme of individualism and inner contest, even if the Christian Church itself came to embrace many variants of the basic proselytising creed.

The general Greek view of the mind became the general Christian one. Then with the scientific revolution of the Enlightenment and the Romantic reaction it largely inspired, Plato's tripartite model became subtly recast yet again.

The birth of science, with the deterministic physics of Galileo and Newton and the new clockwork contrivances of the day, led Enlightenment philosophers like Hobbes (1951), Locke (1975) and Condillac (1930) to see the mind as little more than a complex machine. Humans, they argued, were essentially civilised animals. It was language and education that turned them into rational, well-behaved members of society. So being good was a matter of sober reason and social pressure rather than a mystical inner force. This realisation led reformers like Hutcheson and Bentham to seek better designed social structures that would produce better behaved people (Hampson, 1968). And the need for an evolutionary balance - a trade-off between individuals and their societies - was explicitly recognised in slogans such as creating the greatest happiness for the greatest number.

A particularly influential philosopher in the development of the modern idea of freewill was Hume. The Greek distinction had been essentially between clear and unclear thinking with no particular need for a concept of motivation - people did harm when their passions ran over and good when they saw the reason inherent in the Universe. But by making reason a matter of ordinary education rather than contact with the divine mind, the Enlightenment turned it into something dry and disputable. So looking for what might drive the mind instead, Hume (1967) popularised the idea of emotion - a mental motion or agitation. For Hume, there were both calm dispositions and violent feelings. But his crucial step was to strip away the social and rational context for human action and credit fundamental motive power to a set of internal, bodily, mechanisms.

Of course Hume was not wrong in suggesting that human biology comes with a basic repertoire of appetites and instinctual needs. But he was arguing that the whole 'package' of a socially constructed emotion such as love, honour, guilt or valour - complex feelings which describe both a socially-defined way of behaving and the kinds of sensations that normally accompany the playing of that role (Harré, 1986) - was something innate. More importantly, Hume helped to popularise the necessary jargon that eventually led to a highly distilled notion of the will as a pure and contextless motive force.

In many ways, Hume was ahead of his time. Generally speaking, the Enlightenment encouraged a more pragmatic view of the mind in which the shaping hand of culture remained apparent. So for example there was Locke's account of self-awareness as a learnt construct - the perceiving of being a perceiver (Locke, 1975). But while the idea of rational behaviour brought about by the civilising influence of society carried the day in politics, economics and education, it did not triumph in popular culture. Instead, the Enlightenment generated its own backlash - the Romantic Movement - which eventually produced an even more charged notion of will.

One of the ironies of the Enlightenment was that it shocked people into realising that the rules of their society were man-made, not God-given, and so could be changed to make life fairer for all. But instead of embracing this fact, almost immediately writers such as Rousseau began to treat society's rules as if they were a strait-jacket on the soul (Mason, 1979). Petty reason was now something to be rejected. The real self lay in giving expression to what was natural - or increasingly, what seemed irrational.

Writers, poets, painters and musicians led the way, but soon philosophers like Nietzsche (1961) were joining in this rebellion against reasoned behaviour. In their writings, the Platonic tripartite model shifted ground yet again. Reason remained but was now demoted to lowly rank. The different types of feeling - base appetite and noble passion - became fused as Humean varieties of emotion. And the new, now dominant, component of the tripartite model became the will - a pure psychic force.

Paradoxically, this final reification of the will had much to do with contemporary scientific discoveries about magnetism, hydraulic pressure and other forms of 'invisible' force (Danziger, 1997). Science seemed to be saying that all systems were made of mechanically organised components which were then driven by some hidden source of energy. Many believed that life itself had a motive force - the élan vital. Likewise, an animating energy seemed necessary to fire up the brain, explaining how its machinery could be moved to act in often unpredictable ways. The connection became all the stronger with late-18th Century discoveries about the electrical properties of nerves. Physiologists like Bain (1977) spoke about nervous energy accumulating as the result of the digestion of food and this stored energy needing to find some discharge in action.

This glorification of pure will power - the essence of the individual mind - continued into the 20th Century. In science proper, the will actually began to turn back into something much tamer. Psychologists preferred to talk in terms of instincts and then later, drives or motives. With the rise of cognitive science, even motivation became a matter of deterministic computation - another dry information process - while the affective aspects - what emotions feel like or what cognitive structures they might underpin (McCrone, 1993) - became a matter almost entirely undiscussed.

But while science headed towards one extreme, popular culture was heading towards the other. Freud served up a highly successful version of the Romantic myth in which the id - the monstrous energy of the unconscious - was forever threatening to break through the defences of the rational but socially-repressed ego. French Existentialist philosophers like Sartre offered up another vision of the human condition with their calls for authenticity and the smashing of the iron band of bourgeois restrictions placed around the human heart.

And plainly, modern culture has become obsessed with images of the individual will in excelsis. From broken-down gun slingers to cartoon-like Rambo figures, the story is about being true to the self within despite the toughest social pressure to conform. Even clothing and manner have become a formal assertion of the possession of freewill. From teddy boy quiffs to modern day body piercings, the look is about social rebellion - if not actual, then at least potential.

Yet the point is not that modern society is sick or doomed. To a surprising extent, given the boldness of the imagery, people still remain embedded in a web of social ties and are well able to exercise self-restraint. Even those famous for their anti-social stance, such as New Yorkers or Hell's Angels, still rub along, living a life of daily small kindnesses and co-operation[6]. Also, it could be argued that the falsely heightened sense of individual will in Western society does have its own evolutionary payback. Many see the socially constructed emphasis on individuality as the driving force behind the great expansion of Western culture over the past thousand years[7].

However it does emphasise how the roots of our impulse-filtering and decision-making behaviour have come to be disguised. It is central to the script of modern Western society that moral agency lies within and that this self is essentially free. Even our legal, political and educational systems depend on the assumption that it is fair to treat individuals as point-like moral agents, fully in charge of what comes out of their own minds. Of course, in practice, allowances are made for sanity, maturity and other extenuating circumstances. But the clear goal is to keep pushing individuals until they can live up to the model of autonomous behaviour set by Western culture.

In thousands of tiny lessons - from the mother who crossly tells her son not to fuss when he falls off his bike and grazes a knee to the manager raising the weekly sales target for his staff - the message is to take charge of yourself and your feelings so society can then judge you hard for the quality of your self-regulation and decision-making.

so what is freewill?

Simply put, freewill is a socially-potent word to which we attempt to live up. The general need for willed - or at least self-policing - individuals has always been there in human history. Through much philosophical agonising that need has taken on condensed form in a single word. Problems then arise when we become entangled in the mythology - when we go looking for the pure motivating essence that is implied but is not actually there.

The greatest difficulty with freewill is that we seem to need a loophole that will allow a deterministic biological system - the human brain - to act in undetermined ways. Attempts to counter the bogy of Newtonian determinism (Ryle, 1949) can be seen in recent appeals to quantum mechanical effects (Zohar, 1990) or even the new mathematics of chaos (Trefil, 1997)[8]. But this is to fall for the myth of freewill as an entirely biological production, and not a bifold exercise that involves a heavy dash of culture.

In its strange way, human choice is indeed determined - it is socially determined because the cognitive and even emotional framework within which we make our choices is culturally evolved and inserted into us via social learning (Harré and Gillett, 1994). And yet this social script demands that we feel autonomous and so have some capacity to rebel. Indeed, because knowing what is the socially approved course of action always implies its opposite - knowing what we should not be doing - the greater the socialisation of the individual mind, the more heightened becomes our sense of making active choices. Children will rush noisily about in public places or fail to say their pleases and thank-yous quite unself-consciously. But for most adults, even tiny social transgressions are something they cannot help but think about. Even an apparent lapse of self-control - like dropping an empty sweet wrapper on the ground - often must follow a moment of inner deliberation.

Finally, in a broader sense, culture does actually free us from the 'locked into the moment' mind of an animal. The social world into which we are born provides us with a highly polished machinery of language and the thought habits that words and grammatical structure enable. We become equipped to take a step back from the pressing flow of the moment and pursue the private traffic of images and plans that make us thinking, self-aware, individuals. And while it may be a fact that we use our mental independence from the moment mostly to apply a social filter to our actions, we are still undetermined - free to consider options - in a way that animals are not.

not forgetting the neurology of freewill

Of course, none of what has been said so far denies that there is a neurology of will which needs to be accounted for. The bifold model of mind only says that culture exploits certain capabilities to be found in the biology of the brain.

Plainly the body has its appetites and these serve as a spur for action. The feeling, or rather noticing (Wall, 1999), of these bodily pangs is a complex business, involving both low level brain areas such as the hypothalamus and high level cortex regions such as the anterior insula, ventromedial prefrontal and anterior cingulate cortex (Damasio, 1994). And as well as relatively coarse-grained emotions like hunger, thirst, lust, anxiety, and pain, the body makes a constant series of fine-grain adjustments to match its arousal state to the events of each passing moment. Automatic changes in blood pressure, noradrenaline release, sweating, and many other homeostatic adaptations precisely calibrate the body and brain to the expected level of action (Sokolov, 1963; Lynn, 1966). So the brain cannot help but generate urges, impulses, and even generalised states of arousal or relaxation. The bifold part of the story is that we then wrap a framework of social thinking around this biology in an attempt to negotiate the expression of any urges.

The raw animal brain also has an obvious capacity for making plans and controlling the execution of those plans. In fact, the most convincing current models of brain processing are those which stress that the brain exists for generating action rather than the passive contemplation of sensory data (James, 1981; Luria, 1973; Baars, 1988). The story is that whatever falls within the eye of attention automatically begins to unlock ideas about potential responses. If we notice a door handle, we cannot help but begin to feel the urge to grasp and turn it. Much recent work has shown how the frontal lobes, in conjunction with sub-cortical areas such as the basal ganglia and cerebellum, are organised to decompose the current focus of attention into a fully expressed response (Passingham, 1993). And an important part of this machinery is the ability to hold a plan in mind - to keep an intention flying in working memory - before deciding the moment for its release (Williams and Goldman-Rakic, 1995).

Neuroscience also has much to say about why many of our actions seem unwilled - that is, not the result of focal planning but released automatically or spontaneously. The rousing of a global intention creates a dominant context (Baars, 1988) in which any habitual actions compatible with that context will simply be released. So for example, if our explicit intention is to enter a room, then this implicitly permits the many component acts needed to get us into that room, such as taking steps down a corridor and reaching for a door knob. The way the basal ganglia learns to slot in such component skills has recently been described in some detail (Graybiel, 1998).

A renewed interest in reafference messages (Wolpert et al, 1995) - the communication of an intention to act to the sensory cortex so that it can correct sensory impressions for self-generated movements - is also helping to explain how we know what we are about to do, as when we suddenly become conscious of a 'spontaneous' urge to flex a finger (Libet, 1985). In the now famous Libet experiment, verbal instructions to the subjects set up the initial dominant context (a clear mental image of the kind of act they should perform, along with the requirement that the act should happen with no pause for sub-vocal deliberation). Thus primed, the brain could simply permit the highly-habitualised act of flexing a finger to take place (and even the ability to generate a convincingly pseudo-random timing of the action presumably depended on prior learning - at least, children do not seem so good at such a trick). The mystery of what constituted the 'becoming conscious' of the actual urge is neatly explained by the broadcasting of the reafference image necessary to stop the subjects from feeling that the finger flexing was being imposed by some outside agency (Spence et al, 1997). It is noteworthy that James (p. 1111, 1981) with his usual clarity talked about consciousness of an intention being the combination of an awareness of a fiat (a permissive context) and then of the anticipatory image of the sensorial consequences.

A further important point is that the idea of a brain centre that does our 'willing', or even a 'willing' circuit, is highly misleading. A good argument can be made that the brain has a general circuit for focusing its output plans - for taking a mass of possible responses to the central aspect of a moment and turning it into a single, explicitly represented intention (Passingham, 1993; McCrone, 1999). And that this 'endogenous' circuit is also sensitive to 'exogenous' interruptions (Posner and Rothbart, 1994). That is, we can plan to reach for a cup of coffee but be interrupted, our hand frozen in mid-air by a 'plan flattening' dopamine flush in the nucleus accumbens (Gray et al, 1991), if a car happens to backfire unexpectedly in the street outside.

But while the brain does have areas that seem to play a key role in the planning and control of action, attempts to draw up circuit diagrams of this output apparatus end up having to include most of the brain anyway (Gray, 1995). And when it is considered that the entire frontal cortex sheet should be considered as a three-legged hierarchy (Passingham, 1993) for decomposing the focus of each moment into a concrete output plan (with separate streams of decomposition to provide an appropriate motor, attentional and verbal response), and that the intention to respond is then broadcast as a warning anticipatory image to all relevant corners of the sensory cortex, it can be seen that indeed the whole brain becomes drawn into service in organising an act of willing. So volition, even in animals, is not the responsibility of some tacked-on brain module but a consequence of the fact that the entire brain is shaped by the need to come up with an optimal behavioural response to each moment. Simply put, volition, not contemplation, is what brains evolved to do.

conclusion

There is no shortage of neuroscience to explain the brain's rich ability to find the appropriate focus of attention within each moment and mobilise its stored knowledge to deal with this focus. But the difference between animals and humans is that we can make our brains react to imaginary contexts - we can do things like think about what it would be like to go into a room and find a handbag left on a chair. And we also habitually bring a socially-expanded sense of context to bear on each moment - our social training will make us imagine what people would say if they saw us rifling through that handbag, even if we just happened to be sneaking a peek out of curiosity.

In the end, it is the individual brain that has to organise the willing of any act. But humans have created an extra, socialised, level of filtering that all brain planning must pass through before the possible becomes translated into the actual.

footnotes

[1] The term bifold (McCrone, 1993) is used to denote the general idea that the human mind is the product of two kinds of evolutionary process - the biological and the cultural. But it is also used to indicate a specific subset of such theories - ones like those of Vygotsky, Luria and Dennett in particular that share certain key features. So, for instance, unlike some recent memetic theories of consciousness (Blackmore, 1999), the internalisation of language rather than some general brain faculty of 'imitation' is seen as the mechanism by which cultural patterns of thought come to have a hold on our minds. The memes encouraged by a culture are also expected to have a reasonably clear social utility and not to be just ones that flourish simply because they are good at flourishing.

Other key bifold themes that are perhaps not so well developed in much of the recent Vygotskian-flavoured literature (Dewart, 1989; Donald, 1991; Deacon, 1997) include the argument that the animal mind is different because it is locked into the present, that the developmental plasticity of the brain means that language and culture can literally reshape the brain's circuits, that the relationship between thought and language is one of a scaffolding device (Clark, 1998) or cognitive crutch (Dennett, 1998) with words being used to probe for meaning, and that thought and inner speech eventually become largely fused within the adult mind so that just the 'inkling' of a speech act can do the thought-driving work of a fully-expressed speech act (McCrone, 1999).

The claim that speech and speech-enabled habits of thought are things that 'grow into the brain' during development also places the bifold view apart from a currently popular vein of theorising within evolutionary psychology which sees language as being central to the special mental abilities of humans, but treat it and even many of the associated abilities (like Theory of Mind) as genetically-evolved brain modules (Carruthers, 1996; Mithen, 1996; Bickerton, 1995).

Equally, the bifold model - stressing as it does an interaction between culturally-evolved thought habits and the natural thinking and reasoning powers of the animal brain - is distinct from theories in which the pendulum swings too far to the cultural side, so for example denying thought to infants or animals (Wittgenstein, 1922; Davidson, 1984), or claiming that learnt grammatical structures have a radical effect on even perception (Whorf, 1956).

[2] It is plain that the Chomskian view of grammar as an innate faculty is still the dominant view within much of science - witness the continuing approbation of Chomskian-minded thinkers like Pinker (1994). And certainly there is no intent to deny that the human brain and vocal tract show at least some genetic adaptations for articulate speech (McCrone, 1990, 1999). But the rise of connectionist models of computation (Elman et al, 1996; Clark and Thornton, 1997) has made it easier to see how brains can 'extract' much of the structure of grammar from apparently impoverished input, while more careful thought about what the Chomskian view requires in terms of genetic coding (Mueller, 1996) has undermined the innateness hypothesis at an even more fundamental level - simply put, the genome does not have room to code for the precise placement of brain circuitry in the way a Chomskian approach would seem to demand.

[3] Luria (1973) was able to mount a subtle argument about the way the regulatory structure of language grows into the human brain - particularly the prefrontal cortex - based largely just on EEG and lesion data, so there is nothing especially modern about the idea. It should also be noted that while Dennett (1991) has recently argued a similar case, he undoubtedly goes too far in claiming that language and culture act to install a serial machine in what is essentially parallel hardware, so creating the illusion of a virtual user dwelling amidst the multiple drafts of possible experience. While it is said here that culture does indeed install a strong sense of self and a habit of self-monitoring in our heads, it is an important feature of the 'raw' brain that it forms an attention-based angle into each passing moment - it views life from a series of perspectives, forming new mental 'frames' at roughly the rate of one every half second - and so is already both a serial and a parallel machine (McCrone, 1999).

[4] There is also an 'Eastern' mythology of the will, of course - one that with Asian culture's emphasis on the social rather than the individual puts the situation almost exactly the other way round. The Eastern goal, as made explicit in religious practices such as Zen meditation, becomes not to express the individual will but to become utterly passive, to release all sense of willed action and allow the self to dissolve back into the universal consciousness.

[5] Much could probably be made of the fact that the embodiment of Christianity in the single figure of Christ, together with the insistence on a single God, itself presents the paradigmatic model of the 'willed individual', above any social setting and so faced with the 'problem' of moral choice.

[6] Even accounts of Hell's Angels that play up their outlaw status (Thompson, 1966) cannot help but also emphasise how tightly controlled they are by the social construction of what it is to be a Hell's Angel. Or indeed, how ordinary is their emotional role play when they are not being called upon to live up to some of the more specialised demands of their well-defined cultural code.

[7] In effect, this would be a continuation of the same trick that worked for Christianity - the detaching of individuals from a local social context so as to control their behaviour through an allegiance to a set of more abstract, and so portable, social principles. In tribal societies, there is little possibility of escaping the watchful eye of a peer group and so less need to internalise a set of moral principles. But the price of this would be that there was also less room for social innovation. The Enlightenment and the rise of a mercantile economy brought about its own detachment from an existing moral framework - one based on theology and feudal rule - so setting individuals free (relatively speaking of course) within a new framework based on a tolerance for social experimentation and economic entrepreneurship. In this sense, the rationalists and the romantics were pulling in the same direction. The more individuals were made to assume the responsibility for writing their own script in life, the more effectively they exported the expansionist memes of Western culture - and, of course, the more acutely aware those individuals became of their status as 'free agents'.

[8] To be fair to Trefil and other chaos theorists, their appeal to chaos as an 'out' for freewill is typically made in only a weak sense, whereas for quantum mysterians the need for an out for freewill is often the central plank in their arguments.

references

Aitchison, J. (1994) Words in the Mind (Oxford: Basil Blackwell).

Baars, B. (1988) A Cognitive Theory of Consciousness (Cambridge: Cambridge University Press).

Bain, A. (1977) The Senses and the Intellect and The Emotions and the Will, edited by Robinson, D. (Washington, DC: University Publications of America).

Bartlett, F. (1932) Remembering: A Study in Experimental and Social Psychology (Cambridge: Cambridge University Press).

Berger, P. and Luckmann, T. (1979) The Social Construction of Reality (London: Penguin).

Bickerton, D. (1995) Language and Human Behaviour (London: University College London Press).

Boesch, C. (1990) 'First hunters of the forest', New Scientist, 19 May, pp. 38-41.

Blackmore, S. (1999) The Meme Machine (Oxford: Oxford University Press).

Bloom, P. (1970) Language Development (Cambridge, Massachusetts: MIT Press).

Burr, V. (1995) An Introduction to Social Constructionism (London: Routledge).

Buruma, I. (1984) A Japanese Mirror (London: Jonathan Cape).

Carruthers, P. (1996) Language, Thought and Consciousness (Cambridge: Cambridge University Press).

Carruthers, P. and Boucher, J. (1998) Language and Thought (Cambridge: Cambridge University Press).

Chomsky, N. (1965) Aspects of the Theory of Syntax (Cambridge, Massachusetts: MIT Press).

Clark, A. (1998) 'Magic words: how language augments human computation', in Language and Thought, edited by Carruthers, P. and Boucher, J. (Cambridge: Cambridge University Press).

Clark, A. and Thornton, C. (1997) 'Trading spaces: computation, representation and the limits of uninformed learning', Behavioral and Brain Sciences, 20, pp. 57-92.

Condillac, E.B.de (1930) Treatise on the Sensations, translated by Carr, G. (Los Angeles, California: University of California Press).

Conway, M. (1990) Autobiographical Memory (Milton Keynes, Buckingham: Open University Press).

Cooley, C.H. (1912) Human Nature and the Social Order (New York: Charles Scribner).

Coulter, J. (1979) The Social Construction of Mind (London: Macmillan).

Damasio, A.R. (1994) Descartes' Error (New York: Putnam).

Damasio, H., Grabowski, T.J., Tranel, D., Hichwa, R.D. and Damasio, A.R. (1996) 'A neural basis for lexical retrieval', Nature, 380, pp. 499-505.

Danziger, K. (1997) Naming the Mind (London: Sage).

Davidson, D. (1984) Inquiries into Truth and Interpretation (Oxford: Clarendon Press).

Dawkins, R. (1993) 'Viruses of the mind', in Dennett and his Critics, edited by Dahlbom, B. (Oxford: Basil Blackwell).

Deacon, T. (1997) The Symbolic Species (London: Allen Lane, Penguin).

Dennett, D. (1991) Consciousness Explained (New York: Little, Brown).

Dennett, D. (1998) 'Reflections on language and mind', in Language and Thought, edited by Carruthers, P. and Boucher, J. (Cambridge: Cambridge University Press).

Dewart, L. (1989) Evolution of Consciousness (Toronto: University of Toronto Press).

Diaz, R.M. and Berk, L.E. (1992) Private Speech (Hillsdale, New Jersey: Lawrence Erlbaum).

Dodds, E.R. (1963) The Greeks and the Irrational (Berkeley, California: University of California Press).

Donald, M. (1991) Origins of the Modern Mind (Cambridge, Massachusetts: Harvard University Press).

Elman, J., Bates, E., Johnson, M., Karmiloff-Smith, A., Parisi, D. and Plunkett, K. (1996) Rethinking Innateness (Cambridge, Massachusetts: MIT Press).

Foucault, M. (1990) The History of Sexuality Vol 3: The Care of the Self (London: Penguin).

Gergen, K.J. and Davis, K.E. (1985) The Social Construction of the Person (New York: Springer-Verlag).

Goffman, E. (1969) The Presentation of Self in Everyday Life (London: Penguin).

Gray, J.A., Feldon, J., Rawlins, J.N.P., Hemsley, D.R. and Smith, A.D. (1991) 'The neuropsychology of schizophrenia', Behavioral and Brain Sciences, 14, pp. 1-84.

Gray, J.A. (1995) 'The contents of consciousness: a neuropsychological conjecture', Behavioral and Brain Sciences, 18, pp. 659-722.

Graybiel, A.M. (1998) 'The basal ganglia and chunking of action repertoires', Neurobiology of Learning and Memory, 70, pp. 119-136.

Greenberg, J. (1963) Universals of Language (Cambridge, Massachusetts: MIT Press).

Hampson, N. (1968) The Enlightenment (Harmondsworth, Middlesex: Pelican).

Haren, M. (1985) Medieval Thought (London: Macmillan).

Harré, R. (1983) Personal Being (Oxford: Basil Blackwell)

Harré, R. (1986) The Social Construction of Emotions (Oxford: Basil Blackwell).

Harré, R. and Gillett, G. (1994) The Discursive Mind (London: Sage).

Hobbes, T. (1951) Leviathan (Oxford: Basil Blackwell).

Hume, D. (1967) A Treatise of Human Nature, edited by Selby-Bigge, L. (Oxford: Clarendon Press).

Jahoda, G. (1992) Crossroads Between Culture and Mind (Hemel Hempstead, Hertfordshire: Harvester Wheatsheaf).

Jackendoff, R. (1996) 'How language helps us think', Pragmatics and Cognition, 4, pp. 1-34.

James, W. (1981) The Principles of Psychology (Cambridge, Massachusetts: Harvard University Press).

Johanson, D. and Edey, M. (1982) Lucy: The Beginnings of Humankind (New York: Warner Books).

Kety, S.S. and Schmidt, C.E. (1946) 'The determination of cerebral blood flow in man by the use of nitrous oxide in low concentrations', American Journal of Physiology, 143, pp. 53-56.

Kimura, D. (1979) 'Neuromotor mechanisms in the evolution of human communication', in Neurobiology of Social Communication in Primates, edited by Steklis, H. and Raleigh, M. (New York: Academic Press).

Lane, H. (1976) The Wild Boy of Aveyron (Cambridge, Massachusetts: Harvard University Press).

Lenneberg, E. (1967) Biological Foundations of Language (New York: Wiley).

Libet, B. (1985) 'Unconscious cerebral initiative and the role of conscious will in voluntary action', Behavioral and Brain Sciences, 8, pp. 529-566.

Lieberman, P. (1991) Uniquely Human (Cambridge, Massachusetts: Harvard University Press).

Locke, J. (1975) An Essay Concerning Human Understanding, edited by Nidditch, P. (Oxford: Clarendon Press).

Luria, A. (1973) The Working Brain (London: Penguin).

Luria, A. (1976) Cognitive Development (Cambridge, Massachusetts: Harvard University Press).

Luria, A. (1982) Language and Cognition, edited by Wertsch, J. (Chichester, Sussex: John Wiley).

Luria, A. and Yudovich, F. (1956) Speech and the Development of Mental Processes in the Child (London: Penguin).

Lutz, C. (1986) 'The domain of emotion words on Ifaluk', in The Social Construction of Emotions, edited by Harré, R. (Oxford: Basil Blackwell).

Lutz, C. (1988) Unnatural Emotions (Chicago: University of Chicago Press).

Lynn, R. (1966) Attention, Arousal and the Orientation Response (Oxford: Pergamon Press).

MacNamara, J. (1977) Language, Learning and Thought (New York: Academic Press).

McCrone, J. (1990) The Ape That Spoke (London: Macmillan).

McCrone, J. (1993) The Myth of Irrationality (London: Macmillan).

McCrone, J. (1999) Going Inside (London: Faber and Faber).

Martin, A., Haxby, J.V., Lalonde, F.M., Wiggs, C.L. and Ungerleider, L.G. (1995) 'Discrete cortical regions associated with knowledge of color and knowledge of action', Science, 270, pp. 102-5.

Mason, J.H. (1979) The Indispensable Rousseau (London: Quartet).

Mead, G.H. (1934) Mind, Self and Society (Chicago: University of Chicago Press).

Mithen, S. (1996) The Prehistory of the Mind (London: Thames and Hudson).

Morsbach, H. and Tyler, W.J. (1986) 'A Japanese emotion: Amae', in The Social Construction of Emotions, edited by Harré, R. (Oxford: Basil Blackwell).

Morton, A. (1980) Frames of Mind (Oxford: Oxford University Press).

Mueller, R-A. (1996) 'Innateness, autonomy, universality? Neurobiological approaches to language', Behavioral and Brain Sciences, 19, pp. 611-675.

Müller, M. (1888) The Science of Thought (Chicago: Open Court).

Neisser, U. (1967) Cognitive Psychology (New York: Appleton-Century-Crofts).

Neisser, U. (1976) Cognition and Reality (New York: W.H. Freeman).

Nietzsche, F. (1961) Thus Spake Zarathustra, translated by Hollingdale, R.J. (London: Penguin).

Passingham, R. (1993) The Frontal Lobes and Voluntary Action (Oxford: Oxford University Press).

Pinker, S. (1994) The Language Instinct (New York: William Morrow).

Posner, M.I. and Rothbart, M.K. (1994) 'Constructing neuronal theories of mind', in Large-Scale Neuronal Theories of the Brain, edited by Koch, C. and Davis, J. (Cambridge, Massachusetts: MIT Press).

Pulvermüller, F. (1999) 'Words in the brain's language', Behavioral and Brain Sciences, 22, pp. 253-336.

Rao, R.P.N., and Ballard, D.H. (1999) 'Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects', Nature Neuroscience, 2, pp. 79-87.

Ryle, G. (1949) The Concept of Mind (London: Hutchinson).

Saffran, J.R., Aslin, R.N. and Newport, E.L. (1996) 'Statistical learning by 8-month-old infants', Science, 274, pp. 1926-8.

de Saussure, F. (1959) Course in General Linguistics (New York: McGraw-Hill).

Service, E.R. (1962) Primitive Social Organisation (New York: Random House).

Simon, B. (1978) The Classical Roots of Modern Psychiatry (Ithaca, New York: Cornell University Press).

Singh, J. and Zingg, R. (1941) Wolf Children and Feral Man (New York: Harper).

Sluckin, W. and Herbert, M. (1986) Parental Behaviour (Oxford: Basil Blackwell).

Sokolov, A.N. (1972) Inner Speech and Thought (New York: Plenum Press).

Sokolov, E. (1963) Perception and the Conditioned Reflex (New York: Macmillan).

Sorabji, R. (1993) Animal Minds and Human Morals (Ithaca, New York: Cornell University Press).

Spence, S.A., Brooks, D.J., Hirsch, S.R., Liddle, P.F., Meehan, J. and Grasby, P.M. (1997) 'A PET study of voluntary movement in schizophrenic patients experiencing passivity phenomena (delusions of alien control)', Brain, 120, pp. 1997-2011.

Thompson, H.S. (1966) Hell's Angels (New York: Random House).

Trefil, J. (1997) Are We Unique? (New York: Wiley).

Vygotsky, L. (1978) Mind in Society, edited by Cole, S. (Cambridge, Massachusetts: Harvard University Press).

Vygotsky, L. (1986) Thought and Language, edited by Kozulin, A. (Cambridge, Massachusetts: MIT Press).

Vygotsky, L. and Luria, A. (1994) 'Tool and symbol in child development', in The Vygotsky Reader, edited by van der Veer, R. and Valsiner, J. (Oxford: Basil Blackwell).

de Waal, F. (1982) Chimpanzee Politics (London: Jonathan Cape).

Walker, S. (1983) Animal Thought (London: Routledge and Kegan Paul).

Wall, P. (1999) Pain: The Science of Suffering (London: Weidenfeld and Nicolson).

Wertsch, J. (1991) Voices of the Mind (Cambridge, Massachusetts: Harvard University Press).

Whitten, W.B. and Leonard, J.M. (1981) 'Directed search through autobiographical memory', Memory and Cognition, 9, pp. 566-79.

Whorf, B. (1956) Language, Thought and Reality (Cambridge, Massachusetts: MIT Press).

Williams, G.V. and Goldman-Rakic, P.S. (1995) 'Modulation of memory fields by dopamine D1 receptors in prefrontal cortex', Nature, 376, pp. 572-5.

Wittgenstein, L. (1922) Tractatus Logico-Philosophicus (London: Routledge and Kegan Paul).

Wittgenstein, L. (1976) Philosophical Investigations (Oxford: Basil Blackwell).

Wolpert, D.M., Ghahramani, Z. and Jordan, M.I. (1995) 'An internal model for sensorimotor integration', Science, 269, pp. 1880-2.

Zivin, G. (1979) The Development of Self-Regulation Through Private Speech (New York: John Wiley).

Zohar, D. (1990) The Quantum Self (New York: William Morrow).