anticipation

The big sticking point with Benjamin Libet's results was that half a second seemed such a long time. If consciousness was the result of activity in the brain, then everyone knew it had to lag reality simply because of the time it took for signals to travel across its maze of billions of connections. But while most researchers could stomach a "barely noticeable" delay of, say, a tenth of a second, Libet's half a second posed too many uncomfortable questions - especially for the standard cognitive science view of brain processing.

Again, the answer had already been discovered by 19th Century psychologists. But the rise of sports psychology in the 1980s brought the matter into sharp focus. A tennis player or baseball batter has to contact the ball within a window measured in milliseconds and millimetres. The only thing that makes such accuracy possible is anticipation. And as a few cognitive psychologists such as Ulric Neisser and Bernard Baars realised, anticipation must lead the way into every moment of consciousness.

We begin each "perceptual cycle" with a set of plans and expectations that allow us to deal with the moment smoothly and skillfully. Consciousness does not lag. Instead it grades from strong prediction to settled resolution. But that left the question of how the brain might actually generate states of expectation. A dynamic view of the brain's pathways gave an obvious answer - and again, experiments carried out in Robert Desimone's lab held vital clues.

the reaction time puzzle

Concentrate. Keep your eye on the ball. Watch it right on to the face of the racket. How many novice tennis players have vainly struggled to follow such advice? A beginner hitting another air shot would find it all too easy to believe Benjamin Libet when he says our awareness of the world comes half a second late. Yet if Libet is right, even professional tennis players must experience the same processing gap.

Given that serves on the men's circuit are regularly banged down at 120 miles per hour, and so take less than half a second to travel the length of the court, the last time many players might have conscious-level information about the location of the ball would be while it was still in their opponent's hand!

Sports psychology is one of the few areas in the mind sciences where researchers are forced to confront the issue of mental processing times. The field has only really existed since the 1970s, paid for by the growth of professional sports. Yet the practical problem of helping athletes hit balls or quicken their reflexes has made sports psychologists ask the kinds of questions that the rest of psychology has side-stepped. Researchers have had to get inside the conscious moment to discover why some people are slow and unco-ordinated, while others—the gifted few—seem able to conjure with time.

Ask what makes someone a sporting star and the natural assumption is that the person must benefit from some basic speed advantage. They must have quicksilver reflexes or a faster-reacting eye. Or perhaps the motor control centres of their brains turn round decisions much sooner. So it has been a huge surprise for sports psychologists to find that top athletes score virtually the same as the average person in reaction time tests, tests of visual acuity, or any other raw measure of mental processing ability.

When sat at a lab bench and asked to hit a button as soon as they see a light flash, gifted baseball or tennis players might have fractionally faster reactions. They might average 200 milliseconds compared to an average of 220 milliseconds for a control group of ordinary people. But such differences are too small to explain a huge gulf in athletic ability.

And besides, sports psychologists believe the 20 millisecond advantage is probably due to other factors such as the athletes having the muscles to move their hands a bit faster. Or their competitive natures might make them concentrate harder, preventing the occasional lapses that would otherwise inflate their average over 50 or 60 trials.

In short, the evidence from reaction time tests is that the period needed to form an awareness of a sensory stimulus — or rather, as the work of Libet suggests, an early subconscious level detection — seems fairly standard for the human brain. If there is variation, it is surprisingly slight.

A few sports psychologists speculated that the apparent quickness of an athlete's reflexes might be something which only showed on the playing field. Perhaps with training, people would hone the pathways that dealt with seeing balls or dodging a lunging opponent, allowing them to cut normal sensory processing times on these particular skills. However, careful experimentation ruled out even this possibility.

 One of the best known studies was done in 1987 by Peter McLeod, a researcher then at the Applied Psychology Unit in Cambridge, England. This famous laboratory, formed during World War Two to help with pilot training and instrumentation design, has long been a bastion of psychophysics research. Following in the lab's tradition of practical experimentation, McLeod got together a group of cricket players, all internationals from the England team, and filmed them in slow motion to discover how they coped with various kinds of deliveries from a bowling machine.

As in other ball games, the dimensions of a cricket pitch are not accidental but have evolved to test a player's reactions. The distance between wickets has been dictated by the speed with which a bowler can hurl a ball, so that hitting the ball becomes difficult but not impossible. In top-class cricket, a fast bowler can send down deliveries at 90 miles per hour, meaning the ball will reach the batsman in 440 milliseconds. And even a medium-paced delivery of 60 miles per hour will still cover the distance between the bowler's hand and the batsman's crease in just 660 milliseconds.

However, the time constraints imposed on a batsman are actually much tighter than these figures suggest. The raised seam of a cricket ball means it can kick sideways as it bounces off the ground in front of a player. Cricket balls may also develop a late swing just before impact. By roughing one face of the ball to increase drag and then angling the seam against the angle of the flight, a bowler can make the ball bend just as it begins to slow in the air. The result is that batsmen will often find themselves having to make hurried adjustments to a ball doing strange things just a few feet away from them.

The batsman's choice of shot also brings its own time constraints. The safest shot to hit is straight down the line of an incoming ball. As long as the delivery does not move too far off the pitch, it should eventually run smack into the face of the bat. But more attacking shots must be made with a hook or a cut across the ball's flight. In these cases, even a very slow delivery of 45 miles per hour will give a batsman just a four-millisecond margin of error for getting his bat into the right place at the right moment. A few thousandths of a second early or late with the swing and he will find himself swiping at thin air.
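As a rough check on these timing figures, the arithmetic can be put in a few lines of Python. This is only a back-of-envelope sketch: the 17.7-metre release distance and the use of the ball's own diameter as the cross-bat window are illustrative assumptions, not values taken from McLeod's study.

    # Rough check of the flight times and margins quoted above.
    MPH_TO_MS = 0.44704          # miles per hour to metres per second
    RELEASE_DISTANCE_M = 17.7    # assumed bowler-to-batsman distance
    BALL_DIAMETER_M = 0.072      # a cricket ball is roughly 7.2 cm across

    def flight_time_ms(speed_mph):
        """Time for the ball to travel from the bowler's hand to the batsman."""
        return RELEASE_DISTANCE_M / (speed_mph * MPH_TO_MS) * 1000

    def cross_bat_margin_ms(speed_mph):
        """Window for a cross-bat swing: roughly the time the ball takes
        to cross its own diameter."""
        return BALL_DIAMETER_M / (speed_mph * MPH_TO_MS) * 1000

    for mph in (90, 60, 45):
        print(f"{mph} mph: flight {flight_time_ms(mph):.0f} ms, "
              f"cross-bat margin {cross_bat_margin_ms(mph):.1f} ms")

A 90 mile per hour delivery comes out at about 440 milliseconds of flight, and the cross-bat window at 45 miles per hour at between three and four milliseconds, in line with the figures quoted above.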

To discover how cricketers manage to fend off hostile bowling often for hours at a time, McLeod scrutinised the slow motion footage of his group for clues. First he found that the batsmen began their strokes in a highly stereotyped way. Each kind of shot, of course, required a somewhat different preparation in terms of placement of the feet or backlift of the bat. But once a player had decided on a particular stroke, such as a hook, the body would turn and the bat would be taken back to exactly the same position as if the action ran along a fixed groove. There was a metronomic precision to the preparation that contrasted greatly with that of even a group of reasonable, club-standard, players.

Next McLeod looked at how the top batsmen dealt with the unpredictability of the actual delivery. Using a ball machine, McLeod fixed the speed and angle of each delivery. Then, to make the bounce unpredictable, he laid a bumpy bit of matting just in front of the batsmen, ensuring the ball would take the occasional wicked kick. To hit such a ball, a player would have to make a hasty, mid-course, adjustment in the trajectory of his swing.

Poring over the record of hundreds of strokes, McLeod found that the batsmen never reacted immediately to the changing flight of the ball. Instead, their swings would continue going straight down the original line for 200 milliseconds before suddenly they veered sideways to make the correction necessary to intercept the ball on its new path. What was surprising was not just how late, but also how sharply, this adjustment was made.

McLeod had thought there might be a smoother change with the players slowly bringing their bats round as they watched the ball start to kick away. But instead the batsmen leapt straight from one path to another, as if they were jumping tracks having just completed a lengthy set of re-calculations.

McLeod also found it remarkable how the 200 millisecond lag in reaction time was such a constant figure for all the batsmen involved:

"It didn't matter whether the correction was big or a small, all the times bunched at around 200 milliseconds. I never saw a hint of any change in less than 190 milliseconds. I videoed some real cricket matches off the TV and checked the pictures with a ruler to check it wasn't just something to do with the set-up I was using in the laboratory. The times were just the same. If the ball did something nasty less than 200 milliseconds away from a batsman, he would act as if he never saw it."

the habit of prediction

McLeod's results are graphic evidence that brain processing lags are real. Libet had suggested that full consciousness needs about half a second to develop. But McLeod's experiment — and the many hundreds of other sports studies like it — demonstrate that even rapid preconscious processing takes up an unexpected length of time. The very quickest reaction of which a human is capable is to the crack of a starter's pistol that begins a race. The bang of a gun is a simple event to process. It is not like judging the changing flight of a cricket ball in which there must be at least some time taken up in seeing the deviation begin to happen. And crouched in the blocks, a sprinter's response is already planned. There is no need to spend time calculating trajectories.

Yet despite such a low processing overhead, it still takes more than a tenth of a second for a sprinter to respond. Indeed, this time is so in-built that in international competition, pressure sensors in the foot blocks are used to rule any movement in under 120 milliseconds an automatic false start. Brain processing has an absolute limit and this has become a fact enshrined in the rules of modern sport.

So sports psychologists were faced with the problem that the emergence of awareness was an incompressible factor. You could not be better by being faster mentally. Worse still, the brain did not react to anything in less than a tenth of a second. So regardless of what you believed about Libet's half second claim for full consciousness, there was a puzzle over how the brain managed to deal with the last few instants of a ball's flight or a late lunge by an opponent, in any game. Where could the advantage of a top class athlete lie?

Part of the answer is that the physically gifted must have special motor skills. When an international-standard cricket player sees a delivery turn, or a tennis professional is faced with a ball skidding low off the Wimbledon grass, something about their balance and co-ordination allows them to organise a tidier, more fluid, response than the average person. There is an economy and a precision that buys them time.

Electrical recordings from the muscles of top athletes have shown this to be literally true. Electrodes were taped to the arms of both novice and expert players in several sports to record the EMG crackle of the messages being sent to their muscle fibres. It was found that whereas the novices produced a barrage of nerve signals, straining every muscle to bring about a movement and often causing opposing muscle groups to wrestle against each other, the limbs of the experts moved almost in silence. Their brains appeared to know exactly which muscles to pull to get the job done, making their movement silkily sure.

To an extent, such economy always comes with training. The brain can learn efficiency. But sports psychologists believe that some people are lucky and have brains that are better at honing down the motor template needed to execute an action. During the moment, their brains might not work any faster, but the gifted are more receptive to the practice that will eventually allow them to work smarter.

Yet there had to be more to the story of how people manage to execute skilled acts when their brains lag behind the moment. And as sports psychologists pored over their slow motion replays, checking to see when good players first started preparing for a stroke or moment of action, they soon realised that anticipation must hold the key. The brains of the best were making earlier and more accurate predictions about what was about to happen and it was this that was carrying them through the moment, allowing them to behave as if they could feel the ball right on their rackets when making a feathery drop shot, or to take a delicate, last second, decision to glance away a turning cricket delivery.

An easy example to study was the return of serve in tennis. Facing a fast serve, players have barely 400 milliseconds in which to see whether the ball is headed for their forehand or backhand and then to make any late adjustments for unexpected skids or jumps of the ball off the court surface. Given that simply turning the shoulders and lifting the racket back occupies a third of a second, and that it takes about half a second to reach wide for a ball, anticipation has to have a role. Even if awareness were actually instant, it still would not be fast enough to get a player across the court in time.
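The time budget makes the point starkly. The sketch below simply adds up the round figures quoted above (all of them illustrative), and grants the returner instant awareness for the sake of argument.

    # Time budget for returning a fast serve, using the round numbers above.
    serve_flight_ms = 400     # ball time from the server's racket to the receiver
    awareness_lag_ms = 0      # grant the returner instant awareness
    backswing_ms = 333        # turning the shoulders and taking the racket back
    reach_wide_ms = 500       # time needed to move across for a wide ball

    time_left = serve_flight_ms - awareness_lag_ms - backswing_ms
    print(f"left after the backswing: {time_left} ms")
    print(f"shortfall against a wide serve: {reach_wide_ms - time_left} ms")

Even with a zero processing lag, barely 70 milliseconds remain once the backswing is under way, nowhere near the half second needed to reach a wide ball. The movement has to begin before the serve is struck, which is exactly what anticipation provides.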

Tests were carried out in which novice and professional players were shown film clips of a person serving. The film was stopped at different stages of the server's action and the subjects were then asked to guess whether the ball was going to land on their forehand, backhand, or smack down the middle. Neither the novices nor the experts had any trouble predicting where the ball would go when shown just 120 milliseconds of its flight. This showed that they could all anticipate. They did not have to see the ball land.

But the significant finding was that the professionals were able to guess the direction of a serve with fair accuracy even if the film was halted 40 milliseconds before the ball was struck. The seasoned players were gleaning hints from the way the server was shaping up during the ball toss and not having to wait to sample the actual flight of the ball. 

 Dozens of other such experiments have since confirmed that sports players buy time by learning to read the body language of their opponents. Bruce Abernethy, a sports psychologist at the University of Queensland in Australia, has shown that top badminton players can tell a lot from seeing an opponent's chest and shoulders begin to move a full 170 milliseconds before the shuttlecock is struck. Likewise, Abernethy filmed cricket batsmen and found that they were stepping forward in anticipation of a short-pitched delivery some 100 milliseconds before the bowler released the ball.

Another key point about this habit of prediction was that it was never an all-or-nothing affair. It was not tied to one particular moment in an opponent's ball toss or wind-up, but instead took the form of a dynamically narrowing cone of probability. Each player began with broad expectations, usually dictated by their knowledge of the capabilities of their opponents or thoughts about what their opponents might need to achieve due to the state of the game. Then watching their opponents shape up would start to give them general hints about how to prepare—perhaps enough for a cricketer to decide whether to step on to the front or back foot, or a tennis player to begin swivelling left or right.

But the guessing games never stopped. Tests showed that seeing the first 100 milliseconds, then the second 100 milliseconds, of the ball's flight would lead to a steadily more accurate idea of what to expect. The skilled players were refining their state of expectancy right until about 200 milliseconds before contact, by which time, as McLeod's experiment showed, the brain could no longer physically react. If something happened to a ball that late, even the most accomplished player would swing and miss.

Frustratingly for the sports psychologists, who obviously wanted to be able to teach the secrets of good anticipation, none of the top players could explain what it was they were actually looking at to get their clues. When questioned, they said they did not feel they were watching anything in particular. Indeed, most said they had not even been aware they were making guesses ahead of time. They believed they had simply been concentrating hard and making sure they watched the ball right on to their bat or racket, so were conscious of the shots pretty much as they happened.

great expectations

Libet's half second results say that awareness is smeared out after the event. First there is a preconscious phase of processing and then some kind of conscious level resolution. But anticipation stretches our ideas about brain processing in the opposite direction. It says that predictions ease our passage into the moment. In some sense, we are conscious ahead of time. We do not notice the large gap in our awareness because our brains move seamlessly from a state of intelligent forecast to a state of confirmed sensory expectations.

Plainly, the habit of prediction is not something reserved just for special case situations like hitting a tennis ball. Every moment is processed within some prior context — a framework of hopes and fears, intentions and expectations, memories and goals. These form the backdrop against which the events of the moment will be judged. And the more we get right about the coming moment, the less work there will be to do during it. If the thinking has already been done, most of our actions can be carried out on automatic pilot, leaving the brain to focus its attention on whatever turns out to be truly surprising, novel or significant about an instant.

A little introspection makes it clear that even our most trivial activities come freighted with a dynamically tapering cone of anticipations — some of which may be consciously explicit, but many of which are either dimly conscious or apparently even unconscious and implicit. For example, when we reach out for the gleaming brass handle of a door, our brain will not only be predicting the instant of contact and the correct angle at which to hold our hand, but it will also be second-guessing how the handle should actually feel as we touch it. It will be predicting the sensory parts of the experience.

The fact that we were riding such a wave of predictions would soon be brought home to us if we were to reach out and discover that the handle was made of something sticky or mushy. At some subconscious level, we would already have formed the expectation of touching cold, unyielding metal. Indeed, if someone on the other side happened to snatch open the door at the moment our fingers were about to close on the handle, we might even catch a ghostly impression of what we were just about to feel with our hands. We would experience the fleeting edge of our own sensory forecast.

It is almost impossible to imagine a moment without a context. There is always something about what has just happened that predicts what is likely to happen — or not happen — next. Even sitting in our homes, loafing in a comfy chair and apparently not thinking about or doing anything in particular, we would still be deeply embedded in a set of expectations. There would be a mental backdrop that told us what kind of events were most probable.

So we might be half-expecting our partner or cat to stroll through the door, but not our boss from work or an armadillo. Likewise, we might half-expect the phone to ring or a breeze to flutter the curtains, but not the walls to change colour or the carpet suddenly to start making snide personal remarks about us. We would feel orientated to our surroundings by a carefully graded sense of possibility.

Of course, life can catch us out. The unpredictable does sometimes occur. As we lounge in our chair, an armadillo might wander through the door, or more plausibly, a burglar. Or perhaps something which has become familiar might stop. The neighbours may turn down a droning radio, or the dull hum of our fridge could cut out.

At such moments, the mismatch between our state of expectation and the turn of events will often cause a baffled double-take. We will find ourselves floundering for an instant, struggling to reorientate ourselves to the new situation — needing perhaps as long as Libet's half second to take events in and get back on track. Yet the very fact that we can feel caught out simply confirms we must have had a set of expectations in the first place. Surprises have to have something to contradict.

The question then is how the brain generates a state of anticipation. From a computational point of view, anticipation looks a very difficult ability to explain. A computer works by fetching data and then executing instructions. It is a step-by-step style of processing in which something happens first — the data arrives — and then the system sets about trying to make sense of it.

Of course, a clever designer could always program a computer to make predictions and mobilise data in advance of the next cycle of activity. But any expectations would have to be spelt out in concrete fashion — each element would have to be mobilised individually and so it would take a lot of effort to generate predictions of much detail. The almost instant whipping up of a flexible, open-ended yet constantly hardening, state of readiness is not something that would seem to come naturally to a computer. The clunkiness of their processing style suggests that no matter what the number-crunching power of their circuitry, life would always remain a succession of surprises to them.

The strict fetch/execute logic of computers meant that cognitive science never quite got to grips with the idea of anticipation. Of course, there were honourable exceptions such as Bernard Baars who made context the cornerstone of his global workspace theory. And more particularly, Ulric Neisser of Cornell University in New York, who was actually one of the founding fathers of cognitive psychology, was always very clear about the fact that the brain goes into each moment fully primed.

In his 1976 classic, Cognition and Reality, Neisser argued at great length that perception was the result of a cycle of processing in which anticipations blur into confirmed sensation, so bridging any processing gap and also making the whole business of representation more efficient.

But it was hard for such insights to shape a generation of researchers brought up to think about the mind as a collection of modules and functions. Most of the cognitive scientists touching on anticipation tended to treat it as either some tacked-on feature — an ability wheeled out to deal with special cases like hitting tennis balls — or else described it using rather abstract terms, such as perceptual schemas or mental dispositions, which disguised the fact that anticipation was something that existed in time. A schema or a disposition was something that sounded as if it could be fitted to the data at any point following its arrival. There was no implication that it was a state of information that the brain must rouse ahead of each coming moment.

With the move to a more dynamic, evolving, model of the brain, however, anticipation immediately becomes much easier to understand. Rather than being an extra feature that must be somehow laboriously welded on to the processing of a moment, the generation of expectations begins to look inevitable. It is something that a dynamically-constructed brain would do for free.

down at the neural level

The best evidence of what might be going on to produce a state of expectation at a neural coding level again comes out of the laboratory of Robert Desimone at the US National Institute of Mental Health. Desimone's recordings from colour-coding cells in V4 showed how states of attention — or perhaps more accurately, states of intention — could tailor the firing response of an individual neuron. Desimone and his team then went on to explore the mechanisms of these effects in more detail. And one experiment in particular, reported in Nature in 1993, seemed to offer a lot of clues about the production of anticipations.

In this experiment, Desimone's team recorded from cells in the inferotemporal (IT) cortex of a monkey while it waited for a target image to appear on a screen. Desimone already knew that the IT area had maps coding for the sight of complex visual objects. It was the place where researchers in the 1970s found that waving a hand would produce a response from an anaesthetised monkey. More careful experiments had since proved that while some IT neurons were tuned to react to highly specific phenomena like hands and faces, most coded in a more general way. They did not code for particular experiences like grandmothers, but represented the perceptual elements—the assortment of shapes and textures — that might be needed to paint a population vote of a grandmother's face.

The most painstaking research had been carried out by Keiji Tanaka at the RIKEN Institute in Japan. Tanaka's method was to show an anaesthetised monkey a photograph of some natural object, such as a tiger's head, which would get a lot of cells firing, and then progressively simplify the picture to find the simplest combination of features that would still make some chosen IT cell respond. So with one cell, for example, the tiger's head was reduced to just a white square with two small black rectangles roughly where its ears would be. The neuron appeared tuned to representing this precise conjunction of features because when the square or rectangles were shown alone, there was no response.

With enormous patience, Tanaka followed this procedure for a great many cells and found they combined to represent a whole spectrum of visual primitives. There were cells that fired to T-shapes, stars, pairs of touching balls, and of course other common fragments of experience like hand shapes and face shapes. Other cells seemed to specialise in coding for surface textures such as hairiness or smoothness. The IT neurons were also topographically arranged so that neighbouring cells had the same basic object preference, but with a slight shift in orientation, size or proportion.

For instance, within a group of cells responsive to star-shaped patterns, some would fire at a peak rate to fat or many-armed stars, while others might prefer skinny or sparsely-armed stars.

So the IT area seemed perfectly set up for population voting. Any kind of visual conjunction could be represented by a blend of firing. And Tanaka even showed that this grid of representation was adaptive — experience could produce long-term changes in the tuning of a cell.

In an experiment reminiscent of Merzenich's finger stimulation studies, Tanaka spent a year training a monkey to pay special attention to 28 target shapes. When he tested the monkey at the end of this time, he found that many more of its IT neurons now reacted to the shapes. The cells had shifted their tuning curves so as better to fulfil the demands of what had become a frequent sensory task.

Tanaka's work revealed a lot about the representational logic of the IT cortex — the principles behind its organisation. But Desimone wanted to discover what happened to such cells when they were called into action and were helping form part of a real state of consciousness. So an awake monkey was given the task of seeking and finding a series of pictures. In the experiment, a trial would begin with the display of a target — which might be a drawing of a small sailing boat. This would disappear and then, just three seconds later, reappear along with a second picture of perhaps something like a man's face.

The monkey was supposed to make a choice and signal recognition of the original image by flicking its eyes to look straight at it — the usual apparatus of eye-position coils picking up the direction of its gaze.

The design of the task meant that at a global level, the monkey had to do a number of things. It had to note and remember a target picture. Then it had to find it again and focus on it to the exclusion of all other stimuli a few seconds later. The resulting activity of the IT neurons displayed several interesting features.

The first significant finding was what happened to cells that were not involved in coding for the target, but instead coded for the ignored picture. A cell tuned to the sight of a face would burst into life every time a face appeared alongside whatever happened to be the target picture for the trial. The cell would fire at the rate of some 20 spikes a second to tell the brain what it was seeing. But then suddenly — within the space of 200 milliseconds — the firing of the neuron would be suppressed. Its firing would fall back to only about six or seven spikes a second.

This was, of course, a repeat of Desimone's original V4 finding. Any cell coding for a potentially distracting sight would be hushed up and physically pushed into the background of awareness. The brain appeared to create a consciously-focused state of representation by turning up the volume on what it wanted to hear and turning down the volume of any surrounding activity that might interfere. But the new point was that it took a little time for this attention effect to show itself. The face-coding cell fired brightly enough for nearly 200 milliseconds, signalling the presence of the distraction, and only later became damped by the more global needs of the brain.

In fact the same slight delay had been present in Desimone's V4 results; he just had not mentioned it at the time. The colour-coding cells had begun by firing brightly and then switched to a tuned response only after 100 to 200 milliseconds. However, it was a crucial discovery because it said that the brain began with a raw response to the moment. Everything started with the chance of being represented. A tracery of mapping would course its way up the sensory hierarchy, so laying the foundation for at least a peripheral or preconscious level of awareness. It was only after a phase of basic sensory integration that the more global cross-currents of feedback and competition began to flow and have their effect on a cell. States of focus had to evolve.

For an explanation of anticipation, however, what was more interesting was the behaviour of cells actually coding for the target of a trial. As might be expected, on first seeing the target presented alone, and again when the monkey had to distinguish it from a distractor, these cells fired at maximum strength. And tellingly — given Libet's half second claims — the presumably consciousness-producing firing always persisted for at least 500 milliseconds, even when it meant that a neuron was still going well after the picture had already been switched off!

But what was more important from the point of view of anticipation was how the target-coding cells behaved during the short wait between the two exposures. Desimone found that their firing rate dropped, but they never actually went quiet. The cells kept up a chatter of six or seven spikes a second, as if coding for a state of memory or expectation. There was a template of activity, a gentle warming of the pathways in the IT area, which would match the coming experience.

By itself, Desimone's experiment did not actually prove anything. Recordings from solitary cells could only hint at how the IT neurons were interacting with each other — or indeed, with the rest of the brain's processing hierarchy — in representing a state of information. And simple firing rates were not everything anyway. The relative timing of each spike was likely to play a role as well. Yet the preservation of a slightly raised state of firing did make sense if the brain was seen as a dynamic system.

The computer model suggests that the brain is an inert lump of circuits awaiting input. Sensation is fed in one end and a hierarchy of mapping cranks out a state of consciousness at the other. But the dynamic view paints a very different picture. It says that the processing structure of the brain only exists in the first place because it has achieved a prevailing balance of tensions. Continual feedback pressure is needed to shore up everything from the transmission properties of an individual synapse to the mapping properties of a patch of cortex surface. So unlike a computer, things are going on even when the brain appears to be doing nothing. A quiet brain is still having to produce a state of tone. It has to give new input a surface to disturb.

This brings up yet another uncomfortable feature of neurons which neuroscientists usually try to skirt around. They are in fact always firing. Every one of the billions of cells in our head is popping off at least one or two stray spikes each second. When developing theories about neural coding mechanisms, the temptation has been to dismiss this constant background rustle of activity as meaningless noise.  The message was believed to lie in the bright or synchronised firing and the odd pop of a neuron was seen as just the inevitable consequence of trying to do computing with sloppy biological components.

As watery bags of ions, cells could not help but leak a little current when not in use. This was no great problem because it was easy to imagine that the brain's coding mechanisms would include some sort of threshold setting to make sure that this idle tick-over firing was screened out when it came time to count an area's final population vote.

Yet as dynamicists like Karl Friston were beginning to realise, this background firing might not be so random after all. If brain cells are woven into a network of feedback relationships — connections that gave their own firing meaning — then the popping off of a neuron would not be noise but an expression of an underlying state of organisation. Cells would be triggering each other with skitters of activity in a way that reflected their connections. Or as Friston put it, an area of mapping would be cycling in an attractor state, some general balance of tensions.

This simple fact causes a 180 degree switch in perspective. A computer represents a state of nothing doing by doing nothing—by having silent circuits. The arrival of input then forces it to go from a nothing to a something. But the brain works the other way round, starting with a state of firing that in some vague fashion represents everything ever experienced by an area of circuitry, then tilting towards some specific state of firing. It goes from a defocused representation of all it knows to a focused response to new input. A cloud of everything condenses to become a something.

What this means is that even in a state of rest, at its most defocused, the brain is in some way prepared. Its circuits stand poised to be tipped into a more definite reaction. At a subjective level, we might experience this state of tone as a sense of readiness or potential. It is notable that when we shut our eyes, we see not blackness — an absence of information — but instead a shimmer of shape and colour. The rustling of our visual areas appears visible. They are already halfway to going somewhere. And because this state of tone is an active construction — a feedback alliance — it would be easy to begin tilting the balance of firing in a certain direction.

By lifting the background activity of a group of cells slightly — say neurons coding for the experience of seeing a picture of a boat — a bias could be set in place. Then when the time came to run the competition, to discover what was actually in the moment, this slight edge of priming would nudge activity in the chosen direction. Through the power of feedback to amplify small differences, a slightly higher tick-over firing rate in a group of boat-coding cells would be enough to ensure that the claims of rival object-coding cells were drowned out as soon as an area like IT was driven into a mapping response. A monkey would find the sought-for target being thrust into view.
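The logic of that last step can be caricatured in a few lines of code. What follows is a toy sketch of this kind of biased competition, assuming two pools of cells that receive the same sensory drive and inhibit each other; all the rates and coefficients are invented for illustration and none of them come from Desimone's recordings.

    # Two pools of cells, "boat" and "face", compete once sensory input arrives.
    # A slightly raised tick-over rate in the boat pool before the input is the
    # anticipatory priming; feedback then amplifies that small head start.
    def run_competition(boat_rate, face_rate, steps=60, dt=0.2):
        drive = 20.0                             # identical input to both pools
        for _ in range(steps):
            # each pool is excited by the input and its own firing,
            # suppressed by its rival, and pulled back towards rest
            d_boat = drive + 0.6 * boat_rate - 0.9 * face_rate - boat_rate
            d_face = drive + 0.6 * face_rate - 0.9 * boat_rate - face_rate
            boat_rate = max(boat_rate + dt * d_boat, 0.0)
            face_rate = max(face_rate + dt * d_face, 0.0)
        return round(boat_rate, 1), round(face_rate, 1)

    print(run_competition(boat_rate=2.0, face_rate=2.0))  # equal baselines: a stand-off
    print(run_competition(boat_rate=3.0, face_rate=2.0))  # primed boat pool wins outright

With equal baselines the two pools settle into a deadlock; give the boat pool one extra spike a second of idle firing and the mutual inhibition tips the whole competition its way, pushing the rival pool towards silence.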

With a computer, an active decision would have to be made about what kind of anticipatory bias to load into its circuits. But the brain already has all its information loaded. It is merely a question of how much to push the focus towards some specific experience. And there would be huge flexibility in creating a state of priming. The brain could rouse a wide area of circuitry to create a general readiness — a gentle pre-warming of all the pathways most likely to be involved in the coming bout of processing. Or it could raise the firing profile of a select group of cells to catch some more particular event.

But the real beauty of the system is that a state of anticipation would not prevent the brain fixing on something else. If someone had slammed the lab door while the monkeys were doing the experiment, the stimulus would be powerful enough to override the sight of the target.

The dynamic model says that an anticipation merely produces a fleeting tightening of the brain's processing landscape, perhaps deepening the basin of attraction for things like pictures of boats while raising the threshold for other experiences, such as pictures of faces. So a pattern of activity will fall more easily into a certain groove, but only if it was passing that way already. The brain always remains free to head in other directions if the moment does not work out quite as planned.

top-down logic

If Desimone's work gave some clues about how the brain represents a state of expectation or intention, there was still the bigger question of how the brain generates such a state. Where does the brain get its ideas about exactly what to anticipate? A dynamic view of brain processing gives a very simple answer. It says that predictions would flow quite automatically from whatever has just been the brain's last point of focus.

Anticipations are there to get us into the moment, to allow us to deal with a flood of sensation with great efficiency. Every instant comes packed with a vast amount of detail. Desimone's monkeys faced not just the sight of two pictures. Their senses were being assaulted by all the sights, sounds, smells and feels that go with being in a laboratory cage, watching a computer display. But if much of the thinking and experiencing has in some sense been done in advance, then most new input will slot straight into place. A ripple of adjustment may still have to be evolved, but it can take place locally and preconsciously. There will be no need to call on the global resources of the brain for a deeper, more considered reaction.

As Baars argued, attention would be reserved for whatever part of the moment could not be dealt with quickly and cheaply at a local level. Or alternatively, because the brain had set out to catch the event when it eventually happened.  Escalation into focal consciousness would occur either because something was particularly expected or unexpected, significant or surprising.

So the brain would arrive at a focus. Anticipation would act as a filter to screen events and leave some aspect of the moment standing proud. The brain would be left with a certain area of its memory and sensory pathways feeling sharply stimulated by what has just happened. Then being sharply stimulated, these areas would begin to rouse further thoughts and associations — surrounding areas of memory — that would quite naturally warn the brain about what might happen next. A growing state of anticipation would also be what we got out of the moment.

Take an example like the simple act of walking into our house one day after work and hearing our grandmother's voice coming from the living room. The cycle of processing would begin with an act of recognition. The brain's mapping hierarchy would pick up the pattern of noise trapped by the ear and draw it up through a stack of filtering to produce an organised state of representation.

At the bottom of the stack, on A1, the primary auditory cortex, there would be a tonotopic map of a set of frequency densities. Then as this information was pushed through further layers of mapping and voting, it would begin to hit high level areas where it would become identified as the sound of a voice — and a particular, known, voice at that. In the auditory equivalent of IT, there would be a population vote suggesting that there was a very high probability we were hearing our grandmother speak.

Of course, the dynamic model says there is rather more to establishing a meaning-imbued pyramid of mapping. Signals do not just flow up through the mapping hierarchy — a one way, bottom-up, traffic in information with the peak level areas somehow ending up doing all the experiencing. Instead, the many levels of mapping grow into a focused state of representation in concert. They evolve together through the reinforcing effects of feedback.

Raw sensation may arrive in the lower mapping areas to start the ball rolling. But their first response would be a little ragged and untuned. It would need the high level areas to begin voting for grandmothers for their activity to become confirmed. The dawning recognition at the top would feed down the chain to sharpen and strengthen the pattern of activity at the bottom. From memories of our grandmother's voice, we would be better able to separate the sound of her words from any blurring background noise, or bring out certain characteristics, such as a slight croakiness in her speech.

Hand in hand, over the course of a tenth of a second or so, the whole hierarchy would move from a rough preliminary network of voting to a crisp, stable, and highly interpreted state of representation. Our mental experience will be based on an inseparable mix of what our ears heard and what we felt they ought to have heard.

At the same moment, our brain would be mapping many other sensations. But something about the unpredicted nature of hearing our grandmother's voice would be enough to ensure its escalation into the spotlight of attention. Its representation would be kept burning as other aspects of the moment faded or, as Desimone's work suggested, became actively suppressed. Then with this lingering firing and a cleared deck would start to come a spreading stain of associations. There would be time for the flickers of feedback to rouse the various areas of circuitry to which our grandmother thoughts were connected, so bringing the right kind of new thoughts to the surface.

With population voting, the seeds of these ideas would already be present in the original response. The vote would stir a range of high level cells, some of which might be considered 100 percent grandmother neurons, firing flat out to the sound of her voice and little else. But many others might have fired at only 60 or 30 percent, being tuned more to experiences such as the sound of our grandfather's voice, the voices of other close family members, or even just the sound of an elderly voice generally. So in rousing enough cells to get a decent bearing on our grandmother, we could not help but fish up the faint corner of thousands of connected experiences.
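The way a graded vote cannot help dragging its neighbours along with it can be put in a toy example. The tuning weights, cell names and experiences below are invented for illustration, loosely echoing the 100, 60 and 30 percent figures above.

    # Each hypothetical cell is tuned, to different degrees, to several experiences.
    cell_tuning = {
        "cell_A": {"grandmother's voice": 1.0},
        "cell_B": {"grandmother's voice": 0.6, "grandfather's voice": 0.7},
        "cell_C": {"grandmother's voice": 0.3, "elderly voice in general": 0.9},
    }

    # Hearing grandmother drives each cell in proportion to its grandmother tuning.
    firing = {cell: tuning.get("grandmother's voice", 0.0)
              for cell, tuning in cell_tuning.items()}

    # Every experience an active cell is tuned to picks up a share of that activity.
    associations = {}
    for cell, rate in firing.items():
        for experience, weight in cell_tuning[cell].items():
            associations[experience] = associations.get(experience, 0.0) + rate * weight

    for experience, strength in sorted(associations.items(), key=lambda kv: -kv[1]):
        print(f"{experience}: {strength:.2f}")

The grandmother representation comes out well on top, but the grandfather and general elderly-voice representations are stirred as a faint halo around it, which is all that the spreading of associations described above needs to get started.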

For an instant at least, nothing much would come of these links. The state of representation would be drawn up too tightly around the sharp stab of the experience. But as the brain began to relax again, the excitement of the still firing cells could seep out to create an inflamed halo of associations around the original act of mapping. Not only would there be a broader arousal of our sound representing pathways, but the activity would cross over to stir areas in the other sensory modalities. The sound of our grandmother might rouse neurons that coded for grandmother-related sights, touches, and even smells and feelings — anything that we strongly connected with her.

We might only have heard her voice while walking down a corridor, yet already some of the key cells needed to process the sight of her face or catch the waft of her usual perfume will have been alerted. Given that we now would also be beginning to think about the room in which she must be sitting, the stage would be set for generating a whole range of predictions.

If the spreading ripples of activity remained confined to the uppermost levels of sensory mapping, then it is hard to say what kind of feelings might be engendered — perhaps not much more than a sense of preparedness to have certain kinds of experience, a vague and contentless foreboding. But the fact that the cortex is a feedback-based system, with more paths returning back down its hierarchy than heading up, means that the grand stack of mapping can be turned on its head. Sensations might work their way in from the bottom. But there is no reason why a jangling of neurons at the top should not cause a cascade of mapping in the opposite direction.

The logic would go into reverse with sensory detail being added, rather than extracted, as the wave of activity ran back through the various mid level filters and low level maps. If pushed all the way down to the primary sensory surfaces, a high level inkling ought eventually to become fleshed out as a fully fledged sensory experience — a vivid mind's eye feeling of almost witnessing the real thing.

So, on hearing our grandmother's voice, almost immediately — within half a second, anyway — we might find a specific image flashing before us.  An outward and downward rush of association might produce the fleeting impression of seeing her bent forward in an armchair, her face turned towards us with the usual wry smile as we open the door. Drawing on all that was most characteristic of our grandmother — and also on our knowledge about the look of our living room — our brains might whip up a complete synthetic experience that would slot fairly seamlessly into our actual experience a moment later.

Clearly, the more unvarying an experience has proved to be in the past, the more accurate will be our predictions. For example, the sensations that are part of everyday actions such as reaching for a door handle, or changing gears in a car, can be anticipated in total detail. Indeed, our own movements will cause many of the feelings we experience, and research has shown that the motor parts of the brain transmit their intention to move to the sensory areas a fraction ahead of time to give them advance warning.

But after changing gears or opening doors many thousands of times in our lives, the likely sensations will be very familiar anyway. In such cases, the cascade of priming activity back down through the processing hierarchy would form a cone that ran narrow and deep. Our memory banks would cast a sharp shadow across the primary sensory areas a split second before our hands came into contact with the door knob or gear stick.

But often the situation will be more open-ended. We will not really know what to expect, so any cascade of pathway rousing activity would have to be shallower, more diffuse. For instance, if instead of our grandmother, we had only heard the voice of some unknown elderly person on entering the house, then this would have generated a much more general state of anticipation in our minds. We would have recognised the oldness in the voice and so become primed for the sight of wrinkles and grey hair. But we would be much less likely to experience one particular anticipatory image.

Yet the point is that sharp or general, the generation of a state of anticipation would be automatic. The brain would isolate the most important event of one moment, then simply by lingering on its representation a fraction longer, it would begin to create a glowing halo of priming that prepared it for the next.

So dynamics brings the hierarchical organisation of the brain alive. The same neural machinery can be as quick to generate states of information as it is to extract them. Yet there are still some puzzles to be answered.

So far we seem to have been talking about a conscious level development of an anticipatory state. We become focally aware of some significant fact — such as the sound of our grandmother's voice — and then start to experience conscious-level expectations. Yet sports psychology studies seem to suggest that a lot of anticipation is done at a preconscious level. When players are asked what they look for in an opponent's ball toss or run-up, they cannot reply. They are certainly aware of a few things at a conscious level — such as the fact they are playing a game of tennis or cricket and that they need to concentrate. But even this urging themselves to concentrate amounts to no more than an attempt to keep their mind clear — to rid it of the kind of specific, consciously-experienced, thoughts that only seem to get in the way of a quick reaction.

It appears that automatic or reflex actions also have their own unthinking level of anticipation. And once more, this is not just a feature of playing sports. Even getting down a corridor to open a door involves a lot of subconscious skill and so subconscious predictions. At some level, our brains would have to be churning out a stream of anticipations to prepare our feet for accurate contact with the carpet or our hand for gripping the door handle.

However, as with Libet's freewill experiment, it is important to remember that none of this spontaneous activity — either the motor planning that picks up our feet or the sensory anticipations that guide their fall — can occur without some kind of prevailing context in place. Taking a footstep or reaching for a door handle may seem like acts unconnected with any thoughts we might be having about the conscious experience of hearing our grandmother.  Yet they are fragments of processing that only exist because of our greater goal of getting ourselves into the living room. 

In other words, implicit in the fact that our minds are prepared to jump straight to an expectation of seeing our grandmother is the belief that shortly we will manage to find our way into her presence. Our minds could detect no reason to expect any intervening obstacles and so the predicted impression of our grandmother becomes our prevailing context—the guiding image which will shape our actions for at least the next few seconds—and the business of walking and opening doors then organises itself to fit.

Since these are well-practised and easily anticipated skills, there would be little need for us to bother with the details. The brain would deal with them at a quick, preconscious level, without requiring the kind of escalation, focal sharpening and prolonged exploration of possibilities that would make for a conscious state. It would only be when something went wrong — if the door turned out to be locked or a ruck in the carpet tripped our feet — that we would be forced to retarget our attention.

mental images

By now it should be clear that anticipation is as much about the control of motor output as it is a preparation to deal with sensations. Plans and intentions are really just another way of looking at the generation of an expectation — an expectation about what we will do rather than what the world is going to do.

And there is even one further riddle that is solved by an understanding of anticipation — that of mental imagery. A mental image is simply a state of expectation that does not get matched to an actual sensation. We go through the first half of the perceptual cycle, getting ourselves mentally ready to see or feel something, but that something then never turns up, leaving us with the ghostly glow of our own sensory priming.

This explanation makes sense of the often tantalising nature of our mental images. As has been seen, psychologists have had great trouble getting to grips with imagery. It took the evidence of a PET study even to persuade cognitive psychologists that images probably use the same topographical pathways as ordinary perceptions. Many thought that as a high level mental process, imagery should have its own brain areas and possibly even its own abstract neural code.

Kosslyn was able to settle this argument by demonstrating that the act of imagining a letter generated a network of activity that ran all the way down to V1. But this still left most with the rather clunky, computational, notion that imagery was a form of memory trace replay. If, for instance, we wanted to imagine a grey rhinoceros, then what our brains would do was dredge up a rhino outline from one memory file, take a splash of dusty grey from another, load both memory traces into a high level buffer and finally project the resulting picture across the display tube of the lower visual areas. However, a mental image is a far less concrete state than such a cut and paste model would suggest.

 For a start, most people find it impossible to keep a particular picture fixed in their heads for more than an instant. Almost as soon as an image appears, it begins to slip out of sight or transmute into something different. We might get a glimpse of a close-up on a rhino's dust-caked face, even seeing its ear flicking away a fly. But for most people, the image will be bright for just a split second before it starts to fade and, quickly, some other rhino image swells to take its place. Our minds might flit to a long-shot scene of a pair of rhinos stamping around a mud-hole or a particular memory of a rhino seen at the zoo. Like a slide show, a succession of images will run through our heads, never giving us time to dwell on any one impression.

Anticipation explains this in-built restiveness. The brain was never really designed for contemplating images. Our ability to imagine and fantasise is something that has had to piggy-back on a processing hierarchy designed first and foremost for the business of perception. And to do perception well, the brain needs a machinery that comes up with a fresh wave of prediction at least a couple of times a second — or about as fast as we can make a substantial shift in our conscious point of view. So while we can drive the brain briefly into an artificial state of anticipation — a state of sensory expectancy that we know is not going to be answered—it would be unnatural for the brain to linger and not move on.

Anticipation also accounts for the often nebulous character of mental imagery. Some images are undoubtedly sharp and vivid. In these cases, we would expect to find the priming activity reaching all the way down the sensory hierarchy so as to pick up the maximum amount of detail.  And this is, of course, exactly what Kosslyn demonstrated with his PET experiments, where topographical patterns were found on V1 itself.

However Kosslyn's tests were designed to produce well-fleshed out states of imagery. His subjects were asked to visualise copies of letters they had only just viewed. In everyday life, the same kind of vivid state of priming would be stirred if we were searching the hallway for a missing set of car keys, or about to make thudding contact with a cricket ball — situations where the details are highly predictable. Yet an expectation can equally well be broad and shallow. A gentle and diffuse spread of activity might leave us with just the feeling of being generally orientated towards the idea of seeing rhinoceroses. We might not have an actual rhinoceros image in mind. But we would have a strong sense of potential for moving towards such an image as soon as the need arose.

So anticipation and imagination are fundamentally the same. The difference is that an anticipation is a prediction tied directly to what is happening around us at the moment. But with a mental image, we are putting ourselves in some other place and asking our brain what life might look like from there. From years of visiting zoos, watching TV wildlife documentaries and reading National Geographic magazine, we will have well stocked memory banks. All it takes is to activate the right spot and then let the spreading flow of activation do the rest. The brain would need no special circuitry or cortex areas. It has one hierarchy, but it is a hierarchy that can be exploited in many different ways.

This is not a new idea. In the 1970s, long before the Kosslyn-Pylyshyn debate ever took place, Ulric Neisser was able to write: "...images are indeed derivatives of perceptual activity. In particular, they are the anticipatory phases of that activity, schemata that the perceiver has detached from the perceptual cycle for other purposes." But it was only with the emergence of a more dynamic understanding of the brain in the 1990s — with a general shift in context — that mind scientists began to feel such explanations were rather obvious.

references

Keep your eye on the ball: Research shows that players have to predict where to focus their eyes, moving them into place well ahead of time. See "Why can't batters keep their eyes on the ball," AT Bahill and T LaRitz, American Scientist 72, p249-253 (May-June, 1984).

The gifted athlete seems able to conjure with time: See "Good Timing," J McCrone, in The Science of Sport, a supplement to the New Scientist, p10-12 (9 October 1993).

Top athletes score averagely in reaction tests: Acquiring Ball Skill: A Psychological Interpretation by Harold Whiting (London: Bell and Sons, 1969).

McLeod's cricket player study: "Visual reaction time and high-speed ball games," P McLeod, Perception 16, p49-59 (1987). For details of the time constraints in cricket, see "Mechanisms of skill in cricket batting," B Abernethy, Australian Journal of Sports Medicine 13, p3-10 (1981).

Cricket balls can develop a late swing: For the physics of spinning balls, see "The seamy side of swing bowling," W Brown and R Mehta, New Scientist, p21-24 (21 August 1993), and "Working knowledge: baseball pitches," AM Nathan, Scientific American, p83-84 (September, 1997).

Electrical recording of the muscles: For review, see Psychophysiology: Human Behavior and Physiological Response by John Andreassi (Hove, England: Lawrence Erlbaum Associates, 1989).

Film clips demonstrate anticipation: "Anticipation in sport: a review," B Abernethy, Physical Education Review 10, p5-16 (1987), and "Visual search strategies and decision-making in sport," B Abernethy, International Journal of Sport Psychology 22, p189-210 (1991).

Brain predicts when reaching: The Neural and Behavioural Organization of Goal-Directed Movements by Marc Jeannerod (Oxford: Oxford University Press, 1988). The psychophysics literature is generally full of evidence for anticipation. One broad area of research concerns what is known as reafference messages or corollary discharges—the idea that the motor areas of the brain need to tell the sensory areas about planned actions so that this self-generated movement can be subtracted from the conscious experience. See, for example, "An internal model for sensorimotor integration," DM Wolpert, Z Ghahramani and MI Jordan, Science 269, p1880-1882 (1995), and "Visual decomposition of colour through motion extrapolation," R Nijhawan, Nature 386, p66-69 (1997). Another telling line of research is work on priming—particularly the distinction made between conscious priming leading to a narrow state of expectation, while subconscious priming produces a more general, open-ended, associative state. See "Is human information processing conscious?" M Velmans, Behavioral and Brain Sciences 14, p651-725 (1991).

Exceptions who took anticipation seriously: See A Cognitive Theory of Consciousness by Bernard Baars (Cambridge: Cambridge University Press, 1988), and Cognition and Reality: Principles and Implications of Cognitive Psychology by Ulric Neisser (New York: WH Freeman, 1976).
        Of course, many other individuals have put anticipation centre-stage. Both Wundt and James considered the issue in depth—see James's discussion of the experiments of Wundt and others in The Principles of Psychology by William James (Cambridge, Massachusetts: Harvard University Press, 1981). See also "William James symposium: attention," DL LaBerge, Psychological Science 1, p156-162 (1990), for a review of how modern James's ideas still sound. Other more recent instances can be found in Attentional Processing: The Brain's Art of Mindfulness by David LaBerge (Cambridge, Massachusetts: Harvard University Press, 1995), "The attentive brain," S Grossberg, American Scientist, p438-449 (September-October, 1995), "Memory of the future: an essay on the temporal organization of conscious awareness," DH Ingvar, Human Neurobiology 4, p127-136 (1985), Attention and Effort by Daniel Kahneman (Englewood Cliffs, New Jersey: Prentice-Hall, 1973), and Preparatory States and Processes, edited by Sylvan Kornblum and Jean Requin (Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1984).

Desimone's search image experiment: "A neural basis for visual search in inferior temporal cortex," L Chelazzi, EK Miller, J Duncan and R Desimone, Nature 363, p345-347 (1993). For an early hint of the same finding, see "Activity of superior colliculus in behaving monkey, II: effect of attention on neuronal responses," ME Goldberg and RH Wurtz, Journal of Neurophysiology 35, p560-574 (1972). For review, see "Seeing the tree for the woods," A Cowey, Nature 363, p298 (1993), and "Neural mechanisms of selective visual attention," R Desimone and J Duncan, Annual Review of Neuroscience 18, p193-222 (1995).
      Changes in rates of firing are one way to prime an area of circuitry. A second way would be an anticipatory shift in the level of synchrony—and, indeed, recent research has suggested this happens. See "Spike synchronisation and rate modulation differentially involved in motor cortical function," A Riehle, S Grün, M Diesmann and A Aertsen, Science 278, p1950-1953 (1997).

Tanaka's study of coding in IT: See "Coding visual images of objects in the inferotemporal cortex of the macaque monkey," K Tanaka et al, Journal of Neurophysiology 66, p170-189 (1991), "Neuronal mechanisms of object recognition," K Tanaka, Science 262, p685-688 (1993), and "Optical imaging of functional organization in the monkey inferotemporal cortex," G Wang, K Tanaka and M Tanifuji, Science 272, p1665-1668 (1996).

Experiment reminiscent of Merzenich's finger studies: "Long-term learning changes the stimulus selectivity of cells in the inferotemporal cortex of adult monkeys," E Kobatake, K Tanaka and Y Tamori, Neuroscience Research 17, p237 (1992).

Neurons are always firing: It is rare to find any textbook that makes much of this fact, even though it is obvious in every recording of a neuron. The computational view is that firing is stimulus-driven and so a cell's "baseline" firing—its spontaneous or random activity—is essentially meaningless. One of the few to take a continual state of representation as a starting point for theorising is Walter Freeman—see Societies of Brains: A Study in the Neuroscience of Love and Hate by Walter Freeman (Hove, England: Lawrence Erlbaum Associates, 1995). Freeman, of course, hates the term "representation" because of its static overtones and prefers instead to speak of a state of neural intention.
         A further point not made often enough is that neurons really seem designed to communicate news about significant changes in their input rather than report raw values. A cell will quickly adapt to the constant sight of a red light or whatever else it is supposed to be tuned to detecting. So the idea of a fixed feature-coding device is even more mythical. For review, see "More than just frequency detectors?" AM Thomson, Science 275, p180 (1997), and "Computation and the single neuron," C Koch, Nature 385, p207-210 (1997).
    
We don't see black when eyes are closed: Some argue that the spontaneous rustle is merely the stray pop of retinal cells, others that it is the chatter of cortex pathways. Most likely it is both. Any pressure on the eyeballs certainly sparks a flood of lights, suggesting we are seeing "real" retinal input—at least while the cortex itself is still in a state of taut alertness. But as the cortex circuits themselves become relaxed and decoupled, as in the hypnagogic state on the edge of sleep, the spontaneous activity we see becomes more vivid. There are sudden floods of colour, and usually crawling or spiralling patterns—even fleeting, ghostly faces—will be seen. Then when we enter the sleep state proper, as the brain is shut off from external stimulation by a gating of the thalamus, our minds erupt into full-colour, dream-like imagery. The stray firing appears to self-organise to produce fully-fledged pseudo-experiences.
         See The Perception of Brightness and Darkness by Leo Hurvich and Dorothea Jameson (Boston: Allyn and Bacon, 1966). And for a discussion of hypnagogia and dream states, see The Myth of Irrationality: The Science of the Mind From Plato to Star Trek by John McCrone (London: Macmillan, 1993), Dying to Live: Science and the Near-Death Experience by Susan Blackmore (London: Grafton, 1993), and Hypnagogia: The Unique State of Consciousness Between Wakefulness and Sleep by Andreas Mavromatis (London: Routledge and Kegan Paul, 1987).

Anticipations flow from whatever has just been escalated: A point well made in A Cognitive Theory of Consciousness (Baars, Cambridge University Press).

Activity would stir other sensory modalities: There is no definite story of how modalities connect. There appears to be a convergence on the hippocampal formation but multimodal overlap occurs in the prefrontal cortex and even lower motor areas. Some have even singled out sub-cortical structures like the superior colliculus, claustrum and cerebellum. In truth, cross-modal convergence probably happens at many levels of the processing hierarchy. The phenomenon of synesthesia suggests that lines may even connect unimodal mapping areas like V4 and the auditory cortex. For one view, see The Merging of the Senses by Barry Stein and Alex Meredith (Cambridge, Massachusetts: MIT Press, 1993).

Grand stack of mapping can be turned on its head: Again, even at the turn of the century, such speculation was common. See The Principles of Psychology by William James (Cambridge, Massachusetts: Harvard University Press, 1981). For modern examples, see "Adaptive resonance theory: self-organizing networks for stable learning, recognition, and prediction," S Grossberg and GA Carpenter, in The Handbook of Neural Computation, edited by Emile Fiesler and Russell Beale (New York: Oxford University Press, 1997), "The role of attention in auditory information processing as revealed by event-related potentials and other brain measures of cognitive function," R Näätänen, Behavioral and Brain Sciences 13, p201-288 (1990), and "Visual search and stimulus similarity," J Duncan and GW Humphreys, Psychological Review 96, p433-458 (1989).

Motor areas transmit intention to move: The best proof comes from eye movements and the illusory stability of our visual experience. See "False perception of motion in a patient who cannot compensate for eye movements," T Haarmeier et al, Nature 389, p849-852 (1997), and "A theory of visual stability across saccadic eye movements," B Bridgeman, AHC van der Heijden and BM Velichkovsky, Behavioral and Brain Sciences 17, p247-292 (1994).

Neisser wrote imagery was first half of perceptual cycle: Cognition and Reality: Principles and Implications of Cognitive Psychology (Neisser, WH Freeman). See also "Theories relating mental imagery to perception," R Finke, Psychological Bulletin 98, p236-259 (1985), and "The nature of imagery," PV Horne, Consciousness and Cognition 2, p58-82 (1993).

home> back to readings