Understanding the brain: a work in progress


How billions of interconnected cells in the brain can interpret and regulate all our bodily functions as well as mediate our experiences of interactions with and responses to the world around us is a huge and fascinating question that many different disciplines have attempted to tackle.

This lecture will consider what we have learned so far about the principles of neural encoding and how they may begin to explain our memories, emotions and conscious awareness.

This is the first in a series of three lectures. The others take place on 26 January and 17 March 2011.


Gresham Lecture, 22 November 2010

 

Understanding the brain:

A work in progress

 

Professor Keith Kendrick

 

“Understanding the brain” is obviously a rather presumptuous title, and our understanding definitely is a work in progress.  I gave a lecture at Gresham about six or seven years ago with a similar kind of title, and things have already moved on quite a lot in that time. While I cannot possibly give a complete, detailed account of every aspect of brain function – I am going to be fairly selective – I would like at least to try to generate your interest in the ways we are beginning to think about what is important in how the brain actually encodes information.

 

Of course, we all know that brains come in a variety of sizes and shapes, although in fact their general structures are fairly similar, whatever species you look at. 

 

We also know, of course, that the brain fulfils an absolutely fantastic range of functions, one of which we are not even aware of, but without which we would be dead: it controls bodily functions and motivates us to obtain the appropriate resources to maintain life.  Movement is another: some who work on motor function feel the brain is only important for controlling it, and certainly, without movement, humans would not be able to do very much at all.

 

The area in which I am more involved is detecting and interpreting sensory information, and particularly, in my case, social cues, for social species of which we are one.

 

One of the marvellous things that the brain can do, which distinguishes it from most computers, is that it can attend highly selectively to specific things rather than others, almost filtering out what it is not interested in and focusing on whatever we are interested in at any particular point in time.

 

Again, another area that I am really very much involved in is the brain’s fantastic capacity for learning and remembering information, and particularly the way it integrates new information with past knowledge.

 

I am not going to talk actually very much about emotions today, but clearly, one of the important aspects of emotional responses is that they are also very potent guides for behaviour.

 

Finally, the area that I will cover right at the end of the lecture, which is perhaps the most contentious and the most difficult for any scientist to get their head round, is how a bunch of neurons can generate conscious awareness of the external environment, self and others. Yet we are making some advances in trying to understand the difference between a brain working in conscious mode and one working in unconscious mode.

 

It is traditional to emphasise how fantastic the brain is by immediately comparing it with computers.  The human brain is about 920 cm3 in volume and 1.5 kg in weight. It has an amazing 100 billion nerve cells, and given that there are about 5,000 synaptic connections on each of these nerve cells, that equates to about 0.5 quadrillion synaptic connections – 0.5 petabytes, if you were talking about computer memory. If you consider that these are usually firing away at around maybe 10 hertz, that gives 5 quadrillion synapse operations per second – 5 petaflops, in computer terms – and all of it consumes only about 10 watts.  That is quite an amazing amount of processing.
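To make that arithmetic concrete, the figures above can be multiplied out in a few lines of Python (using the lecture's round estimates; none of these numbers are precise measurements):

    # Multiply out the lecture's round estimates for the brain's scale.
    neurons = 100e9            # ~100 billion nerve cells
    synapses_per_neuron = 5e3  # ~5,000 synaptic connections each
    firing_rate_hz = 10        # assumed typical firing rate (Hz)

    synapses = neurons * synapses_per_neuron    # 5e14 = 0.5 quadrillion
    ops_per_sec = synapses * firing_rate_hz     # 5e15 = 5 quadrillion/s

    print(f"synaptic connections: {synapses:.1e}")     # 5.0e+14
    print(f"synapse operations/s: {ops_per_sec:.1e}")  # 5.0e+15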

 

If we compare that to computers over the last 10 years, processing power has developed quite amazingly.  Back in 2000, the fastest machine managed 7.2 teraflops, equivalent to 7 trillion synapse operations per second. By 2002, it had gone up to 35.8 teraflops; by 2004, 70; by 2005, quite a big leap, up to 280 teraflops; by 2007 and 2008, it gets pretty close to the broad equivalent of a brain; and now it has gone beyond that. The big guy is the IBM Sequoia supercomputer, currently in the process of being commissioned.  This is reportedly going to have a speed of 20 petaflops, which is substantially more than the brain, and 1.6 petabytes of memory, but notice that it occupies 318 m2 across 96 racks, and it consumes 7 megawatts of power.  So computers are not doing things the same way the brain is.

 

Of course, rather than transistors, we have neurons, which are simple but also amazingly complicated.  Information, in the form of electrical impulses, is transmitted from other neurons to the dendritic receiving field of a neuron – this is where the 5,000 input synapses are.  These inputs alter the electrical properties of the cell membrane, and when it reaches a critical threshold, the cell fires an action potential, which is transmitted down the axon to communicate with the dendritic field of another neuron.  It is that simple, but it is phenomenally integrated and complicated if you think about the number of connections involved and how much even one neuron can affect the activity of many others.
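The threshold behaviour described here can be caricatured with a textbook leaky integrate-and-fire model – a standard simplification used throughout computational neuroscience, not a model from this lecture, and the parameter values below are purely illustrative:

    # Textbook leaky integrate-and-fire neuron: synaptic input charges the
    # membrane; crossing the threshold fires a spike and resets the cell.
    # (A standard simplification for illustration, not the lecture's model.)
    import numpy as np

    dt, tau = 0.001, 0.02            # time step (s), membrane time constant (s)
    v_rest, v_thresh = -70.0, -54.0  # resting and threshold potentials (mV)
    v = v_rest
    spikes = []

    rng = np.random.default_rng(0)
    for step in range(1000):                                # simulate 1 s
        drive = 20.0 + 5.0 * rng.standard_normal()          # noisy input drive
        v += dt / tau * (v_rest - v + drive)                # leaky integration
        if v >= v_thresh:            # critical threshold reached:
            spikes.append(step * dt) # record an action potential
            v = v_rest               # and reset the membrane
    print(f"{len(spikes)} spikes in 1 s")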

 

We all tend to focus on neurons, because they are the primary cell type transmitting electrical impulses in the brain.  What is often forgotten is that there is a very important supporting cast, without which the brain simply would not function: the neuroglial cells.

 

The primary ones, astrocytes, anchor neurons to blood vessels and transport nutrients to them and waste away from them, so they are highly important to the function of neurons.  They have receptors, just like neurons; they produce growth factors, just like neurons; and they can even modulate the activity of neurons through synaptic transmission.  Finally, they can also signal to one another, but they do not use chemical synapses: they use direct electrical connections called gap junctions, and calcium is important in this type of communication.

 

The brain also has its own immune system.  Microglia are a defence against pathogens and they monitor the condition of neurons.

 

Ependymal cells line the brain ventricles, which are full of cerebrospinal fluid – itself very important for carrying information around the brain – and these cells produce and transport the cerebrospinal fluid.

 

Finally, there are the oligodendrocytes, which are important for producing the myelin sheath around axons, which helps them transmit information.

 

So this supporting cast is often forgotten, and I am not going to spend any more time talking to you about it, but these cells are very important for regulating brain function, and particularly neurons.

 

I am not going to spend much time discussing the molecular brain, as I want to work at a higher level and show how neurons communicate and represent information. We now know a huge amount about the transfer of chemical signals from the synapse to the receiving membrane of the neuron, and about the nature of the receptors in that membrane which allow chemicals to set off a cascade of changes within the intracellular domain of a neuron and alter its function, sometimes permanently.  There is increasing knowledge of these fantastically complex intracellular signalling pathways, which is very useful for us to understand, particularly when it comes to drugs that target the general function of neurons.

 

Let us step back from the molecular and cellular levels and just consider the major subdivisions of the brain, because they are pretty similar regardless of the species.

 

We have the brain stem, the pons and the medulla, which you will find in all brains; this is a highly important region for automatically controlling pretty much all of the peripheral organs in the body, through the parasympathetic and sympathetic nervous systems.  When someone is declared “brain-dead,” it is this area that has ceased to function, because once this area has gone, you are dead. The cortex may still be capable of things, but it is not possible for you to survive once activity in this region has gone.

 

On top of that, we have the so-called reticular activating system in the midbrain, which activates the cortex, keeping it aroused and allowing it to function, and which is also very important for controlling levels of arousal, sleep and consciousness. We focus a lot on the functions of the neocortex, perhaps the most interesting part of the brain for most people, but it is useless without these basic brain stem and midbrain activating systems.

 

Over the years, we have begun to learn certain principles regarding how brain systems work. One of these is the concept of neural plasticity. I will not spend an awful lot of time on it, but the really interesting thing about neurons is that they are in a constant state of flux.  They change as a function of activity, and this is generally termed neural plasticity.  The Canadian scientist Donald Hebb came up with a very simple rule in 1949: when an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change occurs in one or both of the cells, such that A’s efficiency, as one of the cells firing B, is increased.  I know that sounds mind-bogglingly simple, but that is what happens, on a large scale, within the brain.  There are changes going on all the time which alter the efficacy of one neuron driving another.
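Hebb's rule translates almost directly into code. The sketch below is the simplest rate-based reading of it, with an illustrative learning rate and firing threshold of my own choosing: the weight from A to B grows whenever A takes part in firing B.

    # The simplest rate-based reading of Hebb's rule: the synaptic weight
    # from cell A to cell B is strengthened whenever A's activity takes
    # part in firing B. Learning rate and thresholds are illustrative.
    import numpy as np

    eta = 0.1    # learning rate (illustrative)
    w = 0.2      # initial A -> B synaptic weight

    rng = np.random.default_rng(0)
    for trial in range(20):
        a = float(rng.random() > 0.5)  # pre-synaptic activity of cell A
        # B fires if A's weighted input plus background drive crosses 0.25:
        b = 1.0 if a * w + 0.3 * rng.random() > 0.25 else 0.0
        w += eta * a * b               # Hebb: co-activity strengthens the synapse

    print(f"final weight: {w:.2f}")    # grows whenever A helped fire B

Left unchecked, a weight like this grows without bound, which is why realistic models add normalisation; but the core principle is just this co-activity term.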

 

We learn things just by watching other people all the time; the brain has a phenomenal ability to take shortcuts and learn things much more quickly than any artificial system. One of the most exciting discoveries in neuroscience in the last 10 years has been that of what are loosely termed mirror neurons. These are found particularly in areas that border the motor cortex. Neurons in these regions fire strongly when, for example, a monkey is reaching for a piece of food; but when the monkey is doing nothing except observing a human performing exactly the same action, exactly the same pattern of cell firing occurs. This has now been shown in humans as well as monkeys, and it seems there is this fantastic system which allows us, as it were, to practise what somebody else is doing without actually having to do it ourselves, and that gives us an amazing capacity to learn by watching others.

 

Unsurprisingly, one of the extrapolations from this discovery is that such a system may be at the heart of very complex human capacities, like empathy – the ability to put oneself in somebody else’s place.  That has not been shown, but it at least gives us some principle by which the brain can mimic the actions of others, something it is particularly well adapted to do.

 

This is a functional magnetic resonance imaging (fMRI) study comparing control human subjects with autistic individuals, and the take-home message is that the mirror neuron complex is not as strongly activated in the brains of autistic individuals as it is in the brains of non-autistic individuals, perhaps suggesting that this is one of the problems that autistic people have.

 

So, how is information represented in the brain?  This is really what I want to discuss with you.  This is obviously a spoof picture of the male brain.  The main thing it is supposed to show is the idea that everything is compartmentalised. For example, the ironing representation is almost minuscule, and the “listening to children cry in the middle of the night” gland is so small that you need a microscope to see it.

 

I would like to disabuse you of this idea. In fMRI studies, the images are averaged: you subtract a sort of control period from an experimental period, and you end up with a hotspot.  That is what differs between the brain doing one thing and doing something else.  From that, and from this kind of representation of the brain, it is very easy to take home the message that specific regions of the brain each perform one particular function.  Yes, as we will see in a minute, there are areas of the brain that are particularly specialised for important functions, but it is much more important to understand that the brain is a highly integrated organ, with interactions going on across all the different regions all the time, and that is probably, as we will see later, what generates consciousness.

 

So, we have this concept of spatial encoding, which was humorously depicted in that image of the hypothetical male brain. This is a very simple schematic diagram which helps me to explain the advantages and disadvantages of spatial encoding – information spatially separated in terms of its representation within the brain.

 

Imagine that each triangle represents a single piece of information. The nearer two triangles are to one another, the more likely it is that the information in one is relevant to the other, and vice versa. If all the triangles were put end to end, it would take a hell of a lot of space to represent information this way. The other thing that should come across immediately is that, if you want to integrate information from one end of the triangles to the other, you have to have very long connections, and a lot of them. This is not, in many cases anyway, the most efficient way of integrating information.

 

So, it is far better to end up perhaps with overlapping triangles – overlapping populations – and this helps a lot with integrating information; however, there is still a degree of separation, and clearly you can get more interference. Or you could go, literally, all the way down to everything being represented by the same population of neurons, with very subtle differences in the pattern of activation allowing you to distinguish one piece of information from another.

 

You can see examples of all of this in different systems in the brain.  This was just to remind me that, if we had brains that were organised this way, it might seem like a great idea, but we would have to have extremely large brains and heads to cope with it.

 

We are still in the early stages of trying to unlock the temporal code of the brain, but it is clearly an essential aspect of understanding how brains encode information.  We have been very much fixated on the gain of neurons either going up or down – the actual firing rate of neurons increasing or decreasing – which is mostly what we measure with imaging studies. However, that largely ignores the fact that lots of different neurons are firing, and that they do so in time, and we want to know whether there is anything in the way they are organised temporally which shows that there is information content.

 

There are two things that come out very quickly.  There is a huge focus on the extent to which information across neurons in a network is temporally correlated, and this is an area in which I have been particularly interested, as that is clearly something that does happen. These are real data from the smell system in a rat, before and during an odour stimulus, and although it is difficult to see, the four neurons – the vertical deflections are the outputs of individual neurons – in fact become less correlated during the odour stimulus than they were before. What I especially want to point out from this image, which is really quite exciting, is that there are patterns.  We use software which recognises non-random patterns.  It comes from different forms of analysis into which I shall not go, but it is a very robust way of detecting patterns in time series of information, where you can record from perhaps, in our case, often several hundred neurons at the same time.  This picks out the incidence of a pattern occurring across the four neurons, which is DABC.  You can probably see the colour scheme, because it is pulling them out down here – orange, red, green, blue.  This sequence is occurring across the four neurons again and again, and you will notice that, during the stimulus, it occurs far more often than before.
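The published analyses are far more rigorous, with statistical controls for patterns that would occur by chance, but the core counting idea can be sketched simply: given spike times from four neurons, count how often they fire in a fixed order, such as D, A, B, C, within a short window. All the names, times and window sizes below are illustrative.

    # Caricature of cross-neuron pattern detection (real analyses add
    # careful statistics against chance): count how often four neurons
    # fire in a fixed order, e.g. D then A then B then C, within a window.
    def count_sequence(spike_times: dict, order: list, window: float) -> int:
        """Count occurrences of `order` (e.g. ['D','A','B','C']) where each
        successive spike follows the previous one within `window` seconds."""
        count = 0
        for t0 in spike_times[order[0]]:       # each candidate start spike
            t, ok = t0, True
            for name in order[1:]:
                later = [s for s in spike_times[name] if t < s <= t + window]
                if not later:
                    ok = False
                    break
                t = later[0]                   # chain onto the next spike
            count += ok
        return count

    # Toy spike trains (seconds); the DABC motif is embedded twice.
    spikes = {"A": [0.011, 0.205, 0.42], "B": [0.021, 0.215],
              "C": [0.032, 0.226], "D": [0.0, 0.194, 0.61]}
    print(count_sequence(spikes, ["D", "A", "B", "C"], window=0.02))  # -> 2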

 

Now, the presence of these kinds of patterns within large-scale neural networks intimates that there is a huge amount of potential additional information processing going on in these circuits. Quite apart from whether the neurons are increasing or decreasing their firing rate, there are also patterns that they are generating.  Clearly, these patterns have to be decoded downstream, and that is difficult, but it does seem that neurons are capable of responding differently to different temporal patterns of inputs, so they are capable of decoding temporal sequences like this.

 

This just shows you an example, from the olfactory bulb, shown schematically.  This is one pattern of about seven elements that was generated – patterns can go up to about 20 elements.  It only occurs, as you would expect, during the period when the animal is actually inhaling.  This is an anaesthetised animal, in fact.  So these patterns occur each time the animal inhales the odour.

 

This just shows that the incidence of the patterns increases during odour stimuli.  It also increases with the intensity of the odour, so it is going to be meaningful.  Not only does the incidence of patterns increase, but the complexity – the number of elements in the pattern – also increases during the stimulus, and again with odour concentration.  So this is an exciting way of unravelling a whole new form of encoding information in the brain, quite distinct from firing rate changes, and it is showing us that the brain is actually doing far more information processing than we have yet really understood.

 

So, the conclusion is that you need a combined spatial and temporal encoding system in the brain.  We still know more about the spatial than the temporal. Nevertheless, this is clearly the most robust solution, and, most importantly, it allows brains to be of a reasonable size for our bodies.  It also makes it easier to separate, integrate and decode information.

 

A lot is known about the sensory brain, and this shows you, to start with at least, that the senses tend to be spatially represented.  All the different senses – the auditory sense, taste, smell, vision, somatic sensation, touch – and the motor systems all seem to have, at least at the level of the cortex, rather separate representations. They also seem to map quite faithfully onto the external world.

 

So the pattern of information impinging on the retina here is very simple, and there is a similar pattern of activation in the visual cortex; the same goes for the somewhat more complex radial stimulus, which shows at least a relatively loose representation, again at the level of the visual cortex.

 

A similar thing goes for the tonotopic maps in the auditory system: the auditory cortex separates out different frequencies.

 

The strongest spatial maps are in the somatosensory and motor systems. There are these two bands of cortex in the middle, representing different parts of the body, in terms of both the motor system and the somatosensory cortex. You can see that the body parts are very distinctly represented, and this seems to be very important for both systems.  Indeed, if you practise any particular motor task enough, you will increase the representation of that part of the body in the cortex.  Of course, especially with juxtaposed parts like the fingers, the representations can end up overlapping and interfering with one another. That does happen in some conditions, and it is a very good example of why this particular type of motor control, and somatosensory processing too, needs a very strong spatial distinction between one part of the body and another.

 

This is the way the brain sees a male and perhaps the way many women see males as well. It is obvious which parts of the body are most represented.

 

However, although the visual system, the auditory system, the taste system and so on appear to be separately represented, they do actually integrate, and even interfere, with one another a great deal.  There are key multisensory areas of the neocortex, for example, where multiple senses converge.

 

You can also see examples of a number of illusions – I will mention a couple – where sight and sound, for example, interfere with one another.  Normally, we expect them to complement one another, but if you set the situation up so that the information mismatches, you can see how important the integration between the two senses is.

 

So, for example, this is the so-called McGurk illusion, which I think was shown on Horizon the other day.  You see a mouth shape the word “gate”, but you hear the word “bait”, and what you actually think you hear is “date” – the two confuse one another.

 

Another example is facial expressions: even if they are not consciously perceived, they can modify the perception of emotion in the voice of a speaker.  So if someone is speaking in an angry voice while smiling, it makes you very confused. Of course, there are many other examples of how the brain expects to see things in a particular way, and gets confused when it does not.

 

One of the things the brain is very good at is interpreting things as it expects them to be, in the light of previous experience. Of course, in a lot of cases, they do not actually appear that way, but the brain thinks they do.

 

The importance of keeping things separate, especially in sensory systems, is particularly shown in a condition known as synaesthesia, on which Richard Cytowic, among many others, has done quite a lot of work.  These are individuals who actually see specific words, letters or numbers as colours, for example, or who taste shapes. There was a lady who felt musical instruments as if they were touching her on different parts of her body.

 

This particular condition has been shown to be quite prevalent in fine arts students, for example – about 23% – and it is now recognised as almost certainly being caused by cross-wiring between these spatially separated sensory maps.

 

Perhaps more interestingly, it is now generally thought that this kind of synaesthetic experience of the world – where the senses are not so much interfering with one another as more integrated – is the way that young children experience the world early in life.  It may explain why everything goes into the mouth in babies.  It is now thought that the developing brain is very much a synaesthetic brain, where all the different senses overlap to some extent, and that as you develop they become separated, giving the experience that most of us have as adults.

 

Other things in the brain are also very important to keep separated, and these are what are often loosely called the ‘what’ and ‘where’ pathways.  What something is is primarily processed, at least in the visual system, by a ventral stream running from the visual cortex down into the temporal lobe, where there are specialised circuits for responding to and identifying faces and face emotions. There is a separate dorsal stream up into the parietal cortex, which deals with where things are in space.  There are interactions between the two, obviously – you need to know both what and where – but it seems to be important, as far as the brain is concerned, to try to keep these two things separate.

 

I am now going to focus on faces, mainly because the work done in this area particularly illustrates what we are grappling with when we try to understand how brains represent highly complex information.  I will not give you all the answers, but I think I will at least be able to show you some interesting ways in which the brain – particularly this part of the brain, which processes faces in a rather specialised way – seems to be using different ways of encoding information.  It does amazing things.  It is able to sort out the questions ‘who are you?’, ‘how do you feel?’ and ‘do I like you?’ all in less than 300 milliseconds.

 

Three species have so far really contributed to our understanding of how the brain fulfils this all-important social recognition function. Initially, work was done on non-human primates, monkeys, and on humans; but over the last 25 years or so I have shown that sheep, and probably many other ungulate species, have similar specialised face recognition systems that allow them to recognise each other, and indeed other species such as humans, including being able to recognise face emotions.

 

The same system exists in all of these species for recognising faces. The fusiform face area is the specialist area in the human brain; the equivalent in other species tends to be called the inferotemporal cortex. This interacts particularly with the amygdala, a very important structure for the control and recognition of emotions, and the interactions between these two areas are very important for understanding face emotion, for example.  These areas also interact with another very important area of the brain, the medial prefrontal cortex, which deals with higher cognitive and executive functions.

 

Now, this particular area of research posed problems when it first emerged because, up until the point where people started looking at the face recognition system, all of the work on the complexity of encoding by single neurons in the brain had perhaps disappointed many people – for example, Hubel and Wiesel, who were working on the visual cortex.  They were expecting to find neurons that responded to specific kinds of objects, but they did not.  They found that the majority of neurons responded only to very simple visual features – lines moving in a particular direction, colours and so forth.  It seemed that the brain was primarily breaking up the visual world into a myriad of component parts, and there did not seem to be anywhere that brought it back together into a single percept, an object.

 

However, with the work of Charlie Gross at Princeton in the US, back in the 1970s, cells suddenly started to be found in the temporal lobe which responded to specific faces or body parts, and that led to quite a debate about how to conceptualise the way the brain was actually processing information.

 

A kind of reductio ad absurdum occurred. People suddenly started saying: if you have specific cells that respond to your grandmother and that need to be activated for you to recognise her, then, since by definition these high-order cells are very few in number, you could go out to the pub one night, have rather too many glasses of your favourite tipple, and granny would disappear, because you would have killed off this handful of cells.  Of course, that is not the way it works at all. What we think is that these very high-order cells respond to things like specific individuals’ faces, and there are not very many of them. They do help with the speed and accuracy of recognising grandmother or anyone else, but in fact the recognition process involves all of the different levels of analysis, right the way from simple aspects of granny’s face through all of the different components – you can put in as many levels as you like.  So, in the end, granny’s representation is quite a distributed network, which happens to have these high-order cells at the top of it.

 

Indeed, in the work I have done on sheep, where we look at the process of forgetting faces over time, what happens is that the specificity of these high-order cells gradually disappears. If the animal is not seeing the individual every day, it does not seem to need them anymore, but it can still recognise them, just not quite as well. That rather emphasises that recognition is not about these high-order cells alone being activated; it is about the whole network being activated.

 

Here is an example of a high-order cell, a well-known Pamela Anderson cell, which was reported in Nature in 2005.  It was not just her – there were cells for many other actresses, and these were all male subjects, of course. They were all patients who were about to undergo operations for epilepsy, which has been an extremely important source of brain recordings in humans: many of these individuals, in order to identify exactly what part of the brain to operate on, have electrodes implanted for some weeks before they are operated on, and that allows us to gain, through them, insights into the way the human brain operates at a level of detail that previously could only be achieved using techniques in other species.

 

This is a conceptual cell.  The subjects were shown Pamela Anderson in various different views, but the cell – its responses shown by the histograms – also fires very strongly just to her name, so it does not just respond to a face: it responds to anything to do with the concept of Pamela Anderson. It is a very high-order cell.

 

This area of the human brain, the fusiform face area, shows greater activation to faces and also, very importantly, to other aspects of our bodies which communicate information; faces and bodies seem to be particularly represented in this area, as opposed to other kinds of objects.

 

For a while, it was thought that the face-emotion and face-identity systems were very separate, and that the fusiform face area was all about encoding identity. More recently, it has been shown that the emotional content of a face can enhance the response in the fusiform face area.  This is another fMRI study, where the red line shows the level of activity for a fearful face compared with the same face showing a neutral expression. This tells us that, particularly within the interactions between the amygdala and the fusiform face area, we can get integration of both face-emotion and face-identity cues.  Work, particularly in monkeys but increasingly in humans too, has shown that there is a sequence to the processing: initially the emotion is analysed, then the identity, and then a bit later there is some cross-checking of the emotion again.  In work we have done with sheep, we have likewise found that the emotion an individual face is showing is of paramount importance – it is the first thing that the animal, and we assume also humans, focuses on, before identity.  If you think about it, it is actually more important to know whether whoever you are meeting is about to kill you than who they are, so it makes sense that the first thing you should do is identify what emotion the face is expressing.

 

The face system also illustrates the brain as an interpreter.  Ignore this one for the minute.  This is the so-called Thatcher illusion.  If you just focus on the top two panels, you can see that there is something wrong with Margaret’s face, but it is only when you turn it the right way up that you can see the eyes and the mouth are totally inverted. We do not expect to see a face like that, and this system, like many other aspects of the visual system, is interpreting the complex image the way the brain expects to see it, as opposed to the way it actually is.

 

I promised you some sheep. The videos show sheep demonstrating that they can discriminate between faces by pressing panels with their noses, and, just in case you did not believe it, they get food every time they get it right, and the positioning of the faces is randomised so the correct one is not always left or always right. They are extremely good at doing this. They can remember, or discriminate, up to about 50 different sheep faces and at least 10 human faces – they can probably do more than that, but that is as far as we went – and they can remember them for several years.

 

With these kinds of animals, we are able to implant large numbers of electrodes into this face area, and this shows you a pseudo-colour representation of the firing of 240-odd neurons in both the left and the right hemisphere. You can see that there are nice spatial clumps, if you like, of responses to the faces, indicating that a degree of spatial encoding is going on across this large-scale network for specific faces.

 

If you then plot what is going on across the 240 neurons in a sort of 3D plot, we find that there is a difference between the representation of one face and another, even though it is quite small – only about 10%.  This seems to be telling us – and there is also evidence in monkeys – that the way faces are encoded is not a pure spatial code.  It is a population code, where all of the neurons in the system encode each face, but in a slightly different way, and you can distinguish one face from another by even a very small change in the pattern of the population response.  This is a very powerful way of encoding information.  It gets back to that spatial representation with the triangles: within a very small space, because you can manage to discriminate between one face and another with only a very small shift in the representation at the population level, you can encode thousands of different faces that way, with a fairly limited population.  So it is a very powerful way of encoding information at a population level rather than in individual cells.
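In code, this population-coding idea is just vector geometry. The sketch below is schematic – synthetic firing rates, not the actual sheep data: each face is a point in a 240-dimensional space of firing rates, and a roughly 10% shift of the whole pattern is enough to decode reliably.

    # Schematic population code (synthetic rates, not the sheep data):
    # each face is a vector of firing rates across 240 neurons; a ~10%
    # shift in the overall pattern is enough to tell two faces apart.
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons = 240

    face_a = rng.uniform(5, 15, n_neurons)  # baseline firing rates (Hz)
    face_b = face_a * (1 + 0.1 * rng.standard_normal(n_neurons))  # ~10% shift

    def closest(face, known):
        """Decode by nearest population vector (Euclidean distance)."""
        return min(known, key=lambda k: np.linalg.norm(face - known[k]))

    known = {"A": face_a, "B": face_b}
    noisy_a = face_a + 0.5 * rng.standard_normal(n_neurons)  # a noisy re-view
    print(closest(noisy_a, known))  # -> "A": the small shift still decodes

Because every added face only needs its own slightly shifted pattern, the number of distinguishable faces grows very quickly with population size, which is the economy the triangles diagram was pointing at.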

 

We shall now go to a perhaps more complicated concept, but one that is becoming increasingly important for us, which is this temporal dimension again.  I have shown you that there are large-scale changes going on across regions of the brain, and that they are integrated. However, we need to know what is coordinating them across large areas of the brain.  One of the key things that does this is the set of very important brain rhythms that many of you will perhaps have heard of in the context of sleep – the cortex undergoing different patterns of rhythmic activity as you go down into deep sleep and then come back into REM sleep. These rhythms range from very slow electrical rhythms, such as the delta rhythm at about 2 Hz, through to the high-frequency gamma rhythm at about 30-120 Hz. Gamma is generated by local neuronal circuits firing, whereas the slow rhythms are generated by long-distance electrical activity going on throughout the brain.  For a while, these were thought to be perhaps some kind of side effect of the electrical activity of the brain, but it is now increasingly being shown that they perform very important modulatory functions, which allow us to understand how information can be coordinated across wide areas of the brain.

 

There is now increasing evidence that slow and fast oscillations are actually coupled and perform different functions.  Theta, the 4-8 Hz slow rhythm, occurs in a highly synchronised way across large areas of the brain and is thought to be one of the important mechanisms for integrating information in time across wide areas of the brain. It is coupled with the high-frequency gamma rhythm, which comes from the local neuronal circuits firing away, so that you end up with the gamma waves, at a much higher frequency, locked to a particular phase of theta.

 

Interestingly enough, you can only fit about seven cycles of gamma on top of each individual theta wave, and the idea is that each of these individual gamma waves is encoding a specific piece of information. The fact that you can only get seven on top of a theta wave has led people to speculate that perhaps this explains one of the well-known psychological aspects of memory – the magic seven, plus or minus two – which is that you can only keep about seven items of information in mind at a time. This would provide – it is speculation, I hasten to add – a way of understanding why there is this limitation on how much information we can hold in memory at any one time.
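The arithmetic behind this speculation is simple enough to spell out, with illustrative frequencies drawn from the ranges above:

    # Rough arithmetic behind the "magic seven" speculation: how many
    # gamma cycles fit inside one theta cycle? (Illustrative values:
    # 6 Hz theta and 40 Hz low gamma, taken from the quoted ranges.)
    theta_hz = 6.0   # theta: 4-8 Hz
    gamma_hz = 40.0  # gamma: ~30-120 Hz; 40 Hz is a typical low-gamma value

    theta_period = 1.0 / theta_hz           # ~0.167 s per theta cycle
    gamma_period = 1.0 / gamma_hz           # 0.025 s per gamma cycle
    cycles = theta_period / gamma_period    # equals gamma_hz / theta_hz

    print(f"{cycles:.1f} gamma cycles per theta cycle")  # ~6.7, i.e. 7 +/- 2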

 

The interesting thing about these rhythms, though, is that they do couple with neuronal activity. This is theta activity – real data, in fact, from a sheep – and each neuron fires at a consistent phase of theta, though the preferred phase differs from neuron to neuron. There is thus a link between theta activity and the firing of neurons.

 

This shows that, during learning in the sheep, nearly three-quarters of the temporal cortex electrodes in this area show a link, or coupling, between theta phase and gamma amplitude, and that this increases during or after learning.

 

This is just shown here, with the animal learning to get it right.  It starts off at around chance, and it is not until performance gets to over 70% that we believe the animal actually knows what it is doing. You get an increase in the amplitude of the theta/gamma ratio as well, and an increase in the coherence between theta and gamma. The extent to which these changes occur is significantly and positively correlated with the animal’s behavioural discrimination performance.
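The lecture does not say how the theta-gamma coupling was quantified, but a standard measure in the field is the phase-amplitude modulation index of Canolty and colleagues; the sketch below computes it on synthetic data using scipy, as one plausible illustration:

    # One standard way to quantify theta-gamma coupling: the
    # phase-amplitude modulation index (after Canolty et al.), computed
    # here on synthetic data standing in for a local field potential.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    fs = 1000.0                       # sampling rate (Hz), assumed
    t = np.arange(0, 10, 1 / fs)      # 10 s of "recording"
    rng = np.random.default_rng(0)

    # Synthetic LFP: 6 Hz theta plus 40 Hz gamma whose amplitude grows
    # near the theta peak (i.e. genuinely coupled), plus noise.
    theta = np.sin(2 * np.pi * 6 * t)
    gamma = (1 + theta) * 0.3 * np.sin(2 * np.pi * 40 * t)
    lfp = theta + gamma + 0.2 * rng.standard_normal(t.size)

    def bandpass(x, lo, hi, fs, order=3):
        sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, x)

    phase = np.angle(hilbert(bandpass(lfp, 4, 8, fs)))  # theta phase
    amp = np.abs(hilbert(bandpass(lfp, 30, 50, fs)))    # gamma amplitude

    # Modulation index: length of the mean amplitude-weighted phase vector;
    # near zero if gamma amplitude is unrelated to theta phase.
    mi = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
    print(f"modulation index = {mi:.3f}")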

 

We can generate network models which effectively reproduce the same patterns of theta and gamma that we get from recordings in the animals, and use these models to help us understand how the rhythms modulate neuronal activity. One thing we have found is that, as you increase the strength of the coupling between theta and gamma, this leads to a progressive desynchronisation of neural firing in the network – that is, the neurons are not firing quite as much in time together as they used to.

 

This, even without a change in the firing rate of the neurons, will lead to an amplification of the response of a downstream neuron, which is an amazing way of getting potentiation simply by changing the temporal parameters of neuronal firing.  So you do not actually have to increase the gain; you just have to change the temporal pattern.

 

This happens both in the model and, in fact, in the temporal cortex.  We went from the model back to the brain, and found that the brain corresponded to the model – it shows this desynchronisation.

 

So why can desynchronisation alone produce some form of potentiation?  This is a very simple model.  All you need to know about it is that it takes two action potentials from the excitatory input neurons, arriving together at the downstream neuron, to provoke an output.

 

When you have synchronised patterns, the same number of input action potentials produces only three output action potentials in this model, whereas, when you desynchronise them, you get five. The reason, hopefully you can see, is that there is cancelling out: it only requires two action potentials to drive this neuron, and in some cases three occur at the same time, so one is wasted.  So you do not really want to synchronise neuronal firing too much, because if the spikes all arrive at the downstream neuron at the same time, some of them will cancel out and have no effect.
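This coincidence-detector argument is easy to reproduce in a toy simulation. The spike patterns and counts below are my own illustrative choices, not the lecture's actual model; with these numbers, desynchronising nine input spikes yields four outputs rather than three.

    # Toy coincidence detector: the downstream neuron emits one spike per
    # time bin in which at least two input spikes arrive; extra coincident
    # spikes in the same bin are wasted. (Illustrative numbers only.)
    import numpy as np

    THRESHOLD = 2  # input spikes needed in one time bin to drive an output

    def output_spikes(input_spikes: np.ndarray) -> int:
        """input_spikes: (n_neurons, n_bins) binary spike matrix."""
        coincident = input_spikes.sum(axis=0)        # spikes per time bin
        return int((coincident >= THRESHOLD).sum())  # one output per bin

    bins = 10
    # Synchronised: three input neurons all fire in the same three bins
    # (9 input spikes -> 3 outputs; one spike per bin is wasted).
    sync = np.zeros((3, bins), dtype=int)
    sync[:, [0, 4, 8]] = 1

    # Desynchronised: the same 9 input spikes staggered so that pairs of
    # spikes fall into more distinct bins (here 4 bins reach threshold).
    desync = np.zeros((3, bins), dtype=int)
    desync[0, [0, 2, 5]] = 1
    desync[1, [0, 2, 7]] = 1
    desync[2, [5, 7, 9]] = 1

    print("synchronised  ->", output_spikes(sync), "output spikes")    # 3
    print("desynchronised->", output_spikes(desync), "output spikes")  # 4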

 

The other thing you get when you desynchronise patterns like this across a network is that, in time at least, the result is a much more complex pattern than the synchronised version, so you are also generating more discriminable patterns by desynchronising.

 

This shows calcium imaging of the activity of neurons in the hippocampus, and the only reason I am showing it is that, if you look at it carefully – either the top or the bottom; they are just different representations of the same thing – the neurons are not synchronised.

 

Decorrelation, or desynchronisation, whichever way you want to put it, is also a very powerful way of reducing noise.  These are theoretical experiments: where you have a sine wave with a certain amount of added noise, if you negatively correlate the noise, even only to a degree of about 0.01, you can cancel out the noise and leave the signal, whereas if you positively correlate it, you amplify the noise more than the signal.  This has actually been well known for a long time – it is closely related to the Central Limit Theorem in statistics – but it is an important principle that it is often better to negatively correlate, or desynchronise, than to correlate.  Doing things together might seem like a great idea, but in many cases, doing things just slightly differently is better.
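A small numerical experiment shows the principle. This is my own construction, not the lecture's actual analysis: average many noisy copies of a sine wave whose noise terms are slightly negatively correlated, versus independent, and compare the residual noise.

    # Averaging noisy copies of a signal: slightly negatively correlated
    # noise cancels far better than independent noise.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1000)
    signal = np.sin(2 * np.pi * 5 * t)
    n_copies = 50

    # Uncorrelated: independent noise on each copy.
    uncorr = rng.standard_normal((n_copies, t.size))

    # Negatively correlated: subtracting the across-copy mean makes every
    # pairwise correlation slightly negative (about -1/(n-1), here ~ -0.02).
    neg = uncorr - uncorr.mean(axis=0, keepdims=True)

    for name, noise in [("uncorrelated", uncorr), ("neg. correlated", neg)]:
        avg = (signal + noise).mean(axis=0)      # average the noisy copies
        residual = np.mean((avg - signal) ** 2)  # leftover noise power
        print(f"{name:16s} residual noise power: {residual:.2e}")

With independent noise, the residual falls only as 1/n (the Central Limit Theorem regime); with the negatively correlated noise it cancels almost completely.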

 

This just shows that, in turn, when you decorrelate, you also expand the representation of information in physical space, so you end up with a greater theoretical distance between elements of patterns, which makes them more discriminable.

 

We thought we could patent this, and that it would be a great way of cleaning up or representing images using negative correlation.  We did patent it, in fact, only to find out that, at almost the same time as us, engineers had already patented it, and it is the most sophisticated way of reducing noise in systems that currently exists.  So it is nice to know that the brain does things exactly the way our engineers have worked out is the best way of reducing noise in artificial systems; the problem was that they did it before we did.

 

I am going to finish on the most difficult aspect for all of us. There are certain things we absolutely know. There is no single seat of consciousness in the brain.  It is not the pineal gland, as Descartes once thought – on the really quite reasonable assumption that the seat of consciousness could not be in a structure of which there is more than one, and the pineal gland happens to be one of the few unpaired structures – but he was wrong.

 

Many things, we already know, are processed without conscious awareness, and often, similar patterns of activation are seen when information is processed, with or without conscious awareness.  That has been quite confusing to us.

 

We also know that there are different levels of consciousness, which we have to explain, and that individuals may be aware even when they show no obvious signs of consciousness.

 

This is a recent example, which I think was quite shocking to many people.  It is a study that was carried out in Cambridge.  They trained subjects, during fMRI experiments, either to imagine themselves walking through their house – spatial imagery, which activates a particular part of the brain – or to imagine playing tennis, which activates another, motor, part of the brain. It was a direct instruction to think about these things, and it generated highly distinguishable patterns of activation in the brain.

 

What they found in the study was that 10% of patients who were in a defined “vegetative state” – showing no obvious signs of having the ability to be consciously aware – were able to perform this task on instruction.  So, just like control subjects, they showed activations in the motor area or the spatial area.

 

They then went on from this to say: well, if we can get them to generate specific patterns of activation in their brains, we can get them to answer questions on the basis that, for example, motor imagery means yes and spatial imagery means no – and they showed that they could do this.  So they asked questions like “What’s your father’s name?  Is it Alexander – yes or no?” and “Do you have any brothers – yes or no?”, and the patients were able to use this very simple way of feeding back information, generating a spatial or a motor imagery map, to answer them.  So some of them, at least, are capable of conscious awareness, even though they show absolutely no outward signs of it.

 

Other studies have tried to look at very basic changes in consciousness – for example, the emergence of a conscious sense of thirst, or of air hunger, as a result of manipulations of physiology. These have shown increases in signals, particularly in some key areas of the brain like the cingulate, as these states build up, reaching a maximum at the point at which the subject reports being consciously aware of a raging thirst or of needing to take in oxygen.

 

Other studies, in which we have also been involved, have used anaesthesia or sleep as a way of trying to understand the difference between a conscious brain and an unconscious one.  These have increasingly shown that, during the lack of consciousness caused by anaesthesia or sleep, there seems to be a very large-scale loss of cortical integration.  In rat studies, for example, this was shown particularly by a weakening of the feedback pathways from the frontal cortex to the visual cortex at the back of the brain, so the normal strong feedback, in both directions, broke down.

 

This is a human brain, where electrical stimulation is given to a particular area of the brain, while the subject is either awake or asleep, and the big difference is that when they are awake, the electrical stimulation causes an integrated change of activity not just where you stimulate the brain, but also all around it in a sequence.  So this cross is the maximum area of activation, and you can see it has moved well away from the circle, which is the area where the electrical stimulation is given.  However, when the same subjects are asleep, this does not happen.  Information does not seem to transmit around large areas of the brain.

 

We have done recordings in sheep, recording electrical activity in three different structures of the brain, either when the animals are resting – not doing anything – or when they are looking at face pictures, and we use a particular method of establishing connectivity which comes from economics.  It is called Granger causality, and it allows us to quantify the strength and direction of functional connections between structures.  What you can see is that there is a very nice unidirectional flow of information across these three different regions, the temporal cortex and the cingulate, when the animals are in their control state; it is in the reverse direction, in fact, when they are looking at face pictures.  However, when the same animals are anaesthetised, there is a breakdown of this unidirectionality, there is no longer any connection between the left and the right hemispheres, and there is a sudden increase in the number of connections within structures.  So it would appear that there is a kind of loss of the integrated flow of information across the brain when you lose consciousness, and instead, as we have shown, an increase in local processing.

Perhaps what is happening is that, during consciousness, there is a widespread, integrated flow of activity in the neocortex which generates a meta-representation – a representation of a representation.  The representation itself is the different nodes in the circuit responding to, for example, a particular set of stimuli; during consciousness, these are all linked up by a coordinated, integrated, unidirectional flow of information across structures, which forms this meta-representation on top of the physical representation of whatever it is. It is just an idea, but perhaps once you go beyond a static representation of information to a simultaneous flow of information across wide areas of the brain, that flow turns into the conscious meta-representation that we experience.  When information is processed unconsciously, we do not form that meta-representation, because there is a lack of integrated flow between the different cortical processing nodes. Indeed, you get an increase in information processing within these nodes, to compensate for the fact that you cannot generate the meta-representation.  It is as though you are boosting the simple, automatic feedback loops within structures in order to process information, because you do not have the coordinated activity across large areas of the brain needed to form a meta-representation.
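For readers who want to try the connectivity measure itself, pairwise Granger causality is available in standard statistics packages; the sketch below uses statsmodels on synthetic signals (the lecture does not say which implementation was used, so this is only one plausible illustration).

    # Pairwise Granger causality, the directed-connectivity measure
    # borrowed from economics. Synthetic signals stand in for recordings:
    # y is driven by the past of x, so x should "Granger-cause" y.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(1)
    n = 2000
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.standard_normal()

    # Column order matters: the test asks whether the SECOND column helps
    # predict the FIRST beyond the first column's own past.
    data = np.column_stack([y, x])
    res = grangercausalitytests(data, maxlag=2, verbose=False)
    p = res[2][0]["ssr_ftest"][1]  # p-value of the F-test at lag 2
    print(f"x -> y: p = {p:.2e}")  # tiny p-value: strong directed influence

Running the test in the other direction (columns swapped) gives a large p-value, which is what makes the measure directional rather than just correlational.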

 

This may also help us to understand why you see the same patterns of structural activation whether information is being processed consciously or unconsciously: yes, the same structures are activated, but when you are actually consciously experiencing whatever it is, they are all linked up in a flow, which causes the emergence of this meta-representation.

What we need, though, in order to really understand these things – and not just consciousness – is the ability to look at connectivity within the brain in much more detail.  This kind of approach, using these causality algorithms – this is fMRI data now from humans, from 20 Chinese subjects in fact – gives us the ability to map resting-state connectivities between 90 different regions of the brain.  This is going to be perhaps the most powerful way of unlocking what is actually going on in the brain, whether conscious or unconscious, or during performance of one behaviour compared to another: when we can start to dissect out the functional connection changes that are occurring, not just which areas of the brain are activated or deactivated.

 

So, in future, that means for sure that there have to be much stronger links between mathematicians, computer scientists and, of course, neuroscientists.  There has to be a greater emphasis on revealing key functional changes in the brain – connectivity changes.  We need a better understanding of the temporal and patterning aspects of neural encoding, and, to help us still further, we also need further advances in technologies for measuring the activity of the working brain in real time.

 

Thank you very much.

 

©Professor Keith Kendrick, Gresham College 2010
