Visual perspective taking is the ability to work out what another person can see from a different point of view. When my children have breakfast, as they do every morning, we have a very busy breakfast table: a child sitting here and a child sitting there, a bowl of sugar, a jug of milk, and a big packet of cornflakes. Depending on how you position the packet of cornflakes, maybe one person can see the bowl of sugar but can’t see the jug of milk, and the other person can see the jug but can’t see its handle, because the box is hiding it. Visual perspective taking is all about working out what other people can see from different points of view.
It sounds like a very simple thing, but it’s fundamental to our social interaction with other people, because if you can track what other people can see, you’ve got a very good idea about what other people know. It comes up in the child’s game of hide-and-seek: a child who wants to hide has to do it effectively, so that their friends can’t find them. In a more ecological context, hide-and-seek could come up in combat, in hunting, in times when people want to deceive each other. If you can track where other people are and what they can see, then you would be able to deceive them and steal the thing that they’ve hunted, or steal the fruit from their garden; or, if you’re in a war or battle situation, you would be able to attack the other person with a great advantage, since they don’t know that you’re coming.
Being able to track what other people can see is a very useful skill for many different types of social interaction. There’s a variety of interesting research that tries to understand what it is that lets us track other people’s visual access, that is, what they can see. It’s also very interesting because what somebody can see feeds into what they know, which leads to theory of mind. Theory of mind has been a central theme of social psychology and social neuroscience since 1978. It’s about understanding that other people may have a different belief to you.
So, another person could have a false belief. When I left home this morning, I might have thought that the jug of milk in my fridge was full, but by now my children have come home and finished all of the milk; so I believe the jug is full, but they know that the jug is empty. I have a false belief: they haven’t told me yet that I have to buy some more milk on the way home, because otherwise I won’t have anything for my breakfast tomorrow. I might have a false belief about the state of something in the world. Being able to understand that other people have false beliefs seems to be one of those core things that may differentiate people from non-human primates. There has been a lot of philosophizing about how it is that we could have this type of understanding of false belief.
That’s something that perspective taking leads on to, but let’s return to the core topic of perspective taking. There are a variety of different ways to look at it. In particular, a very important distinction has been made between level one perspective taking and level two perspective taking.
Level one perspective taking is just about what you can see: can you see this object or not? If I held a barrier here, I could see the ring on my finger, or I could see something in front of the barrier; the question is simply whether the thing is visible or hidden. Level two perspective taking is a bit more advanced. That’s the question of what something looks like from your point of view. If there were a teapot here on the table, then depending on how I turn it, either I can see the handle but not the spout, or I turn it so I can see both the handle and the spout. Somebody sitting at the other side of the table would have a different view of that teapot to me. It turns out that level two perspective taking, understanding what something looks like from a different point of view, is a much more advanced skill than level one.
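The level one question described above, "can this observer see that object or not?", is also how artificial systems typically implement it: an object is visible if the straight line from observer to object is not blocked by an occluder. As a minimal sketch (my own illustration, not from the talk, with the breakfast-table coordinates entirely made up), in 2D this reduces to a segment-intersection test:

```python
# Minimal sketch of "level 1" perspective taking in 2D:
# a target is visible to an observer if the straight line between them
# does not cross any occluding segment (e.g. the edge of a cereal box).

def _ccw(a, b, c):
    # Cross product sign: positive if a, b, c turn counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # True if segment p1-p2 properly crosses segment q1-q2.
    d1 = _ccw(q1, q2, p1)
    d2 = _ccw(q1, q2, p2)
    d3 = _ccw(p1, p2, q1)
    d4 = _ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def can_see(observer, target, occluders):
    # Level 1 check: no occluding segment blocks the line of sight.
    return not any(
        segments_intersect(observer, target, a, b) for a, b in occluders
    )

# Hypothetical breakfast table: a cereal-box edge between two people.
cereal_box = [((2.0, 1.0), (2.0, 3.0))]
print(can_see((0.0, 2.0), (4.0, 2.0), cereal_box))  # box in the way -> False
print(can_see((0.0, 4.0), (4.0, 4.0), cereal_box))  # clear view -> True
```

Level two perspective taking, working out what the object *looks like* from the other seat, would need much more than this: a full model of the object's shape and a projection from the other viewpoint, which is why it is the computationally (and developmentally) harder problem.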
For example, we have looked at children with autism, and we know they have difficulties with theory of mind and with many kinds of social interaction. Many studies have shown that these children are quite good at level one perspective taking: they can work out what another person can see. But they find it much harder to do level two perspective taking, and to understand that the same object may look different to different people sitting on different sides of the table, each with a different point of view on that object. So, by doing these kinds of studies, we can start to separate which parts of social cognition are easy for children with autism and which parts are difficult, and also say something about the different kinds of mechanisms that underlie these different aspects of perspective taking.
Some of the questions that are very important in the area of perspective taking at the moment are, firstly, developmental: at what age can children do this? For a long time, up until 10-15 years ago, it was believed that these tasks are very difficult for any children under 4 or 5 years old, and that toddlers and babies had no idea about things like visual perspectives and theory of mind. But in the last 10 years there has been an increasing amount of evidence showing remarkably sophisticated abilities to make sense of perspective, theory of mind, and other people’s points of view in very young babies.
It remains quite controversial. People are still looking for ways to pin this down and say how sophisticated young infants are in their ability to understand what other people know and what other people can see. There is also the question of the transition from the kind of understanding that infants have to the kind of understanding that four-year-olds or adults have: is there a sudden step change, or do infants really have the same type of understanding as adults? That’s one very active area of research.
The other question people are very interested in is how much adults do this automatically. Do we have to think about it every time we want to work out what somebody else can see, so that it requires cognitive effort and is quite a challenging process, and if you’re not paying attention, you get it wrong? Or, as some people suggest, are there mechanisms that just do it, without you having to think about it at all, and simply give you the answer? If there is that kind of automatic mechanism that gives you the answer without you even knowing it, then that tells us something about how it works in the brain: it must be a much simpler and much more robust process, but perhaps more limited in terms of flexibility. If you are trying to keep track of two or three people, because you’re in a game of hide-and-seek and trying to get past somewhere without them seeing you, then that becomes a much more challenging operation.
Where are the boundary lines between what you can do automatically, without thinking about it, and what requires you to be really engaged and working at it? It’s a very important question. Are those two things really the same mechanism, or are they two different mechanisms that you’re using in quite different ways?
We love to use things like neuroimaging to look at this, but it’s quite challenging to do some of these studies in a brain scanner, such as an MRI. A brain scanner is a small, dark, noisy tube: you have to lie very still in there, you can’t move your head around, and you can only press a couple of little buttons. Things like perspective taking really require you to be looking around the real world, thinking about what’s going on and who can see what from where. So, it’s quite hard to do that in a small noisy tunnel where you have to stay still.
This is where we’re using new technologies like virtual reality to try to bring these things into neuroimaging. We’re also using newer methods, like functional near-infrared spectroscopy, that allow us to record brain activity while people walk around and engage in social tasks. These new methods and techniques are going to show us a lot more about what’s going on in some of these very fundamental brain processes.
The future of the field of perspective taking lies in this developmental question, and in the question of what’s different in adults. There’s also an increasing amount of work looking at skills like perspective taking in non-human primates, in various apes and in macaque monkeys, to try to pin down what they can and can’t do in this area, which remains very controversial. I think increasingly we’re also going to be using some of these ideas to think about how we can build artificial systems, either robots or virtual reality characters, which are able to take perspectives, and how much they need that kind of facility in order to have sensible social interactions with people. If your sat-nav had some idea of what you can and can’t see, then it might be able to give you much better directions and stop you getting lost in quite the same way.