Computational modeling of the brain

Neuroscientist Sylvain Baillet on the Human Brain Project, implementing the brain in silico, and neural networks

videos | November 22, 2016

If we want to talk about computational modeling of the brain, we have to look at the notion of the brain as a computer. That is one perspective under question: the idea that the brain can be identified with a computer, or reduced to one. We can discuss that philosophically and technically, but if we simplify things and look at a computer, there is a hardware part and a software part.

I think it is not an exaggeration to say that the hardware part of the brain is relatively well understood: we understand, for instance, the architecture and organization of the brain as a structure, with different elements at different scales. In this respect, a great deal of research effort these days goes into implementing this architecture in software simulations and also on actual computer chips whose architecture is that of elementary assemblies of neural cells. We call these approaches neuromorphic, in the sense that you implement, in a non-biological substrate, an architecture that really mimics the brain. You could then hope that by using this architecture you would learn about brain function, or that you would obtain results at least equivalent in performance to those of the brain: performance both in terms of computational power, which, I think we can agree, is quite high for the brain, and in terms of efficiency.

And that is perhaps the greatest attraction. To give you an idea of the brain's computational power, reduced to the number of operations it can perform, we first need to count the little components that can perform operations. Of course we think about neurons: typically, the human brain has around 100 billion neurons. But it is not only the neurons that perform the computation; you can reduce the elementary operations to those of the synapses, and every neuron is typically interconnected with other neurons through on the order of 1,000 to 10,000 synapses. So you multiply 100 billion by about 1,000 to 10,000 and end up with about a quadrillion elementary operations that the brain can perform at any given moment. The brain is extremely dynamic, and each of these elements can perform anywhere between 10 and 100 operations per second. So we are looking at a capacity of the brain, as a computer, of around 1 to 10 petaflops, that is, 10^15 elementary operations per second. Today's high-performance computers are barely reaching, or have just recently surpassed, that capacity: the world record is around 50-55 petaflops, on the latest supercomputer.
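The back-of-envelope arithmetic above can be written out directly. The figures are the rough orders of magnitude quoted in the text, taken at their lower bounds, not measurements:

```python
# Back-of-envelope estimate of the brain's computational capacity,
# using the order-of-magnitude figures quoted above (lower bounds).
neurons = 100e9            # ~100 billion neurons
synapses_per_neuron = 1e3  # 1,000-10,000 synapses per neuron (lower bound)
ops_per_second = 10        # each element updates ~10-100 times per second

total_synapses = neurons * synapses_per_neuron  # ~1e14 elementary units
capacity = total_synapses * ops_per_second      # ~1e15 ops/s = 1 petaflop

print(f"{capacity:.0e} elementary operations per second")  # → 1e+15
```

Taking the upper bounds instead (10,000 synapses, 100 ops/s) gives 10^17, so the speaker's 1-10 petaflop figure is the conservative end of the range.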

It is not so much an issue of how many operations can be performed, but a matter of efficiency in terms of energy consumed. A high-performance supercomputer like the one I was alluding to requires an energy supply of about one to five megawatts. That is completely crazy when you compare it with the efficiency of the brain: for the same amount of operations, at least in terms of theoretical capacity, the human brain requires an energy supply equivalent to a light bulb of 10-20 watts. We are looking at things that differ tremendously in efficiency, which makes it very puzzling for computer scientists trying to mimic this extremely powerful object that can perform so many elementary operations at once with a very limited supply of energy. In this respect it's fascinating, and it's one trend of research in the computational modeling of the brain: actually trying to use the brain as a model for the computer. So it is like two sides of the same coin.
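The efficiency gap can be made concrete with the same quoted figures (a petaflop-class machine at roughly 5 MW versus a brain at roughly 20 W; rough orders of magnitude, not measurements):

```python
# Energy efficiency: petaflop-class supercomputer vs. the brain,
# using the round figures quoted above.
ops_per_second = 1e15      # ~1 petaflop, assumed the same for both

supercomputer_watts = 5e6  # ~5 MW power draw
brain_watts = 20           # ~20 W, a light bulb

super_efficiency = ops_per_second / supercomputer_watts  # ops per joule
brain_efficiency = ops_per_second / brain_watts          # ops per joule

ratio = brain_efficiency / super_efficiency
print(f"brain is ~{ratio:,.0f}x more energy-efficient")  # → ~250,000x
```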

Whether we are going to learn about brain function by implementing the brain in silico is a different question, and I guess the answer may be yes. But as I said, one of the motivations for developing these neuromorphic computing solutions is to reach greater efficiency in terms of energy needs. If these solutions become available, then a neuroscientist may use those brain-like computers as models that they can observe, and on which they can implement or test hypotheses about brain function or dysfunction. You can look at how the functions of the brain in silico would be altered by modifying some of the parameters, and ask whether that would lead to pseudo-behaviors like those observed in some patients, for instance. You could implement models for brain repair as well. That makes it a very attractive alternative to current practices in biological research, which are relatively limited for testing this kind of hypothesis: you have to look at animal models, which are imperfect, or at patients, where the treatment options for a given patient are relatively limited, because you are dealing with a person and not a computer, and you obviously don't want to make mistakes.

So that's one way of modeling the brain as a computer; this is the hardware portion. But there is also a software portion, which is an object of active research everywhere. There is, for instance, the Human Brain Project in the EU, which has triggered a lot of interest and one of whose deliverables and objectives is to implement a software version of the brain. At this time there is no implementation in silico; rather, it is a software program that would basically model every single cell, maybe not of the whole brain but of some brain regions, for instance the olfactory system or the somatosensory system of a rodent, which was actually published last year. The approach they are taking is to implement equations, if you will, for each and every single cell. The way these different cells interact is also modeled with software modules. Basically you have a supercomputer running this software, and you can proceed the same way as I was describing before, by observing the end product of this brain activity, which emerges spontaneously or is altered by pseudo-stimuli that you can also model in software. This is very interesting, very flexible, and it opens a great perspective, again, in terms of modeling brain functions and dysfunctions. It remains uncertain whether this can scale up to the dimension and complexity of the whole brain, and whether we are going to learn how the brain implements function and behavior; this is definitely a very active field of research in neuroscience.
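As a minimal illustration of this "equations per cell" idea, here is a sketch of a leaky integrate-and-fire neuron. This is far simpler than the detailed biophysical models the Human Brain Project uses, and the parameter values below are generic textbook choices, not taken from that project, but it shows what it means to simulate a cell's dynamics with an equation:

```python
# A leaky integrate-and-fire neuron: the membrane potential V integrates
# dV/dt = (v_rest - V + R*I) / tau, and the cell emits a spike and resets
# whenever V crosses the firing threshold.
def simulate_lif(current, steps=200, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """Return the list of time steps at which the neuron spiked."""
    v = v_rest
    spikes = []
    for t in range(steps):
        v += dt * (v_rest - v + r * current) / tau  # Euler integration step
        if v >= v_thresh:   # threshold crossed: record a spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

print(simulate_lif(current=2.0))  # constant input drives periodic spiking
print(simulate_lif(current=0.0))  # no input: the cell stays at rest
```

A whole-region simulation of the kind described above couples many such equations together, with additional modules modeling how each cell's spikes feed into the inputs of its neighbors.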

There is also yet another side of these computational aspects relating to brain function and brain activity, and that is the bridge to machine learning techniques, which have been exploding over the past 5-10 years. It's very interesting to look a little at the history of how these techniques developed. The field has gone through a Renaissance recently: back in the 1970s and early 80s there were pioneers in computer science and mathematics who looked at basic models of neural networks and their mathematical formalization. That research was then more or less suspended and didn't get much traction in industry or in the rest of the neuroscience community, because for these networks to perform properly at classifying images or translating natural language, or even in their capacity to learn, they were limited by two factors back in the 1980s. The first was the limited power of computers; if you wanted more computational power, it was very difficult to access those resources, which is not the case anymore. The second was that to train these networks you had to have a lot of data available, like thousands and thousands of images, for the network to be able to classify the different elements in a picture: a face, an animal, or even higher-order categories. This is not the case anymore, because the data is out there: huge databases of classified images, but also sound bites and other objects of interest, are readily available on the internet and have been built up over the years.
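The 1980s-era models described above can be illustrated with the simplest possible case: a single perceptron trained on a toy classification task. This sketch (with made-up data, standing in for the "thousands and thousands of images" a real network needs) shows the learning rule those pioneers formalized:

```python
# A toy perceptron: adjust weights in the direction of each classification
# error until the decision boundary separates the two classes.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                 # -1, 0, or +1
            w[0] += lr * err * x1          # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Linearly separable toy data: class 1 only when both inputs are on (AND).
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(data, labels)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in data]
print(preds)  # → [0, 0, 0, 1]: the network has learned the AND function
```

Modern deep networks stack many layers of units like this one, which is exactly why the two bottlenecks mentioned above, compute and training data, were decisive.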

So today we are looking at this revolution in machine learning and how it is penetrating industry and consumer goods. But the question is: is it really a translational aspect of neuroscience, and are we going to learn how the brain works from machine learning? As of today, I don't think that's the case. The implementations of machine learning are remarkable and full of promise, but they also pose societal and ethical challenges. Looking just at the scientific portion of it, you would hope that by observing these neural networks in action, performing classification on series of images or translating natural language, you would learn how the brain does it. And that's not the case, because although the architecture of the software mimics that of brain networks, the mechanisms by which these software components realize the function are not clear, meaning that it is very hard to generalize and understand the mechanisms the network has implemented to perform a given task with high performance. In this respect you don't have insight into the implementation of a given function, and therefore you cannot bridge it with the actual brain activity that may be observed in humans. So, taken together: with the emergence of new mathematical tools, immediate access to huge amounts of data, and computer resources, all the elements are in place to approach this fascinating question of better understanding brain activity, brain function, and brain dysfunction with new methods and new resources that we didn't have until recently.

Professor of Neurology, Neurosurgery & Biomedical Engineering, MNI Killam and FRQS Senior Scholar and Interim Director, McConnell Brain Imaging Centre, McGill University.