Self-Aware Networks
The history of AI is based on two aspects. One of them is adaptation. In an old book from the 1950s called ‘Automata Studies’, published by Princeton University Press, there is a paper by William Ross Ashby talking about adaptation and self-adapting machines. That is one line of artificial intelligence. The other line of artificial intelligence is based on logic: if you can express certain concepts logically, then you can deduce their consequences. Once the logic is defined and the rules are defined, then when your artificial intelligence program encounters a new situation, it applies its logic to the situation in order to decide what should be done or what it must do. These are the two traditional areas.
As we have had more and more complex computers, and especially as we’ve had the Internet, which seems to regulate itself, to run itself (which, of course, is not completely true, but we have this feeling that there is a complete system running independently of us), we ask ourselves: is there some form of intelligence which is based simply on local observation and local decision, without being explicitly logical or explicitly designed in advance? This is the subject of the area of self-aware networks.
As you look at things, you can think of two obvious areas of application. One is economics, where you have millions, billions of agents which are interacting according to their own rules; prices form, certain things happen, systems are often stable, but sometimes there are catastrophes. The other example, of course, is the Internet, where you have millions of computers that are actually managing it; I do not mean the ones at the edge, but the ones that are managing the Internet itself, the computers that we call routers, which are exchanging information and decisions with each other and which are maintaining a stable, running system. So these are the two examples that we think about: the economy on the one hand and the Internet on the other.
The methods that we use to design the Internet are based on probability, on stochastic processes, and on queueing theory. These methods give us fairly stable predictions within the usual tolerance levels accepted in physics, for instance, or in science generally. So we do have mathematical methods.
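The talk does not go into the mathematics, but to give one concrete instance of the kind of queueing-theory prediction meant here, the classic M/M/1 model (a standard textbook result, not something from the talk itself) predicts the average delay through a single router:

```python
# Minimal illustration (not from the talk): the classic M/M/1 queue,
# one of the basic queueing-theory models used to predict network delay.
# A router is modelled as a single server with Poisson arrivals (rate lam)
# and exponential service times (rate mu); the mean time a packet spends
# in the system is 1 / (mu - lam), valid only while lam < mu.

def mm1_mean_delay(lam: float, mu: float) -> float:
    """Mean sojourn time (queueing + service) of an M/M/1 queue."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

# Example: packets arrive at 800/s and the router serves 1000/s.
print(mm1_mean_delay(800, 1000))  # 0.005 s, i.e. 5 ms average delay
```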
So if we have these systems which we don’t actually design top-down anymore, can we let them design themselves bottom-up?
And this is the concern for self-aware networks in the more technical, technological context.
In this case, what we have been doing is designing the way packets travel on the Internet. What’s a packet? A packet is a basic unit of data which contains information and which is travelling independently inside the Internet. If you look at a packet, you will see the name of the node that sent it, its source, and the name of the node that should receive it, its destination (when I say the name, it is really a numerical identifier), so there is that information. Then there is the data content itself, and then the packet moves autonomously in the network, except that in the Internet of today, the path is predetermined.
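As an illustration (the field names are invented for this sketch, not an actual wire format), the packet just described amounts to something like this in Python:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Sketch of the packet just described; field names are illustrative.

    In the real Internet the numerical identifiers would be addresses,
    and the path between source and destination is fixed in advance
    by the routing tables."""
    source: int        # numerical identifier of the node that sent it
    destination: int   # numerical identifier of the node that should receive it
    payload: bytes     # the data content being carried

p = Packet(source=1, destination=7, payload=b"hello")
```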
What’s wrong with that? Why aren’t we happy with that? Well, we are not happy with the path being predetermined because the conditions in the network are changing. How have we predetermined it? Usually, we predetermine the path based on the number of nodes that it will visit; that is, we say: better to visit fewer nodes than more, try to make a path through the minimum number of nodes. However, when the conditions of the network change, for instance, when there is more traffic on certain parts of the network, then certain paths which are short in number of nodes can be very long in delay. They can be very bad because they cause traffic to be lost. So we have to adapt; the network should be self-aware and should be able to control itself so that it uses the best paths all the time and not necessarily the shortest paths. You can now apply the concept of self-awareness to the Internet and ask it to be aware of its own situation, self-aware, and to modify the way it moves packets so that the best outcome is obtained.
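A minimal sketch of the difference, on an invented toy topology: the hop-count shortest path and the delay-aware path disagree as soon as a short link becomes congested.

```python
import networkx as nx

# Invented 5-node topology: each edge carries a delay in milliseconds.
# The route A-B-D has fewer hops, but the link A-B is congested.
G = nx.Graph()
G.add_edge("A", "B", delay=50.0)   # congested: short in hops, long in delay
G.add_edge("B", "D", delay=5.0)
G.add_edge("A", "C", delay=5.0)
G.add_edge("C", "E", delay=5.0)
G.add_edge("E", "D", delay=5.0)

print(nx.shortest_path(G, "A", "D"))                  # ['A', 'B', 'D']: fewest nodes, 55 ms
print(nx.shortest_path(G, "A", "D", weight="delay"))  # ['A', 'C', 'E', 'D']: 15 ms
```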
How do we do this? We do this through three kinds of packets. The first kind of packet is the one that is carrying the data. They have to be there, but we don’t expect them to make the decisions themselves: we will make the decisions with the help of smart packets. Smart packets are a minority: we don’t want to add too much additional work inside the network just to have self-awareness, so we have a minority of packets whose role is to find out what is going on. We call them smart packets. Then, we have to bring the information that has been discovered by the smart packets back to the source that will have to decide how to send its traffic: these packets are called ACK packets, acknowledgement packets. So we have the smart packets that look for the good paths and check how things are going, and we have the ACK packets that bring this information back to the source; the source then says to the data packets: please, go this way. But this advice will change all the time as a function of the information that the ACK packets are bringing back.
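A hedged sketch of the three packet roles and of one round of this control loop; the candidate paths, delays and probing scheme are invented for illustration:

```python
import random
from enum import Enum, auto

class PacketKind(Enum):
    DATA = auto()   # payload packets: carry the user's data along the advised path
    SMART = auto()  # a small minority: probe the network and measure conditions
    ACK = auto()    # acknowledgements: bring the smart packets' findings back

# Invented candidate paths, each with a time-varying delay in milliseconds.
paths = {("A", "B", "D"): lambda: 50 + random.gauss(0, 5),
         ("A", "C", "E", "D"): lambda: 15 + random.gauss(0, 5)}

def one_round():
    # SMART packets probe each path; ACKs return the measured delays.
    measured = {path: probe() for path, probe in paths.items()}
    # The source then advises the DATA packets to take the best current path.
    return min(measured, key=measured.get)

print(one_round())  # usually ('A', 'C', 'E', 'D'), but it changes with conditions
```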
How do we collect this information? We collect this information from the smart packets with the help of Oracles. What’s an Oracle? Well, you’ve all heard about the Oracle of Delphi in Greece, which existed in antiquity: we’re not talking about that one. The Oracles are actually random neural networks sitting in each of the nodes. Such a neural network collects information coming from the smart packets, collects information coming from the ACKs, and, based on this, makes a reinforcement-learning-based decision to improve the next step of the packets. It is saying: well, next time, better do this, it is going to work better. So the smart packets are going around trying to find things out; the Oracles, the random neural networks sitting at the nodes, are modifying the paths; and the resulting information is brought back to the source, which gives the instructions to the next stream of payload, the standard packets carrying the data.
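The random neural network and its exact learning rule are not spelled out in the talk; the following is a generic reinforcement-learning stand-in in the same spirit: a per-node Oracle scores each outgoing link, treats the inverse of the delay brought back by an ACK as the reward, and compares it against a smoothed threshold of past rewards to decide whether to reinforce or penalize its last advice.

```python
class Oracle:
    """Hedged sketch of a per-node 'Oracle'; a generic reinforcement-learning
    stand-in, not the talk's actual random neural network."""

    def __init__(self, next_hops, alpha=0.8):
        self.score = {h: 1.0 for h in next_hops}  # preference per outgoing link
        self.threshold = 0.0                       # smoothed historical reward
        self.alpha = alpha                         # smoothing factor

    def decide(self):
        # Advise the next hop with the highest current score.
        return max(self.score, key=self.score.get)

    def learn(self, hop, measured_delay):
        # Reward is the inverse of the goal value brought back by an ACK:
        # lower delay means higher reward.
        reward = 1.0 / measured_delay
        if reward >= self.threshold:
            self.score[hop] *= 1.1   # the advice did at least as well as usual
        else:
            self.score[hop] *= 0.9   # it did worse than usual: penalize it
        # Fold the new observation into the smoothed threshold.
        self.threshold = self.alpha * self.threshold + (1 - self.alpha) * reward

oracle = Oracle(next_hops=["B", "C"])
for _ in range(3):                          # ACKs report delays over three rounds
    oracle.learn("B", measured_delay=50.0)  # slow path loses score over time
    oracle.learn("C", measured_delay=15.0)
print(oracle.decide())  # 'C': the faster route wins as the threshold adapts
```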
If you do this, you must have something we call an objective or goal function. The goal function is a mathematical representation of what this network is trying to do well. The network may be trying to do better through self-awareness for different reasons. One can be, for instance, to make things as fast as possible so that the packets arrive, on average, as quickly as possible. It can do this to avoid the loss of packets; it can do this to reduce energy consumption, which is huge: electricity is by far the largest operating expenditure of the Internet, and it has a CO2 impact as well, of course. So the network using self-awareness can have different objectives: the objective can be energy, it can be delay, it can certainly be the loss of packets, or it can be a mix of these things. It can have a goal function which mixes these things and tries to make a compromise between them. This goal function is actually given to each of the neural network Oracles: the random neural network Oracles are aware of the goal function, and they modify their decisions so that they do the best with respect to the particular goal function that the network wants to apply.
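A sketch of what such a mixed goal function could look like; the metrics and weights here are illustrative choices, not values from the talk:

```python
def goal(delay_ms: float, loss_rate: float, energy_j: float,
         w_delay: float = 1.0, w_loss: float = 100.0, w_energy: float = 0.1) -> float:
    """Composite goal function that the network tries to minimize.

    The weights express the compromise the operator wants: here losing
    packets is penalized heavily, delay moderately, energy lightly.
    All names and weights are illustrative, not from the talk."""
    return w_delay * delay_ms + w_loss * loss_rate + w_energy * energy_j

# Each Oracle would receive this same function and steer its decisions to
# reduce it; changing the weights changes the whole network's behaviour.
print(goal(delay_ms=15.0, loss_rate=0.01, energy_j=2.0))  # 15 + 1 + 0.2 = 16.2
```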
This kind of system is something that has actually been implemented and that is running in various specific contexts, including protection against network attacks, so it has also been used in cybersecurity to route traffic so that you avoid danger and still achieve your objective of reaching the destination quickly.
In conclusion, self-aware networks or self-aware systems are a third way to advance in artificial intelligence, the idea being that within a system, we incorporate the capacity to self-observe, to self-measure, and to optimize its own objective function, not an objective function or a method or a logic that is imposed from the outside. This has, of course, a lot of open questions; often, you can pose these types of ideas and these types of questions in a restricted context: for instance, a context like networks, like the Internet. You can do it to a certain extent in systems of agents, and we did this a few years ago for emergency management. For instance, if you have a self-aware system whose objective is to save as many lives as possible, how does it act in the presence of an emergency such as a fire? And not just one fire but a fire here, an accident there, some other thing happening somewhere else? It has to dispatch ambulances, firefighting equipment and other resources to these different places simultaneously. How do you do that? We looked at the application of self-aware systems to this particular field in a large project, made some proposals, and even took some patents with industry. The industry adopted some of our recommendations for these kinds of applications.
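The emergency-management system itself is not described in detail here; as a minimal sketch of the dispatching decision it faces, one can cast it as a classic assignment problem, matching available units to simultaneous incidents so that the total response time is minimized (all numbers invented):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented example: rows are available units (two ambulances, one fire
# engine), columns are simultaneous incidents; entries are estimated
# response times in minutes. An unsuitable pairing, such as sending an
# ambulance to fight a fire, is given a prohibitively large cost.
cost = np.array([[ 4.0, 12.0, 99.0],   # ambulance 1
                 [ 9.0,  3.0, 99.0],   # ambulance 2
                 [99.0, 99.0,  6.0]])  # fire engine

units, incidents = linear_sum_assignment(cost)  # minimizes total cost
for u, i in zip(units, incidents):
    print(f"unit {u} -> incident {i} ({cost[u, i]} min)")
# A self-aware dispatcher would re-solve this as new incidents arrive
# and as its goal function (e.g. expected lives saved) dictates.
```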
So, you can apply it in these contexts. But when the system becomes very complex, in the sense that it involves many different forms of physical entities, the question of self-awareness and self-organization becomes much more difficult. Another area where it is much more difficult is, for instance, organizing flights above the Atlantic in this manner: huge safety issues come up. So the question then is: can I allow myself to use these new concepts in a context where there are such huge issues of safety, reliability and regulation?