
Cloud Computing

To a certain extent, people might say there’s nothing new about cloud computing. It started many years ago, perhaps in the 1970s, with what we used to call ‘time-sharing systems’. The idea was that you had a large computer shared by many users, who shared not just the computation but also the data and information – examination records, for instance. These time-sharing systems were very popular in universities because they let you manage a large number of students through one system by offering them the same kind of services and the same kind of structure. For instance, if you were marking an exam, you would take the same file about the student and modify that file, rather than write a separate piece of paper for each student or keep a separate computer record for each course and each student. So this idea of putting things together and sharing them in a computing environment goes back to the 1970s.

However, it came back very strongly much more recently, for a couple of reasons. One reason is that through the internet we can access many things without knowing what they are or where they are. For instance, when you do a Google search from your mobile phone, you go through the wireless mobile network to something called a base station, and the base station connects you to some computer hosting the Google database or the Yandex database. You don’t know where that computer is, you don’t even know which machine is serving you, you have no idea – but you are using it, it is perfectly reliable, and you are using exactly what we call cloud computing.

The idea of cloud computing that you use from your mobile phone also carries over to a more global level, where you deal with the data of enterprises. Different companies have the same functions: they have to pay people, run human resources, do promotions and so on. All these things are standard, and rather than doing them all by themselves, companies use a large cloud service that provides all of them. They don’t need to know where the computers are, who is managing them or how they run, as long as their data is secure and things get done quickly enough. So this is the birth of cloud computing.

Cloud computing, though, has a certain number of inconveniences: there are good sides, and there are bad sides. I’ve mentioned security: security is very important because you don’t want your data, sitting in the same system and processed by similar programs, to leak to a competitor – another company that works in exactly the same market as you and that may want your data to find out who your customers are or what business policies make you successful or not. Despite the sharing that the cloud enables, you don’t want these things to be shared. This is one problem.

The second problem with the cloud is energy. As you concentrate a lot of computation on these very large machines, you end up with infrastructure that consumes megawatts of electricity. It uses this electricity all the time; even when the machines are not very busy, the electricity is being used, so you have a huge electricity consumption in these very large centres. They use electricity not only to compute and transfer data inside the system but also to cool the machines down. So, if you spend 1 unit of energy to make the thing work, you may be spending another 0.5 to cool it down. This high concentration of computation means a high concentration of electricity, heat and cooling.
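The ratio described here – total power drawn versus power spent on computation itself – is what data-centre engineers call Power Usage Effectiveness (PUE). A minimal sketch of that arithmetic, using the 1-to-0.5 example from the text:

```python
def pue(it_power_mw: float, cooling_power_mw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    the power that goes into computation itself."""
    total = it_power_mw + cooling_power_mw
    return total / it_power_mw

# The example from the text: 1 unit of power to compute, 0.5 to cool.
print(pue(1.0, 0.5))  # → 1.5
```

A PUE of 1.0 would mean every watt goes into computing; the 1.5 implied by the example means half as much again is spent on cooling and overheads.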

So you have two big problems that are arising: one is a security issue, the other is the energy consumption issue.

Among the biggest energy consumers in the world are Google and Amazon, which run large cloud servers. So you have the good and nice things about this, and then you have the bad things.

With respect to this, our research is largely about mitigating these bad sides. Today, we don’t talk about just the cloud anymore; we talk about the cloud, the fog and the edge. Some computations may be easy to do on your mobile phone, which has very low energy consumption: this is the edge. Another part of the computation may go to a computer like the one in my office: there are thousands of them, millions of them, and we don’t know where they are: this is the fog. They have very low power consumption because any one of them can put itself to sleep, and that’s okay. Why is it not okay if the cloud server puts itself to sleep? Because requests are coming in all the time, it cannot put itself to sleep; it cannot say, ‘Stop! I’m sleeping, I’ll wake up in ten seconds’ or ‘I’ll wake up in one minute’. That’s not possible; the cloud has to be always ready and always doing the maximum work, while fog machines can put themselves to sleep and be woken up. So that is the fog. And then you have the cloud itself.

So now we have the edge, the fog and the cloud. This is a current research topic. In fact, as we speak, I’m involved in preparing a new research project on managing the edge, the fog and the cloud. How do you dynamically move work around the system so that, on the one hand, you optimize the services it gives, quantified as statistical, probabilistic quantities, and, on the other hand, minimize the energy the system as a whole consumes, so that you do not end up with these huge energy consumers and continuous growth in energy use?
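The placement decision sketched above can be illustrated very simply. In this hypothetical sketch (the energy and latency numbers are assumed for illustration, not taken from the talk), each tier has an energy cost per job and an expected response time, and we pick the cheapest tier that still meets the job’s deadline:

```python
# Illustrative per-job costs for each tier (all numbers are assumptions).
TIERS = {
    #          energy per job (J), expected response time (s)
    "edge":  {"energy": 0.5,  "latency": 2.0},
    "fog":   {"energy": 2.0,  "latency": 0.5},
    "cloud": {"energy": 10.0, "latency": 0.1},
}

def place(deadline_s: float) -> str:
    """Cheapest tier whose expected response time meets the deadline."""
    feasible = {t: c for t, c in TIERS.items() if c["latency"] <= deadline_s}
    return min(feasible, key=lambda t: feasible[t]["energy"])

print(place(1.0))  # → fog: cheapest tier fast enough for a 1 s deadline
print(place(0.2))  # → cloud: only the cloud meets a 0.2 s deadline
```

Real systems treat response time as a probabilistic quantity rather than a fixed number, as the text notes, but the trade-off has this shape: tighter deadlines push work towards the energy-hungry cloud, looser ones allow the low-power edge and fog.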

I mentioned energy: energy is the major problem of computation today. If you take the whole energy consumed by information technology today, it is equal to the total energy consumed by Germany plus Japan, two major industrial powers. So there’s a huge amount of energy consumption: its impact is much larger than the CO2 impact of aeroplanes and of many other sectors. And it’s an area where the amount of energy consumed is constantly increasing. So that is a huge challenge. New research solutions will have to address this as a very high priority, looking at how you distribute work in this whole system so that you minimize the energy consumed while still offering an acceptable level of service.

With regard to the acceptable level of service, there is a commercial side to it, which one calls ‘service level agreements’. You say to your cloud provider: I’m giving you this work to do; I will pay you as long as the time it takes is less than some threshold. If the time is above this, there will be a discount. And there is another threshold, and if the time is above that one, then it’s not me who pays you, it’s you who pays me. So these service level agreements are very important, and they are also connected to the energy consumed, because energy is what the cloud, fog or edge operator is actually paying for. Energy is one of their major expenses; they pay more for energy than for the staff who run the systems. So the mathematical cost functions have an economic form that combines, on the one hand, the income or loss generated by the cloud system and, on the other hand, its energy expenditure. These come together in an optimization problem.
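The two-threshold agreement described above can be sketched as a piecewise payment function. The prices and thresholds here are hypothetical; only the structure follows the text: full price below the first threshold, a discount up to the second, and beyond that the provider pays the customer a penalty.

```python
def sla_payment(completion_time_s: float,
                price: float = 100.0,    # full price (assumed)
                t1: float = 1.0,         # full-price threshold (assumed)
                t2: float = 5.0,         # penalty threshold (assumed)
                discount: float = 0.5,   # fraction paid between t1 and t2
                penalty: float = 20.0) -> float:
    """Amount the customer pays; negative means the provider pays the customer."""
    if completion_time_s <= t1:
        return price                  # on time: full price
    if completion_time_s <= t2:
        return price * discount       # late: discounted price
    return -penalty                   # very late: provider pays a penalty

print(sla_payment(0.5))   # → 100.0
print(sla_payment(3.0))   # → 50.0
print(sla_payment(10.0))  # → -20.0
```

An operator’s objective then combines the expected value of this payment over all jobs with the energy bill of running them – the optimization problem the text describes.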

So, we have been working on these questions for a number of years now, and there are scientific publications that deal with this, such as the IEEE Transactions on Cloud Computing. Research in this area is carried out across the world. It’s also tied to high-performance computing – supercomputing.

The machines that sit behind a cloud server have to be very high-performance machines, since they will be dealing with very large amounts of work arriving at the same time.

In addition, there are very interesting networking research problems, because these cloud servers are housed in data centres. These data centres are composed of large numbers of high-performance machines, which have to interconnect with each other at very high speeds in order to transfer data. For instance, a job arrives here and needs data that was over there; that data has to be transferred quickly, and it may have been worked on by another machine in a previous step. So you have special network designs: for instance, we’ve done research with a major company called Huawei on the design of networks for data centres.
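A bit of back-of-the-envelope arithmetic (the data sizes and link speeds are illustrative assumptions, not from the talk) shows why these interconnects matter:

```python
def transfer_time_s(data_gigabytes: float, link_gbit_per_s: float) -> float:
    """Seconds to move the data over the link, ignoring protocol overhead.
    Gigabytes are converted to gigabits (x8) before dividing by link speed."""
    return data_gigabytes * 8 / link_gbit_per_s

print(transfer_time_s(100, 1))    # 100 GB over a 1 Gb/s link  → 800.0 s
print(transfer_time_s(100, 100))  # 100 GB over a 100 Gb/s link → 8.0 s
```

Waiting over ten minutes for input data would stall a job completely, which is why data-centre fabrics run at speeds far beyond ordinary internet connections.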

So we see that this notion of a cloud has now evolved into cloud, fog and edge – because of technology, of course, but also because of the energy costs of operating clouds in particular. There is a big optimization problem, which includes the cost of operating the system, which is essentially energy, and the income that the workload can generate in these systems; these have to be optimized together. The energy issue is a major challenge, and the networks that support these systems are also a major challenge, because with the very large amounts of data going through the cloud and the fog, you cannot rely on slow internet-type connections: you need gigabytes of data flowing through the networks that interconnect these data centres.
