Advantages and disadvantages of edge computing and cloud computing

[Image: Google cloud services (Image credit: Shutterstock)]

The growing number of smart devices is giving edge computing an edge, but does that mean conventional cloud data centres will become decentralised and the need for a centralised cloud will disappear in the future?

Usage of cloud services has grown rapidly because it gives companies processing power and storage on demand, paying only for what they use, at a lower cost than buying their own computing hardware. On the other side, cloud providers are looking for opportunities to optimise their investment.

CPUs and storage devices have been improving exponentially while shrinking in size: a low-end smartphone today is more powerful than a computer that filled an entire room in the 1980s.

With more powerful devices in the hands of consumers (laptops, smartphones, etc.) and, more generally, at the edge (smart IoT devices), it is increasingly convenient to offload part of the computation to these devices.

This allows cloud providers to reduce their need for computational resources and gives users a faster, more personalised experience, as only a minimal amount of data is transferred over the network.

Edge computing is useful to service providers, as it uses spare capacity on users' devices instead of cloud computing power, and it speeds up the user experience because less data needs to be transmitted over the network.

On the other side, it does not guarantee enough trust or security, as the device can be tampered with by the user, with little control from the service provider's perspective.

What does the future of cloud look like?

Simone Vernacchia, partner and head of digital, cybersecurity resilience and infrastructure at PwC Middle East, told TechRadar Middle East that cloud computing will not disappear with edge computing gaining traction.

“Edge computing has a lot of promise and, at the same time, a lot of potential problems. Whenever you need real processing power, you would rather use the cloud. The two technologies are somewhat complementary,” he said.

On one side, he said that if you want to use processing power from the cloud, you need to transfer all the data to the cloud, wait for the processing to happen, and then receive the results back.

“This will introduce delays because of two different phenomena: the first is that bandwidth is not infinite and can saturate if we want to perform computation on a large amount of data, and the second is latency,” he said.

“In a real network, we can try to use a cloud provider that is closer, but we cannot go faster than the speed of light (light is how bits are actually transmitted at the physical level in high-speed networks),” he said.
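The speed-of-light floor he describes can be sketched with a back-of-the-envelope calculation. The figures below (signal speed in fibre, distances) are illustrative assumptions, not values from the article:

```python
# Rough, illustrative estimate of the physical lower bound on round-trip
# latency between two points, assuming signals travel through optical fibre
# at roughly two-thirds of the speed of light in a vacuum.

SPEED_OF_LIGHT_KM_S = 299_792      # km/s in a vacuum
FIBRE_FACTOR = 2 / 3               # typical propagation speed in fibre

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds, ignoring routing,
    queuing and protocol overhead (real latency is always higher)."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
    return 2 * one_way_s * 1000

# A data centre 100 km away vs one 10,000 km away:
print(round(min_round_trip_ms(100), 2))     # about 1 ms
print(round(min_round_trip_ms(10_000), 1))  # about 100 ms
```

Even this best case ignores every real-world source of delay, which is why a nearby data centre (or the edge device itself) matters so much for latency.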

The moment you put a network between your screen and your keyboard, he said, everything you do has to travel through that pipe.

Will latency reach the speed of light?

“Whatever you do has to pass through a geographical network, and the geographical network has two constraints. Bandwidth is limited, as today's technology cannot scale beyond a certain number of gigabits. The second is latency: the best achievable latency is bounded by the speed of light, and nothing can go faster than that. We are far away from latency reaching that limit,” Vernacchia said.

Moreover, he said that latency is a far more difficult problem to solve, both from a physical perspective and from a protocol perspective; it will keep getting better, but it will never attain the speed-of-light limit.

For any network traffic to be transmitted, it must pass through several layers of protocols.

A protocol is a set of rules that organises information into packets and checks that they are not corrupted by errors introduced during transmission over the network.

Vernacchia said that protocols consume bandwidth just to shape packets and to make sure they arrive intact or are corrected once they arrive.

“This overhead introduces additional delay, as it consumes bandwidth. In order to reduce the amount of data they need to transport from one side to the other, service providers are trying to shift more and more computation to the edge devices, thus saving cost and providing a faster and more fluid user experience,” he said.
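The overhead he mentions can be made concrete. Every payload is wrapped in headers before it travels, so a fraction of the bandwidth is spent on framing rather than user data. The sketch below uses typical Ethernet, IPv4 and TCP header sizes (without options) as illustrative figures:

```python
# Illustrative sketch of per-packet protocol overhead: each payload is
# wrapped in headers (Ethernet + IPv4 + TCP, minimum sizes assumed here),
# so part of every transmitted packet is framing, not user data.

HEADER_BYTES = 14 + 20 + 20   # Ethernet (14) + IPv4 (20) + TCP (20)

def effective_payload_ratio(payload_per_packet: int) -> float:
    """Fraction of transmitted bytes that carry actual user data."""
    return payload_per_packet / (payload_per_packet + HEADER_BYTES)

# Large packets amortise the overhead; tiny packets waste most of the pipe.
print(round(effective_payload_ratio(1460), 3))  # close to 1
print(round(effective_payload_ratio(50), 3))    # roughly half
```

This is one reason shifting computation to the edge helps: fewer bytes crossing the network means less overhead paid on top of them.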

The evolution from the computer to the data centre to cloud computing has pushed processing power farther and farther from the user, putting more strain on the network.

“As more users use the network, we need more bandwidth and lower latency. Edge computing is trying to solve this problem by doing a lot of the computation on your device to reduce latency,” he said.

The advantage of the edge, he said, is that you pay less to a cloud provider. The problem is that the chip on the device may not be powerful enough to do everything, so you may still have to rely on the cloud, where the available power and storage far exceed what an edge device offers; and because the cloud provider has no control over your device, the result may not be trusted if the device is tampered with.

Are data centres needed in each country?

Vernacchia said yes, or at least you want to have dedicated entry points for high-speed network connections close to your country in order to minimise latency and maximise bandwidth.

“Moreover, cloud providers will move digital data as close to you as they can, in a way that is invisible to (and out of the control of) the user. This is known as 'caching', in a bid to provide the shortest latency available. In other cases, cloud providers will (silently) move a user's data from one country to another where they have more, or cheaper, storage,” he said.
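The caching idea he describes can be sketched in a few lines: keep a local copy of data fetched from a distant origin, so repeat requests skip the slow trip. The delay value and names below are invented for illustration:

```python
# Minimal sketch of caching: the first request pays the full latency to a
# (simulated) distant origin; repeat requests are served from a local copy.

import time

ORIGIN_DELAY_S = 0.05  # pretend cross-continent fetch time (assumed value)

def fetch_from_origin(key: str) -> str:
    time.sleep(ORIGIN_DELAY_S)          # simulate the network round trip
    return f"data-for-{key}"

cache: dict[str, str] = {}

def fetch(key: str) -> str:
    if key not in cache:                # cache miss: pay the full latency
        cache[key] = fetch_from_origin(key)
    return cache[key]                   # cache hit: served locally

start = time.perf_counter(); fetch("home"); cold = time.perf_counter() - start
start = time.perf_counter(); fetch("home"); warm = time.perf_counter() - start
print(cold > warm)  # the cached (warm) request is much faster
```

Real content delivery networks do essentially this at planetary scale, with eviction policies and consistency rules layered on top.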

In the future, he said, cloud providers will try to push as much as they can to the edge to cut costs, wherever security is not critical.

“Edge computing is happening now, and consumers are getting a better experience,” he said.

For example, with video games that have a lot of visual effects, the processing is increasingly done on the device rather than in the cloud, in order to reduce the required bandwidth and improve the experience for the user. On the other side, in order to preserve security, when the user makes an in-app purchase, that is settled in the cloud.
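The split described above amounts to a simple routing policy: latency-sensitive, low-risk work stays on the device, while security-critical operations go to the cloud where the provider has control. The task names and policy below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical edge/cloud dispatch policy: security-critical tasks are
# settled in the cloud; everything else runs locally for responsiveness.
# All task names here are invented for illustration.

SECURITY_CRITICAL = {"in_app_purchase", "login", "anti_cheat_check"}

def dispatch(task: str) -> str:
    """Decide where a task should run under this illustrative policy."""
    return "cloud" if task in SECURITY_CRITICAL else "edge"

print(dispatch("render_visual_effects"))  # edge
print(dispatch("in_app_purchase"))        # cloud
```

The design choice is the one Vernacchia outlines: the edge cannot be trusted because the user controls the device, so anything with financial or security consequences is verified server-side.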

Security is an issue on the edge

“What uses edge computing is applications with low-security requirements, and what uses the cloud is things which need to be controlled and which have high-security requirements,” Vernacchia said.

However, he said that security is an issue on the edge, as the device is owned by and in the hands of the user, who can tamper with it without the service provider being able to prevent it.

“If you jailbreak your smartphone, although you may breach some licence agreements, it is very difficult to enforce penalties. On the other side, if you break into a cloud provider, prosecuting you is relatively more feasible,” he said.

“When you want performance, one thing to look at is whether a cloud provider has a data centre in your country, but for most companies, performance is only one part of the story. For some, latency is the other part, and for others it is security and data compliance. Some customers would rather have a data centre in Europe than in the US, as privacy is better protected by regulation,” he said.