Edge processing seems to be the new hotness. Combined with the Internet of Things (IoT), edge processing is a vision of smart devices reporting data to local compute resources where real-time reaction and control occur. There may be local analytics, possibly with artificial intelligence (AI) to hit all the buzzwords, to provide local control of something, with insights then passed on to a data center or public cloud-based business application for longer-term management. This is the edge-to-core-to-cloud message that I heard from HPE at its Discover conference in Madrid in November.
Disclaimer: I attended Discover as HPE’s guest. HPE paid for my airfare and accommodations as well as some nice meals. HPE did not request or review this post and is not compensating me for this post. HPE is also a client of mine for vBrownBag Build Day Live events.
The Edge
The idea of edge computing is that sensor data needs rapid reactions and millisecond-range response times. If the sensor is in an autonomous car, then we cannot wait for cloud latencies to identify that what the camera sees is a cyclist swerving in front of the car. In the same way, if the sensor is on a pump in an industrial chemical plant, then we still need to know whether the pump has failed when our Internet connection is offline. The expansion of sensors in smart devices means that data is proliferating outside of data centers, often a long way from them.

This data has some real-time value and some longer-term value. The real-time value is extracted at the edge: slowing the car down to avoid hitting the cyclist. The longer-term value is also refined at the edge and sent on to the core. In particular, a whole lot of real-time data simply tells the control system that nothing is going wrong. There is a lot of repetition in the sensor data, information that does not need to be sent to a central location for analysis. Edge-generated data needs to be refined at the edge before it is worth sending over the network for further analysis. The video of that swerving cyclist may be very useful, but the video of sitting stationary in a queue of traffic is less so.

Most industrial edge computing augments or replaces manufacturing process automation, which has historically been done with programmable logic controllers (PLCs) and has not been connected to the business IT systems. There needs to be an instant operational reaction at the edge.
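To make the pump example concrete, here is a minimal sketch of that refine-at-the-edge pattern. It assumes a hypothetical temperature sensor, an alarm threshold, and a deadband; the values and function names are my own illustration, not any particular product's API. The edge reacts locally in milliseconds and forwards only readings that have actually changed.

```python
import random
import time

# Hypothetical thresholds for illustration; real values would come from the
# pump's specifications and the plant's control engineers.
ALARM_TEMP_C = 90.0   # trip the local alarm immediately
DEADBAND_C = 2.0      # only forward readings that change by more than this

def read_pump_temperature():
    """Stand-in for a real sensor read; returns degrees Celsius."""
    return 70.0 + random.uniform(-15.0, 25.0)

def trigger_local_alarm(temp):
    """The local reaction happens here, even if the Internet link is down."""
    print(f"ALARM: pump at {temp:.1f} C, shutting down locally")

def forward_to_core(temp):
    """Stand-in for sending a refined reading over the network to the core."""
    print(f"forwarding {temp:.1f} C to core")

last_sent = None
for _ in range(20):
    temp = read_pump_temperature()
    if temp >= ALARM_TEMP_C:
        trigger_local_alarm(temp)   # millisecond-range local reaction
    if last_sent is None or abs(temp - last_sent) > DEADBAND_C:
        forward_to_core(temp)       # only changes are worth the bandwidth
        last_sent = temp
    time.sleep(0.1)
```

The repetitive "nothing is wrong" readings never leave the site; only the alarms and the changes do.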
The Core
The core is your on-site data center, although you may choose to place these functions in the public cloud. It is where normal data processing happens and where business insight is derived over minutes and hours. It may be that the video of the cyclist is used to analyze the function of the self-driving car and the driver, and then to tweak the behavior of the car to better suit the driver. That analysis may be beyond the capability of the computer in the car and so needs to happen in the core. Analysis of the temperature trend of a pump in the chemical plant might identify that a particular manufacturing process is putting excess strain on the pump motor. Edge data sent to the core is used for business insight that drives action.
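Here is an equally minimal sketch of the slower analysis the core might run over the readings forwarded from the edge: a least-squares slope over a shift's worth of averaged pump temperatures. The data and the strain threshold are hypothetical.

```python
# Trend analysis the core might run over hours of forwarded pump readings.
def slope(values):
    """Least-squares slope of evenly spaced readings (degrees per reading)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# One averaged reading per production batch, collected over a shift (made up).
readings = [71.2, 71.5, 72.1, 72.8, 73.9, 75.0, 76.4]

if slope(readings) > 0.5:  # hypothetical "excess strain" threshold
    print("Rising temperature trend: this process is straining the pump motor")
```

The edge could never justify this work in its millisecond reaction window, but over minutes and hours the core can correlate the trend with which manufacturing process was running.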
The Cloud
A public cloud service is a great place to put transitory workloads and to store things that will grow over time. It is an even better fit where a lot of data requires only occasional analysis. Imagine how much data there is in video of every time a self-driving car encountered a cyclist. Then think about how all of that data could be used to improve the safety of self-driving cars. Next, imagine that the same data is used to help design better road infrastructure so that cyclists are safer when they share the road with self-driving cars. Analytics at this massive scale makes for a better environment in ways that were not previously possible.
Hybrid
The view of edge to core to cloud is mostly a statement about where businesses are going and about HPE's readiness to help them get there. My issue with the current vision is that it seems to create new silos, with different technologies at each location. To make this a widespread reality, we are going to need tools that span all three locations. We need data management, including security, auditability, and availability, that works from data inception at the edge to its eventual burial in the cloud. We also need operational tools that look at the three locations as a single whole and allow operational activities across all the disparate technologies. Management, monitoring, and even accounting need to be unified across these locations, because business processes span every place where data is processed.
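What might location-spanning data management look like in practice? Here is a minimal sketch of one possibility: an envelope in which security classification, retention, and an audit trail travel with the data from edge to core to cloud. Every field name here is my invention for illustration, not anything HPE has announced.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataEnvelope:
    """Hypothetical wrapper that follows data across all three locations."""
    payload: bytes                 # the sensor reading, video clip, etc.
    origin_device: str             # where the data was born at the edge
    classification: str            # drives security handling at every tier
    retention_until: datetime      # when the cloud may finally bury it
    audit_trail: list = field(default_factory=list)  # every hop, recorded

    def record_hop(self, location: str):
        """Append an audit entry as the data moves edge -> core -> cloud."""
        self.audit_trail.append((location, datetime.now(timezone.utc)))

reading = DataEnvelope(
    payload=b"temp=74.2C",
    origin_device="plant-3/pump-17",
    classification="operational",
    retention_until=datetime(2025, 1, 1, tzinfo=timezone.utc),
)
reading.record_hop("edge")
reading.record_hop("core")
reading.record_hop("cloud")
print(reading.audit_trail)
```

Whether it takes this shape or another, the point stands: until security, audit, and retention metadata move with the data rather than living in per-location silos, edge to core to cloud will remain three management domains instead of one.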