Recent changes from NVIDIA make virtualized HPC (vHPC) a reality. With just two NVIDIA M10 cards, it is possible to pack more compute power into a single host than ever before. NVIDIA’s vGPU is changing how we implement VDI. It is also changing compute. Here is how.

vGPU works by dividing the underlying NVIDIA chips into multiple discrete graphics adapters. GPUs, unlike CPUs, are designed to run a graphics pipeline per core. While the pipelines can share data, the processing is discrete: each vertex of a polygon is handled by just one core. Yet there are thousands of cores per GPU, allowing many vertices to be processed simultaneously.
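To make that parallelism concrete, here is a minimal CUDA sketch of the one-core-per-vertex model. The kernel and data layout are illustrative assumptions for this article, not NVIDIA's actual pipeline code: each thread transforms exactly one vertex, and a million vertices are handled in parallel.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative only: each thread transforms one vertex, mirroring the
// one-core-per-vertex model described above.
__global__ void transformVertices(const float3 *in, float3 *out, int n, float scale)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // A trivial "pipeline" stage: uniform scaling of the vertex.
        out[i] = make_float3(in[i].x * scale, in[i].y * scale, in[i].z * scale);
    }
}

int main()
{
    const int n = 1 << 20;               // ~1M vertices
    float3 *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float3));
    cudaMallocManaged(&out, n * sizeof(float3));
    for (int i = 0; i < n; ++i) in[i] = make_float3(1.0f, 2.0f, 3.0f);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    transformVertices<<<blocks, threads>>>(in, out, n, 2.0f);
    cudaDeviceSynchronize();

    printf("out[0] = (%f, %f, %f)\n", out[0].x, out[0].y, out[0].z);
    cudaFree(in); cudaFree(out);
    return 0;
}
```

The same launch pattern scales from a handful of vertices to millions; the GPU scheduler simply spreads the threads across its cores.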

However, it is possible to reconfigure that pipeline to run arbitrary nongraphical code, or CUDA code. CUDA is used by high-performance computing (HPC), but also by a growing number of business processes. Businesses have been using tools that benefit from GPU acceleration via CUDA; this is the HPC component of all GPU devices. Currently, VMware and NVIDIA are targeting the bigger graphical applications from Autodesk, SOLIDWORKS, and others. For business, however, there is now a need for more compute capability for data analysis, whether on the desktop or within big data solutions.
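What does nongraphical CUDA work look like? Here is a hedged sketch (the kernel name and data are hypothetical) that sums a million measurements on the GPU, the kind of bulk arithmetic that underlies both HPC and business analytics:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative only: sum a large array of values on the GPU.
// Each thread adds one element into a global accumulator.
__global__ void sumValues(const float *data, int n, float *total)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(total, data[i]);   // simple, not the fastest reduction
}

int main()
{
    const int n = 1 << 20;
    float *data, *total;
    cudaMallocManaged(&data,  n * sizeof(float));
    cudaMallocManaged(&total, sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 0.5f;
    *total = 0.0f;

    sumValues<<<(n + 255) / 256, 256>>>(data, n, total);
    cudaDeviceSynchronize();

    printf("sum = %f (expected %f)\n", *total, n * 0.5f);
    cudaFree(data); cudaFree(total);
    return 0;
}
```

A tuned reduction would use shared memory rather than a single atomic accumulator, but the point stands: the same cores that shade pixels can churn through business data.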

Big data solutions often perform massive calculations or comparisons of data. A GPU could speed up that analysis, potentially shaving days off current big data computations. Business use of HPC is burgeoning, and GPUs are primed to help. Virtualized GPUs with CUDA support will change the face of HPC for business, finally producing the first true virtual HPC environments since virtualization began. It was possible to create a vHPC environment in the past, but computationally it was third rate. With vGPU, vHPC is a reality.

We need GPUs, and therefore HPC, to perform the security analysis, business logic, and massive calculations the business needs today. We even need them to provide endpoint security functions. Yet whenever I talk about HPC, people think of academia. That is simply no longer the case. Self-driving cars, business logic, and big data are all used by the business now; they are no longer confined to academia. They are here as part of reality. The same computational capability can also serve IoT.
[Image: virtualized HPC in the fog]

In the image above, sensors feed into the fog, which feeds into the cloud. The cloud is where most of the heavy lifting takes place. The fog, however, is where data can be smoothed, refined, and converted into a form the cloud can process. The fog talks to millions of sensors of varying versions and degrees of capability. Those sensors’ data needs to be preprocessed so that the stream coming out of the fog is reasonably normalized. That normalization will be done by GPUs using vHPC capabilities. Normalization allows the multiple fogs around a city, for example, to feed a back-end cloud or data center with good data: data that will aid in whatever analysis is being done.
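As a hedged sketch of what that fog-side normalization could look like (the sensor ranges, names, and kernel here are assumptions for illustration, not a shipping product), a GPU can min-max rescale an entire batch of heterogeneous readings to a common 0-1 range in a single pass:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical example: rescale raw sensor readings to a common 0..1 range
// so that every fog node emits a normalized stream to the cloud.
__global__ void normalizeReadings(const float *raw, float *norm, int n,
                                  float lo, float hi)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        norm[i] = (raw[i] - lo) / (hi - lo);   // min-max normalization
}

int main()
{
    const int n = 1 << 20;                     // ~1M readings per batch
    float *raw, *norm;
    cudaMallocManaged(&raw,  n * sizeof(float));
    cudaMallocManaged(&norm, n * sizeof(float));
    for (int i = 0; i < n; ++i) raw[i] = 20.0f + (i % 100);  // fake sensor data

    normalizeReadings<<<(n + 255) / 256, 256>>>(raw, norm, n, 0.0f, 200.0f);
    cudaDeviceSynchronize();

    printf("raw[0]=%f -> norm[0]=%f\n", raw[0], norm[0]);
    cudaFree(raw); cudaFree(norm);
    return 0;
}
```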

This also means that the systems in the fog can correct or account for bad sensors that are still sending data, as sketched below. vHPC allows that processing to take place in a small form factor while the cloud handles the heavy processing requirements. The business use is not just IoT preprocessing, but also processing business logic that is growing ever more complex, as well as the general data needed to keep systems healthy. The principles came from HPC and now apply to the business.
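One hypothetical way to account for bad sensors still sending data (the plausibility thresholds and last-known-good fallback are illustrative assumptions, not a technique from the vGPU stack): flag implausible readings and substitute that sensor's last good value before the stream leaves the fog.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical example: replace implausible readings from failing sensors
// with that sensor's last known good value, flagging the substitution.
__global__ void repairReadings(float *reading, const float *lastGood,
                               int *flagged, int n, float lo, float hi)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && (reading[i] < lo || reading[i] > hi || isnan(reading[i]))) {
        reading[i] = lastGood[i];   // fall back to the last sane value
        flagged[i] = 1;             // mark the sensor for maintenance review
    }
}

int main()
{
    const int n = 4;
    float *r, *g; int *f;
    cudaMallocManaged(&r, n * sizeof(float));
    cudaMallocManaged(&g, n * sizeof(float));
    cudaMallocManaged(&f, n * sizeof(int));
    float raw[n]  = {21.5f, -999.0f, 22.0f, 5000.0f};   // two bad readings
    float good[n] = {21.4f,   21.9f, 21.8f,   22.1f};
    for (int i = 0; i < n; ++i) { r[i] = raw[i]; g[i] = good[i]; f[i] = 0; }

    repairReadings<<<1, n>>>(r, g, f, n, -40.0f, 60.0f);
    cudaDeviceSynchronize();
    for (int i = 0; i < n; ++i)
        printf("sensor %d: %.1f%s\n", i, r[i], f[i] ? " (repaired)" : "");
    cudaFree(r); cudaFree(g); cudaFree(f);
    return 0;
}
```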

Find out how your business processes data. Can GPUs speed that up? Could GPUs replace large clusters of compute resources? Most likely. Virtualized HPC using GPUs is the future of business; it is the future of IoT; and it is the future of IT. The future is about data: not systems, not VMs, not containers, but data.

How we process that data makes a big difference to our businesses. How do you process your business data?