Microsoft is preparing to launch a new range of GPU-enabled virtual machines. Built using NVIDIA Tesla-series M60 and K80 GPUs, the new virtual machines offer the fastest GPUs available in the public cloud. This move leapfrogs Azure over AWS in both performance and number of supported platforms.

Amazon G-Series

Amazon has long been a player in the GPU-enabled cloud computing market. It introduced its first GPU-enabled virtual machines in November 2013 with two instance types: the g2.2xlarge, with a single NVIDIA GRID GK104 GPU offering 1,536 CUDA cores and 4 GB of video memory, and the cg1.4xlarge, with NVIDIA Tesla “Fermi” M2050 GPUs. The cg1.4xlarge was subsequently withdrawn and replaced by the g2.8xlarge in April 2015. Amazon described the g2.8xlarge as having four NVIDIA GRID K520 GPU boards, built around the same GK104 GPU used in the g2.2xlarge.

Instance     vCPUs   Memory   Storage      GPUs        Price
g2.2xlarge   8       15 GB    60 GB        1 – GK104   $0.767 per hour
g2.8xlarge   32      60 GB    2 x 120 GB   4 – K520    $2.878 per hour

Azure N-Series

Where Amazon offers just two GPU-ready platforms, Microsoft has announced no fewer than seven virtual machine configurations, targeted at two distinct types of workload.

Instance   Cores   Memory   Storage    GPUs      Price
NV6        6       56 GB    340 GB     1 – M60   $0.73 per hour
NV12       12      112 GB   680 GB     2 – M60   $1.46 per hour
NV24       24      224 GB   1,440 GB   4 – M60   $2.92 per hour
NC6        6       56 GB    340 GB     1 – K80   $0.66 per hour
NC12       12      112 GB   680 GB     2 – K80   $1.33 per hour
NC24       24      224 GB   1,440 GB   4 – K80   $2.66 per hour
NC24r      24      224 GB   1,440 GB   4 – K80   $2.99 per hour

The NV-series instances, equipped with NVIDIA M60 GPUs offering 2,048 CUDA cores per GPU, are targeted at high-performance professional graphics workloads running on virtual desktops. The NC series focuses on compute workloads, using NVIDIA Tesla K80 GPUs with 2,496 CUDA cores per GPU. Microsoft also offers the NC24r, which adds a second low-latency, high-throughput network interface and is tuned for tightly coupled parallel computing workloads.
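For anyone validating a deployment, a quick device query will confirm which GPU an instance actually exposes and roughly how many CUDA cores it provides. The following is a minimal sketch using the pycuda library (an assumption of this example rather than anything Microsoft ships with the instances); the cores-per-multiprocessor figures are the published values for the Kepler and Maxwell architectures.

    # Minimal GPU inventory sketch using pycuda. Assumes the NVIDIA driver
    # and CUDA toolkit are already installed on the instance.
    import pycuda.driver as drv

    # Published CUDA cores per streaming multiprocessor:
    # Kepler (compute capability 3.x, e.g. K80) = 192
    # Maxwell (compute capability 5.x, e.g. M60) = 128
    CORES_PER_SM = {3: 192, 5: 128}

    drv.init()
    for i in range(drv.Device.count()):
        dev = drv.Device(i)
        major, _minor = dev.compute_capability()
        sm_count = dev.get_attribute(drv.device_attribute.MULTIPROCESSOR_COUNT)
        cores = sm_count * CORES_PER_SM.get(major, 0)
        mem_gb = dev.total_memory() / (1024 ** 3)
        print(f"GPU {i}: {dev.name()}, {sm_count} SMs, ~{cores} CUDA cores, "
              f"{mem_gb:.1f} GB memory")

On an NC6, for example, this should report a single K80 device with 13 multiprocessors, which is where the 2,496-core figure comes from.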

It’s not possible to make a direct comparison between the AWS and Azure instances because of the differences in CPU specifications, but as the two tables show, the Azure instances look to offer better value. They provide significantly more memory and storage, and they use the newer, higher-performing K80 and M60 GPUs, which deliver a substantial increase in CUDA cores over the Amazon G2-series instances. As always, leapfrogging between hardware generations favors whichever provider has refreshed its platform most recently; on this occasion, though, Microsoft appears to have established a substantial lead that Amazon will not be able to counter without retiring its current G2-series machines.

Within each series, each instance offers twice the resources of the one below it and, depending on an application’s performance characteristics, should deliver near-linear performance scaling. This lets customers match workloads to performance requirements with few, if any, compromises, or distribute a single workload across multiple smaller nodes for greater resilience at no additional cost.

Teradici PCoIP

In a departure from previous offerings, Microsoft has used the launch of the N-series virtual machines to introduce support for Teradici PCoIP as an alternative to RDP. PCoIP offers significant performance advantages over RDP, especially when delivering rich graphical workloads. While PCoIP is used extensively in VDI implementations, there is no indication from Microsoft that it will build its own VDI service on NV-series instances. Instead, the NV-series virtual machines are being positioned as dedicated cloud-hosted workstations that can be provisioned on demand via the Azure console. Of course, there is nothing to stop a third party from using NV-series instances as part of a VDI/DaaS environment.
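By way of illustration, the sketch below shows how such a workstation might be stood up programmatically rather than through the console, which is what a third-party VDI/DaaS broker would need to do. It assumes the azure-identity and azure-mgmt-compute Python packages (which post-date the preview described here); the resource group, NIC ID, image reference and credentials are placeholders, and parameter names vary between SDK versions.

    # Hypothetical sketch: provisioning an NV6 cloud workstation with the
    # Azure Python SDK. Resource names, NIC ID and image reference are
    # placeholders, not values from the article.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, "<subscription-id>")

    poller = compute.virtual_machines.begin_create_or_update(
        "gpu-workstations-rg",   # placeholder resource group
        "nv6-workstation-01",    # placeholder VM name
        {
            "location": "southcentralus",
            "hardware_profile": {"vm_size": "Standard_NV6"},
            "storage_profile": {
                "image_reference": {
                    "publisher": "MicrosoftWindowsServer",
                    "offer": "WindowsServer",
                    "sku": "2016-Datacenter",
                    "version": "latest",
                }
            },
            "os_profile": {
                "computer_name": "nv6-workstation-01",
                "admin_username": "azureuser",
                "admin_password": "<strong-password>",
            },
            "network_profile": {
                # An existing NIC in the same region, referenced by ID.
                "network_interfaces": [{"id": "<existing-nic-resource-id>"}]
            },
        },
    )
    vm = poller.result()
    print(f"Provisioned {vm.name} ({vm.hardware_profile.vm_size})")

The compute-focused NC sizes are requested the same way; only the vm_size string (Standard_NC6, Standard_NC12 and so on) changes.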

The N-series virtual machines are currently available as a technical preview in the South Central US region; they will be rolled out to additional regions over the next few months ahead of general availability, which is expected before the end of the year. GPU availability has not yet trickled down to Azure RemoteApp; however, with increasing awareness of the benefits of bringing GPUs to virtual desktop and RDS environments, it is inevitable that Microsoft will introduce a GPU-enabled RemoteApp service at some point in the future.
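For those tracking the rollout, the same SDK can be used to check which N-series sizes a given region exposes. A minimal sketch, reusing the compute client from the previous example and an illustrative list of regions:

    # List which N-series sizes each region currently exposes.
    # "compute" is the ComputeManagementClient from the previous sketch.
    for region in ("southcentralus", "eastus", "westeurope"):
        sizes = compute.virtual_machine_sizes.list(region)
        n_series = sorted(s.name for s in sizes if s.name.startswith("Standard_N"))
        print(f"{region}: {', '.join(n_series) or 'no N-series sizes yet'}")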

Addendum

Amazon has been in touch to ask me to point out that it introduced GPU services in November 2010 and not November 2013 as I indicated above. The 2010 implementation was offered as a High Performance Computing (HPC) platform delivered in a cluster configuration and optimized for network-intensive, massively parallel workloads.
