When the VCE coalition first formed in late 2009, its product, the Vblock, was the industry’s first serious attempt at delivering converged IT systems. The first models were the Vblock 0, 1, and 2, addressing small, medium, and larger enterprise IT use cases. Over time, these evolved into the Vblock 300 and Vblock 700, relatively high-end computing options. On February 21, 2013, VCE announced the re-addition of smaller Vblock models, the Vblock 100 and Vblock 200, once again allowing the product line to cover the small & medium-sized opportunities in the market. It’s been a bit over a month since VCE announced these changes, and with the products now generally available, let’s look at some of the technical details, then use those details to draw some conclusions about these products.

Vblock 100

All Vblocks ship in their own 19-inch rack cabinets. With the 100 you now have a choice of cabinet size, though: 24U on the Vblock 100 BX models or 42U on the DX models. This directly affects future expandability, as each model is designed to accept additional storage and computing resources. The BX only has room for one more server and three more disk enclosures. The DX is more expandable, with room for up to eight servers and four additional disk enclosures.

The two models have different disk arrays. The smaller BX has an EMC VNXe3150 disk array, while the DX has its slightly larger brother, the VNXe3300. Each array offers a choice of two different disk enclosures, one geared towards performance and one towards capacity.

Physical network connectivity within the Vblock is IP-based, running through a pair of Cisco Catalyst 3750-X Ethernet switches, with 24 Gigabit Ethernet ports on the BX and 48 on the DX, plus a pair of 10 Gbps uplinks that connect to the VNXe storage controllers. Logical connectivity is supplied by the Cisco Nexus 1000V virtual switch running inside VMware vSphere. The Cisco UCS C220 M3 servers in these Vblocks have 64 GB of RAM in the BX models and 96 GB in the DX models. Each server has two Intel E5-2640 CPUs, each with six cores. Up to half of the servers in a Vblock can be used as “bare metal,” without a hypervisor. Of course, on these smaller models that isn’t a lot.
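
To put those numbers in perspective, here’s a quick back-of-the-envelope tally of a fully expanded DX. This is a minimal sketch using the figures above (the eight-server limit comes from the expansion numbers earlier), not official VCE sizing:

```python
# Rough tally of compute resources in a fully populated Vblock 100 DX,
# using the figures from the article. Not official VCE sizing.
servers = 8                  # DX expansion limit from the cabinet specs above
cores_per_server = 2 * 6     # dual six-core Intel E5-2640 CPUs
ram_per_server_gb = 96       # DX-model C220 M3s ship with 96 GB each

total_cores = servers * cores_per_server      # 96 cores
total_ram_gb = servers * ram_per_server_gb    # 768 GB
bare_metal_servers = servers // 2             # up to half can run without a hypervisor

print(f"{total_cores} cores, {total_ram_gb} GB RAM, "
      f"up to {bare_metal_servers} bare-metal servers")
```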

Vblock 200

The Vblock 200 follows the same basic recipe as the other Vblocks, but is geared towards higher I/O than the Vblock 100. The VNXe storage is replaced with an EMC VNX5300, EMC’s mid-tier multiprotocol disk array, and Ethernet connectivity is supplied by a pair of Cisco Nexus 5548 10 Gbps switches.

VCE Vision

Along with the Vblock 100 and 200, VCE announced incremental refreshes of the Vblock 300 and 700, as well as a new management plug-in for VMware vSphere, VCE Vision. This software release finally starts delivering on the promise of integrated management, treating the Vblock as a single managed unit rather than a set of discrete components.

Thoughts

It’s nice to see VCE no longer ignoring the smaller end of the market, especially for enterprises that have high-end Vblocks in their main data center and would like to drop a smaller Vblock at a remote site. VCE finally adding better management capabilities is a huge win, too. One of the promises of converged infrastructure is converged management, and having to manage a setup such as a Vblock as a collection of discrete components runs counter to why many people purchase these solutions.

What also stands out to me as a problem is the density of these low-end Vblocks. Most converged infrastructure vendors intentionally limit choice in order to drive up standardization and drive down price and total cost of ownership. That means limited options for expansion and limited upgradability of the components within, which is usually a justified tradeoff.

Generally speaking, virtual environments tend to consume RAM at a much faster pace than they do CPU, especially in the small & medium enterprise space. With no memory expansion options and limited server expansion options, customers may find themselves out of capacity pretty quickly, despite having plenty of CPU and storage. Virtual desktops are a great example: higher consolidation ratios are the norm there, and it will be easy to outstrip the RAM capacity of these Vblocks. VCE also does not offer an option for SSD or flash storage, which is becoming a crucial part of even small VDI deployments.
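
To make that concrete, here’s a back-of-the-envelope VDI sizing sketch for a single 96 GB DX server. The per-desktop memory figure, hypervisor overhead, and CPU consolidation ratio are my own assumptions (with no memory overcommit), not anything VCE publishes:

```python
# Back-of-the-envelope VDI sizing for a single Vblock 100 DX server.
# Host specs come from the article; per-desktop figures are assumptions.
host_ram_gb = 96             # DX-model C220 M3 ships with 96 GB
host_cores = 2 * 6           # two six-core Intel E5-2640 CPUs
hypervisor_overhead_gb = 8   # assumed reservation for the hypervisor

desktop_ram_gb = 2           # assumed per-desktop memory allocation
desktops_per_core = 8        # assumed CPU consolidation ratio

ram_limited = (host_ram_gb - hypervisor_overhead_gb) // desktop_ram_gb
cpu_limited = host_cores * desktops_per_core

print(f"RAM-limited desktops per host: {ram_limited}")  # 44
print(f"CPU-limited desktops per host: {cpu_limited}")  # 96
# Even with modest 2 GB desktops, the host hits its memory ceiling at
# roughly half the desktop count the CPUs could otherwise support.
```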

It’s clear that VCE is selecting components to achieve a particular price point, but the use of C-series UCS servers instead of the more flexible and space-efficient B-series blades strikes me as a strange choice. Using the B-series would also open up better networking options than the aging Catalyst 3750 architecture can provide, including possible connectivity with legacy data center infrastructure like Fibre Channel SANs. As it stands, though, the Vblock is what it is: a unified IT island in your data center.

3 replies on “Digesting The Latest VCE News: Vblock 100 and Vblock 200”

  1. Thanks for the write-up, and a few clarifications… VCEer here, for identification.

    The 200 has VIC cards in its servers, which allow 10 Gb FCoE to the 5548s.

    The 200 uses 8 Gb FC connections to the 5548s to deliver FC connectivity. Unified storage is an option by adding X-Blades for NFS/CIFS.

    The VNXe arrays in the 100s do not have SSD options. The 200, on the other hand, uses a VNX5300 and has FAST Cache SSD drives by default, and more SSDs can be added as necessary.

    The 200 is definitely the 100’s bigger brother and is suitable for heavier workloads, while the 100 is meant to be a smaller branch solution you stick in a closet.

    The C-series was chosen to hit a price point and make it affordable. It wouldn’t be worth it to have a single UCS chassis with 8 blade servers; that leaves a single point of failure. The scaling limit on the compute-to-storage ratio is chosen to make sure there isn’t an unbalanced approach.

    The 3750s, again, were chosen because it’s a simple IP-based solution, and they hit a price point that brings costs down much further than Nexus switches would.

    Read more here
    kendrickcoleman.com/index.php/Tech-Blog/vce-mega-launch-technical-details-for-221-product-announcements.html

  2. Thanks man. The details on the 200 are a little slim right now, as there are no technical architecture docs online like there are for the 100, so it’s all scavenging. 🙂 This helps a lot. I do respect the redundancy needed in the box, but there’s a heck of a lot of redundancy built into blade chassis solutions, too.

    Regardless, I still wish they had even double the amount of RAM. For what you’re paying for one of these, that shouldn’t be a big price jump.
