It has been just over two years since the Cisco Unified Computing System (UCS) was announced and released to the world. I wanted to give my feedback on the progress of the platform and how it fits into the cloud computing space.
When Cisco announced its Unified Computing System a couple of years ago, the thinking was not just to design servers and get into the server hardware business; Cisco's goal was to become the heart of the datacenter itself. This was a big move for Cisco, considering it had a very good working relationship and partnership with HP, at least until the announcement that Cisco was getting into the server business. Was the move into the server business a smart move for Cisco? After two years the jury is still out, but Cisco has shown it has a vision it is actively working toward, and it has brought enhancement and innovation to the marketplace.

Cisco Extended Memory Technology is a custom memory controller ASIC that virtualizes DIMMs as seen by the CPU's memory controller. With this ASIC, each memory channel can support eight DIMMs instead of three without any reduction in memory bus speed. With this memory virtualization we can significantly increase the amount of memory that can be installed in a server: eight DIMMs per memory channel, or twenty-four per CPU socket. In other words, a full-height blade can have up to 384GB of memory installed while the memory bus still runs at 1066MHz. At the time of this writing, no other manufacturer has been able to scale beyond six DIMM slots per CPU without reducing the memory bus speed to 800MHz.
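To make the memory arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. It is purely illustrative; the 8GB DIMM size and the three-channels-per-socket assumption are mine, chosen because they line up with the 384GB figure above.

```python
# Back-of-the-envelope check of the Extended Memory Technology numbers above.
# Assumes a dual-socket, full-height blade with three memory channels per socket
# and 8GB DIMMs, which matches the 384GB figure quoted in the text.

DIMMS_PER_CHANNEL = 8      # with Cisco EMT (vs. three without it)
CHANNELS_PER_SOCKET = 3    # assumed for the Intel memory controller of this era
SOCKETS_PER_BLADE = 2
DIMM_SIZE_GB = 8           # assumed DIMM size

dimm_slots = DIMMS_PER_CHANNEL * CHANNELS_PER_SOCKET * SOCKETS_PER_BLADE
blade_memory_gb = dimm_slots * DIMM_SIZE_GB

print(f"DIMM slots per blade: {dimm_slots}")         # 48
print(f"Memory per blade:     {blade_memory_gb}GB")  # 384GB
```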
Currently, the UCS blades are dual-processor, quad-core systems running on Intel chips. Intel has a significant investment in the UCS platform, and I think it is safe to say that all UCS servers will run Intel chipsets only. A YouTube video came out of the AMD camp showing off the sixteen-core processor AMD has in development, and I think it is a safe bet that Intel is not far behind AMD, if at all. Jumping ahead with that knowledge, we can imagine a full-height UCS blade with thirty-two cores and 384GB of RAM. That would give us four full-height blades with 128 cores (32×4) and 1536GB (384×4) of RAM per chassis. That would be one powerful chassis, and the Cisco UCS Manager can handle, in theory, up to forty chassis per management platform. That is an extreme amount of computing resources available at the heart of the datacenter.
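Extending that sketch, the chassis and management-domain totals quoted above work out as follows (again purely illustrative, using the hypothetical thirty-two core blade and the theoretical forty-chassis limit).

```python
# Scale the hypothetical 32-core / 384GB full-height blade up to a chassis
# and then to a full UCS Manager domain, using the figures from the article.

CORES_PER_BLADE = 32            # hypothetical future blade from the paragraph above
MEMORY_PER_BLADE_GB = 384
FULL_HEIGHT_BLADES_PER_CHASSIS = 4
CHASSIS_PER_UCS_MANAGER = 40    # theoretical maximum per management platform

chassis_cores = CORES_PER_BLADE * FULL_HEIGHT_BLADES_PER_CHASSIS           # 128
chassis_memory_gb = MEMORY_PER_BLADE_GB * FULL_HEIGHT_BLADES_PER_CHASSIS   # 1536

domain_cores = chassis_cores * CHASSIS_PER_UCS_MANAGER                     # 5120
domain_memory_tb = chassis_memory_gb * CHASSIS_PER_UCS_MANAGER / 1024      # 60TB

print(f"Per chassis: {chassis_cores} cores, {chassis_memory_gb}GB RAM")
print(f"Per UCS Manager domain: {domain_cores} cores, {domain_memory_tb:.0f}TB RAM")
```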
One of Cisco's big selling points for the UCS system is the ability for all the physical servers in the chassis to be completely stateless. What do I mean by stateless? In a traditional server, the unique identifiers that give a server its identity are tied strictly to the hardware in the physical box; in UCS those identifiers can be abstracted away from the hardware entirely (a minimal sketch of the idea follows the list below). There are four unique identifiers that make up a server's identity:
- World Wide Name (WWN): Hard-coded to a host bus adapter (HBA), this identifier is needed to access a SAN.
- MAC Address: Hard-coded to a network interface card (NIC), this identifier is needed for LAN access.
- BIOS: This identifier contains settings that are specific to the server hardware.
- Firmware: Low-level software that runs on a peripheral device or adapter card to enable the operating system to interface with the device.
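To illustrate what pulling those identifiers off the hardware looks like, here is a minimal sketch of a "service profile" object. This is not the UCS Manager API; every class, field, and value in it is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of a stateless "service profile": the identity lives in the
# profile, not in the blade, so it can be re-associated with different hardware.

@dataclass
class ServiceProfile:
    name: str
    wwns: list[str]            # SAN identity (one per virtual HBA)
    mac_addresses: list[str]   # LAN identity (one per virtual NIC)
    bios_settings: dict        # e.g. {"numa": "enabled"}
    firmware_bundle: str       # firmware package the blade should run
    associated_blade: str | None = None

    def associate(self, blade_slot: str) -> None:
        """Apply this identity to a physical blade (e.g. 'chassis-1/blade-3')."""
        self.associated_blade = blade_slot

    def disassociate(self) -> None:
        """Detach the identity so the blade can be replaced or repurposed."""
        self.associated_blade = None


# The same identity can move between blades without the OS or SAN noticing.
profile = ServiceProfile(
    name="esx-host-01",
    wwns=["20:00:00:25:b5:00:00:0a"],
    mac_addresses=["00:25:b5:00:00:0a"],
    bios_settings={"numa": "enabled"},
    firmware_bundle="1.4(1)",
)
profile.associate("chassis-1/blade-3")
profile.disassociate()
profile.associate("chassis-2/blade-1")   # e.g. after a hardware failure
```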
By utilizing a stateless system, I can move a server's identity from one physical server to another without being tied to the hardware. This is especially useful when working with applications that have security or licensing tied to a specific hardware identifier: that server could be migrated to another physical box with absolutely no change in any of the hardware identifiers. This ability can become extremely valuable when working on disaster recovery planning or hardware maintenance, for example. When working with virtualization, though, I am not sure stateless servers are really needed. For the physical hosts in a virtual environment, the unique identifiers are not going to be as much of a factor as they are for non-virtualized servers running directly on the blades. I can install ESXi in a matter of minutes, and with host profiles I can quickly have the new virtual host in place and ready to serve out resources in no time. I could even create and install multiple virtual host instances ahead of time, ready to quickly add physical resources to the infrastructure when needed.
VMware, Cisco, and EMC have teamed up to create the Virtual Computing Environment (VCE) coalition, with the UCS platform at the heart of it all. When Cisco was creating the UCS platform it knew that virtualization would play a key part in the environment, so Cisco developed a specific mezzanine adapter for the UCS servers strictly to enhance the virtualization capabilities of the server itself: the Cisco UCS VIC M81KR. This mezzanine adapter, also known as the Palo card, is a virtual interface card designed for working with VMware virtualization specifically in the UCS environment. The Palo card is a converged network adapter (CNA) with dual 10-Gigabit Ethernet ports and dual Fibre Channel ports to the backplane. The card can also create and present virtual NICs and virtual Fibre Channel adapters to the virtualization hosts in the chassis. The Palo card is not the default mezzanine card shipped with UCS systems and must be specifically ordered with the system; it costs roughly the same as the Emulex and QLogic cards that ship with the system by default. The Palo card is just one part of the network innovation Cisco has brought to the table. Network Interface Virtualization is Cisco's goal of getting physical hardware to solve virtual networking issues. You can read more about this in this post by Edward Haletky.
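To show what presenting virtual NICs and virtual HBAs from a single converged adapter means in practice, here is a purely conceptual sketch; the class and method names are my own and do not come from UCS Manager or the Palo firmware.

```python
from dataclasses import dataclass, field

# Conceptual model of a converged virtual interface card (VIC): one physical
# mezzanine adapter with two 10GbE uplinks, from which many virtual NICs and
# virtual HBAs are carved and presented to the host as if they were discrete devices.

@dataclass
class VirtualInterfaceCard:
    uplinks_10gbe: int = 2
    vnics: list = field(default_factory=list)
    vhbas: list = field(default_factory=list)

    def add_vnic(self, name: str, mac: str, vlan: int) -> None:
        self.vnics.append({"name": name, "mac": mac, "vlan": vlan})

    def add_vhba(self, name: str, wwn: str, vsan: int) -> None:
        self.vhbas.append({"name": name, "wwn": wwn, "vsan": vsan})


# A virtualization host might be given several vNICs (management, vMotion, VM traffic)
# and a pair of vHBAs, all carried over the same two converged uplinks.
vic = VirtualInterfaceCard()
vic.add_vnic("mgmt",    "00:25:b5:00:00:01", vlan=10)
vic.add_vnic("vmotion", "00:25:b5:00:00:02", vlan=20)
vic.add_vnic("vm-data", "00:25:b5:00:00:03", vlan=30)
vic.add_vhba("fc0", "20:00:00:25:b5:00:00:01", vsan=100)
vic.add_vhba("fc1", "20:00:00:25:b5:00:00:02", vsan=101)
```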
Cisco's part in the VCE is networking, and with the release of VMware vSphere we now have the ability to use Distributed Virtual Switches (DVS). A Distributed Virtual Switch lets the vSwitch connect to and service all the hosts in a cluster from a central management point. Before DVS we were able to create local vSwitches on the hosts and configure uplink ports for those vSwitches, but each local vSwitch had to be configured identically on every host in the cluster. Cisco has released a virtual network appliance, the Cisco Nexus 1000v, to enhance the native DVS capabilities VMware delivered in vSphere, and Cisco has also added DVS capabilities to the UCS system at the hardware layer, managed and controlled from the UCS management console.
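The operational difference is easiest to see side by side. The sketch below contrasts configuring a standard vSwitch host by host with defining a distributed switch once for the whole cluster; the helper functions are hypothetical placeholders, not vSphere API or PowerCLI calls.

```python
# Illustrative only: per-host standard vSwitch configuration versus a
# distributed virtual switch (DVS) managed from a single point.

hosts = ["esx01", "esx02", "esx03", "esx04"]
port_groups = [("vmotion", 20), ("vm-data", 30)]

def configure_standard_vswitch(host: str, name: str, vlan: int) -> None:
    # Hypothetical helper: change made locally on one host.
    print(f"[{host}] creating port group '{name}' on local vSwitch, VLAN {vlan}")

def configure_distributed_switch(cluster_hosts: list, name: str, vlan: int) -> None:
    # Hypothetical helper: change defined once and pushed to every host.
    print(f"[DVS] creating port group '{name}' once, VLAN {vlan}, "
          f"pushed to {len(cluster_hosts)} hosts")

# Before DVS: every change repeated (and kept consistent by hand) on every host.
for host in hosts:
    for name, vlan in port_groups:
        configure_standard_vswitch(host, name, vlan)

# With DVS (or the Nexus 1000v sitting on top of it): one definition for the cluster.
for name, vlan in port_groups:
    configure_distributed_switch(hosts, name, vlan)
```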
These are just a few points on what the Cisco UCS platform is capable of and what it brings to the virtualization table. Cisco has bridged the virtualization, network, and storage layers and brought them together under UCS management, with the ability to define the roles and responsibilities of the different groups that make up the complete virtualization package so that each group can manage its own piece under one roof.
I think Cisco has done a good job in the couple of years since entering the server hardware space with the UCS platform, and I am looking forward to the new and exciting things that will be built and presented around this computing environment. The competition can't be too far behind with their own innovations, so it is still a little early to know for sure whether Cisco will lock up the server space as well as the network. To position itself as the center of the datacenter, Cisco will need to keep moving forward with enhancements and capabilities, and I would not be surprised if it even considers coming out with its own version of storage to work with the UCS platform. Why stop with just the server and network hardware? Time will tell, but it should be an interesting next few years.
Steve,
Great article about UCS. Honestly, statelessness and EMT for higher memory use are the key for the sales teams, but credit also has to go to the power of vBlock integration, which makes UCS an appealing solution for new data center builds. For existing or upgrade deals, though, sales teams are still having a hard time competing with HP blades, and from a consulting perspective HP's EDS team has bigger existing influence and reputation with the majority of federal and other government data centers. So it's tough to win business against existing HP solutions, and Cisco will have to compete much harder; a 128-core chassis isn't a big deal considering HP's new rack servers max out at 128 cores and cost under $100,000. Anyway, love reading your posts and looking forward to more great UCS posts.
You are right that vBlock integration does help make UCS an appealing solution, but with Cisco wanting to be the heart of the datacenter, what do you think are the chances of Cisco designing its own storage solution?
Steve,
Good article – on the storage side, NetApp and EMC are critical for selling the UCS; the storage sales teams and channels tend to have more experience with servers than the traditional networking channel. So this brings us to your question – would Cisco design their own storage solution? As a former EMCer, I heard this chatter for 10 years, and while the VCE Company activity has tight ties between Cisco and EMC (CEO is Capellas, who sits on the Cisco board, and new President Hauck is from EMC, not to mention the close relationship of Tucci and Chambers), there were people at SNW this week asking the same question, since every other major server vendor owns storage technologies. With Cisco's latest issues (see WSJ, GigaOm and all the rest of the press on that story), it doesn't seem the best time for Cisco to make yet another move into adjacencies that would harm partnerships.
Cheers,
Stu
Wikibon.org
Thanks for the feedback, Stu!! It leaves room for great speculation though, and who knows how things will look and shape up in 5 or 10 years. Should be an interesting next decade.