I recently upgraded my entire infrastructure. The cost of buying three new servers with the latest hardware had finally reached parity with moving to a blade enclosure. In general, three 2U DL380s are less expensive than upgrading to a c3000 blade enclosure, and over the last several years I have gone back and forth on this decision. Finally, it was made for me: the price was attractive enough. Three blades were purchased, with space for five more available within the enclosure. That upgradeability is what pushed me to blades over the lower-cost DL models.
I also looked at this upgrade as a way to turn off more power supplies and hopefully draw less power overall. The goal was to power off my separate fabric and network switches and rely on what was within the c3000 enclosure. Unfortunately, this was not entirely the case: I had to order more expensive interconnects to make it happen, and even so I chose to keep my existing network switches. More on this a little later.
The upgrade consisted of an HP c3000 Blade Enclosure, three BL460c blades (24 GB, two 6-core CPUs each), two Virtual Connect Flex-10 modules, and two Virtual Connect Fibre Channel (VC-FC) modules. Yet this all had to happen with zero downtime to my infrastructure; I cannot afford to be down for any length of time for my core services.
So how did this go?
Installing the c3000 and all the bits into the enclosure took surprisingly little time, as it is essentially plug-and-play. It took longer for me to move the first UPS into the new rack than it did to assemble the c3000 and place it in the rack. Once that was done, I was able to provide power to two of the four c3000 power supplies (you can have up to six, but I only need four for redundancy). Yet I could not power on any of the blades. This was a bit confusing until I reset the dynamic power management in the c3000 to not require redundancy by AC line (I only had one power feed at the moment); then I was able to power up a blade. This configuration was made via HP SIM talking to the blade enclosure, whose IP address had been set using DHCP. I changed that as well through the HP SIM integrated interfaces, which let me open the web management tool for the enclosure. I like the way HP SIM gives me some centralized management; otherwise I have to remember 20 or so management web tools and how to access them.
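As a side note, the enclosure's Onboard Administrator can also be poked at from a script over SSH, which I find handy for checking power status before and after a change like this. Below is a minimal Python sketch using paramiko; the hostname and credentials are placeholders for my environment, and I am assuming the OA CLI's SHOW POWER command and that the firmware allows exec-style SSH sessions (some releases want an interactive shell instead), so treat it as a starting point rather than gospel.

```python
#!/usr/bin/env python
"""Quick check of c3000 power status via the Onboard Administrator SSH CLI.

Assumptions: the OA is reachable at OA_HOST, the placeholder account below can
log in, and the firmware supports the 'SHOW POWER' CLI command.
"""
import paramiko

OA_HOST = "oa.example.lan"   # placeholder: your enclosure OA address
OA_USER = "Administrator"    # placeholder credentials
OA_PASS = "changeme"

def oa_command(command):
    """Run a single command against the OA CLI and return its output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(OA_HOST, username=OA_USER, password=OA_PASS,
                   look_for_keys=False)
    try:
        _, stdout, _ = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    # Shows installed power supplies, present power draw, and the redundancy
    # mode, which is exactly what bit me when only one AC feed was connected.
    print(oa_command("SHOW POWER"))
```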
Next it was time to upgrade all the firmware. To do that, I once more went to HP SIM and configured HP SUM to update all the components within the enclosure and then each blade. Then it was time to configure Virtual Connect, and this step was a bit odd. I configured the Flex-10 with two uplinks initially: one for my internal network and one for my external network. Then I configured the Virtual Connect Fibre Channel modules and came across my first issue: VC-FC requires uplinks to external fabric switches. I had switches, so that was no problem to solve, but I wanted to power my fabric switches off at some point. To solve this, I went back and ordered the Brocade 8/24 fabric switches that go into the enclosure, with an RMA on the VC-FC interconnects (but that is getting a bit ahead of myself).
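Since the Virtual Connect domain is the piece I am most likely to get wrong, I also like being able to dump its configuration from a script and eyeball it. The sketch below SSHes to the Virtual Connect Manager with paramiko; the address and credentials are placeholders, and I am assuming this firmware's VCM CLI accepts "show network" and "show uplinkset", so verify the exact commands against the VC CLI reference for your release.

```python
#!/usr/bin/env python
"""Dump the Virtual Connect domain's network and uplink definitions.

Assumptions: the Virtual Connect Manager is reachable over SSH at VCM_HOST
with the placeholder credentials below, and its CLI supports 'show network'
and 'show uplinkset' (check the VC CLI reference for your firmware release).
"""
import paramiko

VCM_HOST = "vcm.example.lan"  # placeholder: active VC module / VCM address
VCM_USER = "Administrator"    # placeholder credentials
VCM_PASS = "changeme"

COMMANDS = ["show network", "show uplinkset"]

def run_commands(commands):
    """Open one SSH session to the VC Manager and run each CLI command."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(VCM_HOST, username=VCM_USER, password=VCM_PASS,
                   look_for_keys=False)
    try:
        for command in commands:
            _, stdout, _ = client.exec_command(command)
            print("### %s\n%s" % (command, stdout.read().decode()))
    finally:
        client.close()

if __name__ == "__main__":
    # Lets me confirm the internal/external networks are mapped to the uplinks
    # I intended before pointing any ESX vSwitches at them.
    run_commands(COMMANDS)
```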
I was then ready to install vSphere ESX v4.1 onto the first blade. This went without a hitch. The second blade also went without a hitch. Actually, everything went without a hitch. I was able to zone my FC array properly and vMotion VMs from the old infrastructure to the new without many issues. The key was to properly set up the Virtual Connect configuration so the blades could see the array, as well as to set up the proper networks with the proper uplinks to my new Netgear switches. I was able to set up a few private Flex-10 networks in addition to my standard two (one to the outside world and one internal). My internal network gets further segmented into an administrative network for everything that manages my virtual environment (per best practices).
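For anyone who would rather script the evacuation than click through the vSphere Client for every VM, here is a minimal pyVmomi sketch that live-migrates one VM onto a blade. The vCenter address, credentials, VM name, and host name are placeholders for my environment, and this assumes vCenter is managing both the old hosts and the new blades with shared FC storage, so adapt it to your inventory before trusting it.

```python
#!/usr/bin/env python
"""Migrate (vMotion) a running VM onto one of the new blades.

A minimal sketch using pyVmomi; the vCenter address, credentials, and the
VM/host names below are placeholders, not anything HP- or VMware-supplied.
"""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.lan"     # placeholder vCenter address
USER = "administrator"              # placeholder credentials
PASSWORD = "changeme"
VM_NAME = "core-services-vm"        # VM still running on an old DL380
TARGET_HOST = "blade1.example.lan"  # first BL460c in the c3000

def find_by_name(content, vimtype, name):
    """Walk the inventory and return the first managed object with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    try:
        for obj in view.view:
            if obj.name == name:
                return obj
    finally:
        view.DestroyView()
    return None

def main():
    ctx = ssl._create_unverified_context()  # lab setup with self-signed certs
    si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        vm = find_by_name(content, vim.VirtualMachine, VM_NAME)
        host = find_by_name(content, vim.HostSystem, TARGET_HOST)
        if vm is None or host is None:
            raise SystemExit("VM or target host not found in inventory")
        # Live-migrate the VM onto the blade; storage stays on the shared FC array.
        task = vm.MigrateVM_Task(
            host=host,
            priority=vim.VirtualMachine.MovePriority.defaultPriority)
        print("Started vMotion of %s to %s (task: %s)" % (VM_NAME, TARGET_HOST, task))
    finally:
        Disconnect(si)

if __name__ == "__main__":
    main()
```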
Once the migration was handled, I was able to power off the old hosts with no issues. Now I am running on newish blades with fewer power supplies and less power in use. One last change was to implement the Brocade 8/24 fabric switches. Once they arrived, it was simply a case of replacing one VC-FC module with an 8/24 and rezoning the array; once that was done, I replaced the second VC-FC module and rezoned again so the second 8/24 could see the array as well.
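After each swap I wanted to confirm the zoning on the embedded Brocade looked right before touching the second module. Here is a hedged Python sketch that runs "zoneshow" over SSH with paramiko; the switch address and credentials are placeholders, output formats vary by Fabric OS release, and some releases may require an interactive shell rather than exec_command.

```python
#!/usr/bin/env python
"""Sanity-check zoning on a Brocade 8/24 after a VC-FC swap and rezone.

Assumptions: the embedded switch is reachable over SSH at SWITCH_HOST with the
placeholder credentials below, and its Fabric OS release supports 'zoneshow'.
"""
import paramiko

SWITCH_HOST = "brocade-bay3.example.lan"  # placeholder: interconnect bay address
SWITCH_USER = "admin"                     # placeholder credentials
SWITCH_PASS = "changeme"

def fos_command(command):
    """Run one Fabric OS CLI command over SSH and return its output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(SWITCH_HOST, username=SWITCH_USER, password=SWITCH_PASS,
                   look_for_keys=False)
    try:
        _, stdout, _ = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    # 'zoneshow' lists the defined and effective zone configuration, so I can
    # confirm the array and blade HBA WWNs landed in the zones I expect before
    # pulling the second VC-FC module.
    print(fos_command("zoneshow"))
```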
All I can say is that the redundant components in my blade enclosure have made maintenance a bit easier. Actually, redundant components within all my hardware make my life so much easier: redundant controllers, power supplies, and now blade interconnects. I have kept one DL380 G5 in the rack just in case of emergencies, but it is powered off and even unplugged. Eventually I will plug it in and build up some more redundancy.