Thinking about Blades? Downsides to consider…
Stephen Foskett is running a series about server blades — and as usual for someone who gets a lot of trial equipment to review, he’s pretty bullish on them.
After a few years with blades at my current company, I’m not. Unless you need the density that they offer, they’re probably not worth your time and money — and if you can afford them, you can probably afford to lease another rack or a larger cage.
While Stephen does an excellent job of covering the high points of blades, he skips or glosses over the downsides: you re-introduce several single points of failure (the backplane and the modules plugged into the chassis), you take on the extra management overhead of the switches attached to that backplane, and you add heat risk because of the miniaturized, densely packed components.
Think this is doom-and-gloom? We’ve got a bunch of hardware sitting in a pile that says it isn’t. One of our IBM BladeCenter chassis has only one slot that still works; the rest give you strange PCI bus errors, KVM that won’t respond, or a management module that fails to connect to properly installed hardware. Since the backplane and management modules are part of the chassis, IBM declined to replace it under our parts-and-labor warranty agreement; they said we’d have to replace the entire chassis at our own cost, because the chassis is not a Field Replaceable Unit.
Troubleshooting or upgrading parts on individual blades is a chore. Again, many of the parts aren’t technically Field Replaceable Units (and this includes parts like on-blade flash disks), so you’ll need to get out your oddball collection of Torx heads. It’s like laptop repair, with fine ribbons and cooling ducts stitching together byzantine layers of circuit boards. And let’s add another negative: even if you cool the systems appropriately and your cooling never gets overloaded, you still face heat-death problems beyond the term of a normal warranty. Many higher-ed institutions are starting to buy on a five-year lifespan instead of the traditional three, so high-density systems like blades or thumpers are not an advisable solution there.
Many blade chassis are limited on expansion module space. Depending on your I/O configuration, you need at least six expansion slots to have some semblance of redundancy: two management modules, two I/O bus modules (Fibre Channel, InfiniBand, 10GbE, SAS, etc.), and two Ethernet switch modules. The IBM BladeCenter S and E chassis only support four modules. The newer, higher-end options, the H and HT, support four high-speed slots and four legacy slots; keep that in mind when you’re thinking about expanding. Most of the modules only support six ports, which means you’d need three modules (and they can only go in the high-speed slots, of which you have four!) to give a single full-bandwidth Fibre Channel connection to a server in each of the 14 bays of a BladeCenter H, with no redundancy and no way to expand further. For environments that need both Fibre Channel and InfiniBand, you’re pretty much out of luck.
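If you want to check that math yourself, here’s a quick back-of-the-envelope sketch in Python. The numbers (14 bays, six ports per module, four high-speed slots) are the ones quoted above; doubling up for a second fabric is my assumption about how you’d actually want to run it.

```python
import math

# Back-of-the-envelope check of the I/O module math above.
# These numbers are the ones quoted in this post; adjust for your own chassis.
bays = 14               # server bays in a BladeCenter H
ports_per_module = 6    # ports on a typical I/O module
high_speed_slots = 4    # high-speed module slots in the chassis

# Modules needed to give every bay one full-bandwidth connection on ONE fabric:
modules_one_fabric = math.ceil(bays / ports_per_module)   # ceil(14 / 6) = 3

# Double it if you want a redundant second fabric:
modules_redundant = 2 * modules_one_fabric                 # = 6

print(f"Single fabric:    {modules_one_fabric} modules needed, "
      f"{high_speed_slots} high-speed slots available")
print(f"Redundant fabric: {modules_redundant} modules needed, "
      f"{high_speed_slots} high-speed slots available")
```

Three modules for a single fabric already eats three of your four high-speed slots; a redundant second fabric asks for six slots that simply don’t exist.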
Let’s not forget that each of the modules usually has a management interface of its own. The Fibre Channel modules have a console that you need to manage separately from any other Fibre Channel interfaces you might have. The switches have a Cisco IOS-like interface, unless you buy actual Cisco modules for your blade center. Why’s that a hassle? Keep in mind that you need to manage VLAN and trunking assignments and limits on both your core switch and your blade center’s switch.
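To make that doubled-up switch management concrete, here’s a small sketch. The VLAN IDs, names, device names, and uplink ports are all made up for illustration; the point is that one VLAN change turns into the same IOS-style commands on the core switch and again on every switch module in every chassis.

```python
# Illustration only: the VLANs, device names, and uplink ports below are
# hypothetical. The takeaway is the repetition, not the exact commands.
vlans = {100: "app-servers", 200: "db-servers"}
devices = {
    "core-switch": "GigabitEthernet1/0/48",   # uplink toward the blade chassis
    "bladecenter1-swm1": "ExtPort1",          # chassis switch module 1
    "bladecenter1-swm2": "ExtPort1",          # chassis switch module 2
}

# One VLAN change, applied once per device: the core AND each chassis module.
for device, trunk in devices.items():
    print(f"! --- {device} ---")
    for vid, name in vlans.items():
        print(f"vlan {vid}")
        print(f" name {name}")
    print(f"interface {trunk}")
    print(f" switchport trunk allowed vlan add {','.join(str(v) for v in vlans)}")
```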
So: high-bandwidth environments need not apply, since shared connections are the rule rather than the exception. Environments where adopting a new technology would just mean adding three or four PCI cards to the affected servers need not apply either; your chassis won’t have room for them.
For all of those “Features”, you gain the ability to save some floor space … and you pay a lot more.
Let me introduce you to a new technology called the “40 blade server”: take a 42U rack, set up appropriate power modules on it, and plug in 40 1U servers with a pair of switches at the top. Sure, there’s a bit more wire, but that’s easily managed. The 1U servers are individually less expensive than server blades and have a host of nice features, such as independent KVM and individual expansion card slots, that you won’t find in any blade server.
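For a rough sense of the floor space blades actually buy you over that rack, here’s a quick sketch. I’m assuming a 9U, 14-bay chassis in the BladeCenter H style; check your own chassis specs before trusting the blade side of the numbers. The 1U side is just the rack described above.

```python
# Rough density comparison for a single 42U rack.
# Assumption: a BladeCenter H-style chassis is 9U tall with 14 blade bays;
# verify against your own hardware before quoting these figures.
rack_units = 42

# The "40 blade server": forty 1U boxes plus a pair of 1U switches.
pizza_boxes = 40
pizza_box_units = pizza_boxes * 1 + 2 * 1         # 42U used

# The blade option:
chassis_units, bays_per_chassis = 9, 14
chassis_per_rack = rack_units // chassis_units    # 4 chassis fit
blades = chassis_per_rack * bays_per_chassis      # 56 blades

print(f"1U servers: {pizza_boxes} servers in {pizza_box_units}U")
print(f"Blades:     {blades} servers in {chassis_per_rack * chassis_units}U")
```

That extra handful of servers per rack is the floor space you’re paying the premium for.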
Admittedly, one place we have been very happy with “Bladed” components is our Cisco routers. The ability to hot swap modules and fail between modules is nice — but it’s something that we could manage without; it’s simply a better way to do things in the Cisco world since the price differential isn’t that high and the equipment lifespan is closer to ten years than to three.
But for compute? Heck with that. I see very few environments where blade centers are a good solution compared to a rack of 1U servers.