Downsizing, Rightsizing, Cloudsizing? Will today’s datacenter follow yesterday’s mainframe?

This blog was originally published as a column at ITSM Portal.

Many hypes in IT are just the same old idea, launched again with better technology and under a new name. Who remembers Larry’s original network computer? And who is just about to buy one, but now based on Android or iOS 4? Similarly, we could say of the datacenter: “The Datacenter is dead, long live the Virtual Datacenter”. The danger of this approach is that we treat the Virtual Datacenter just like any new type of infrastructure and simply rehost our existing applications by moving them from physical to virtual machines (P2V), just as we rehosted our applications from mainframes to minicomputers in the days of downsizing.

But if we only “rehost”, we will miss out big time on the potential benefits of virtualization: not just cost and energy reductions, but also business and IT agility, management efficiency, market responsiveness and service improvements. And there are several warning signs that this is exactly what is happening. The first warning sign came from David Linthicum, who signaled that “Bowing to IT demands, cloud providers move to reserved instances”. In that article he showed how Amazon “users can make a one-time, up-front payment to reserve a database instance in a specific region for either one or three years.” Upfront payment? Specific location? Three years? Sounds pretty much like buying a server to me. Now David saw it as a necessary evil to get “reluctant enterprises over the cloud fence”, but to me it was a first signal that traditional behavior was making it across to a new type of infrastructure.
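To make that “buying a server” comparison concrete, here is a minimal break-even sketch in Python. The hourly rates and the upfront fee below are hypothetical, not Amazon’s actual pricing; the point is simply that a reservation only pays off if you keep the instance running for most of the term, which is exactly how you would treat a server you own.

```python
# Hypothetical numbers for illustration only; real cloud pricing varies per region and instance type.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.40        # $/hour, pay-as-you-go (illustrative)
reserved_upfront = 1200.00   # $ one-time payment for a 1-year reservation (illustrative)
reserved_rate = 0.16         # $/hour while running, on top of the upfront fee (illustrative)

def on_demand_cost(hours_used):
    """Pure pay-per-use: cost scales with actual usage."""
    return on_demand_rate * hours_used

def reserved_cost(hours_used):
    """Upfront commitment plus a lower hourly rate."""
    return reserved_upfront + reserved_rate * hours_used

# Compare the two models at different utilization levels over one year.
for utilization in (0.10, 0.25, 0.50, 0.75, 1.00):
    hours = utilization * HOURS_PER_YEAR
    od, rs = on_demand_cost(hours), reserved_cost(hours)
    winner = "reserved" if rs < od else "on-demand"
    print(f"{utilization:>4.0%} utilization: on-demand ${od:>8.2f} vs reserved ${rs:>8.2f} -> {winner}")
```

With these made-up numbers the reservation only wins above roughly 57% utilization over the year, i.e. when the workload behaves like a permanently running server rather than an elastic one.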

This week this was confirmed in “Despite the promise of cloud, are we treating virtual servers like physical ones?”, a blog post by my colleague Jay Fry. He picked up on the fact that – according to recent market and vendor numbers – virtual servers at leading cloud providers are getting bigger and are used for significantly longer periods. Now “longer” is a relative term; I remember telling my (sales) manager that sales cycles were getting longer, after which he pointed out that if my customers never bought, the sales cycle would become infinitely long. Same here: if a workload is loaded onto cloud servers but never removed, you get what our friends at VMware call the “Hotel California Syndrome” (“You can check out any time you like, but you can never leave!”). As a result, the use of the cloud becomes similar to leasing hardware. You don’t own it, but you are still solely responsible for the usage. And as Jay points out in his blog, that was never the point of cloud computing; it was all about sharing and elasticity.
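One way to spot the Hotel California Syndrome in your own environment is simply to look at how long your instances have been running. Below is a rough sketch, assuming boto3 and configured AWS credentials; the 90-day cutoff is an arbitrary illustrative threshold, not a recommendation.

```python
# Sketch: flag cloud instances that have been running suspiciously long.
# Assumes boto3 is installed and AWS credentials are configured; the 90-day
# threshold is an arbitrary cutoff chosen for illustration.
from datetime import datetime, timedelta, timezone
import boto3

MAX_AGE = timedelta(days=90)

ec2 = boto3.client("ec2")
now = datetime.now(timezone.utc)

# Only look at instances that are currently running.
response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        age = now - instance["LaunchTime"]
        if age > MAX_AGE:
            print(f"{instance['InstanceId']} ({instance['InstanceType']}) has been "
                  f"running for {age.days} days - checked in, never checked out?")
```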

More serious is that simply rehosting does not add any value for users. Users never cared whether something ran in the back end on a mainframe or a mini, and likewise they won’t care whether it runs on a physical box, a virtual box or even a shoebox. What they care about is ease of use, flexibility, connectability, scalability, functionality and cost (and probably in that order). Traditional downsizing was often done purely for cost reasons, and initially the savings were quite considerable. So considerable that many started to declare the mainframe dead, a rumor that turned out to be greatly exaggerated. Pretty soon “the mainframe” reinvented itself and became more efficient, more connectable, more flexible and as a result greatly reduced its cost per transaction (the only cost that counts). Funnily enough, we already see the same happening with datacenters: under the name “private cloud” they are rapidly becoming more efficient, flexible, scalable, etc. Let’s face it, a private cloud is basically a datacenter with a fancy name; it is no more elastic or shared than leased servers. You’re still limited by your available resources. (Which, BTW, does not mean it cannot be a lot more agile, scalable and cost effective than a traditional datacenter.)
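Cost per transaction is trivial arithmetic, but it is worth spelling out because it is the number that actually decides these platform debates. A toy comparison, with monthly costs and transaction volumes that are entirely made up for illustration: the platform with the bigger bill can still be the cheaper one per transaction.

```python
# Toy cost-per-transaction comparison; all figures are invented for illustration.
def cost_per_transaction(monthly_cost, transactions_per_month):
    return monthly_cost / transactions_per_month

platforms = {
    # platform name: (monthly cost in $, transactions per month) - illustrative only
    "modernized mainframe": (250_000, 500_000_000),
    "distributed x86 farm": (120_000, 150_000_000),
}

for name, (cost, tx) in platforms.items():
    print(f"{name:>22}: ${cost_per_transaction(cost, tx) * 1000:.2f} per 1,000 transactions")
```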

The big question I have is whether the datacenter will follow the mainframe with regard to new applications. The rejuvenation of the mainframe stopped the further rehosting of applications to minis (also because the remaining applications were the biggest and most complex ones left). But nobody implements new applications on their mainframe anymore. New applications were by default installed on (Unix or Windows) minis – minis which, by the way, were by that time as big and as fast (and as expensive) as the modernized mainframes. And software vendors like SAP were even bringing out their new applications (SAP R3) exclusively for the new platform, even if they had been extremely successful on the old platform with R2. I guess you see where I am going. Several software vendors are now building their new applications exclusively for the cloud.

Will the traditional datacenter share the same fate and get more efficient at what it runs today, but not see many new applications enter its doors anymore? Now, it is early days. Cloud is just getting started (think of the era of PDP computers). But pretty soon some vendor or group of vendors will coin the term “Open Cloud” (remember the Open Systems scam) and that will be the end of it. Now, sure, some applications will not be cloud-suitable, just like some applications still run on mainframes despite their owners’ attempts to rehost them roughly every other year during the past decade. Many finally gave up (it was too hard and too complex) and outsourced them altogether. Funnily (or sadly) enough, we see a similar phenomenon around virtualization; we call it virtual stall. After virtualizing about 35% of the servers, many virtualization initiatives stop. After that, it becomes too hard and too complex. Now there may be some applications not suitable for virtualization, but I am sure it is not 65%; it might be 10% (similar to what we saw with mainframes).

The reasons these initiatives stall are varied. An important one is complexity. A distributed datacenter is light years more complex than any mainframe, and adding virtualization adds even more complexity. But that does not mean it cannot be done. Today’s cars are also light years more complex than a Model T Ford, yet today’s mature garages manage to run and maintain them more reliably and more (fuel) efficiently. Maturity is the keyword here: using a virtualization maturity model, IT departments can get the complexity under control and reap the benefits of an almost fully virtualized datacenter. And don’t underestimate the true benefit of that: even if we add all new applications exclusively to the cloud, it will be decades before the majority of a modern organization’s applications are running there. We affectionately call these applications our legacy or installed base; they were not built overnight and for sure they won’t disappear overnight either.

Now, we mentioned a lot of hardware in this blog, while I normally only talk about software. But while on the topic, it is funny to see how two non-traditional platforms are rapidly making inroads into the traditional datacenter, potentially replacing the traditional incumbents and getting an unusually enthusiastic reception from their users (see links). One is the Cisco UCS platform, a platform designed from the ground up to run virtual workloads (user review). The other one – surprise, surprise – is the next generation … mainframe (review). It’s designed around the fastest CPU on the market today and is gunning to become the backbone for loads and loads of distributed servers, currently for Linux and AIX, but soon also for other platforms. So even if today’s datacenter may be past its Prime (pun intended), soon it will be a cool place to live and work (and I don’t mean because of the air-conditioning).
