This blog was originally published as a column at ITSM Portal.
Many hypes in IT are just the same old idea, launched again with better technology and under a new name. Who remembers Larry’s original network computer? And who is just about to buy one, now based on Android or iOS 4? We could say the same for the datacenter: “The datacenter is dead, long live the Virtual Datacenter”. The danger of this approach is that we treat the Virtual Datacenter just like any other new type of infrastructure and simply rehost our existing applications by moving them from physical to virtual machines (P2V), just as we rehosted our applications from mainframes to minicomputers in the days of downsizing.
This week this was confirmed in “Despite the promise of cloud, are we treating virtual servers like physical ones?”, a blog by my colleague Jay Fry. He picked up on the fact that – according to recent market and vendor numbers – virtual servers at leading cloud providers are getting bigger and are being used for significantly longer periods. Now “longer” is a relative term. I remember telling my (sales) manager that sales cycles were getting longer, after which he pointed out that if my customers never bought, the sales cycle would become infinitely long. Same here: if a workload is loaded onto cloud servers but never removed, you get what our friends at VMware call the “Hotel California Syndrome” (“You can check out any time you like, but you can never leave!”). As a result, using the cloud becomes similar to leasing hardware: you don’t own it, but you are still solely responsible for its usage. And as Jay points out in his blog, that was never the point of cloud computing; it was all about sharing and elasticity.
The big question I have is whether the datacenter will follow the mainframe with regard to new applications. The rejuvenation of the mainframe stopped the further rehosting of applications to minis (also because the remaining applications were the biggest and most complex ones left). But nobody implements new applications on their mainframe anymore. New applications were by default installed on (Unix or Windows) minis – minis which by that time were as big, as fast (and as expensive) as the modernized mainframes. And software vendors like SAP were even bringing out their new applications (SAP R3) exclusively for the new platform, even though they had been extremely successful on the old platform with R2. I guess you see where I am going: several software vendors are now building their new applications exclusively for the cloud.
Will the traditional datacenter share the same fate and get more efficient at what it runs today, but not see many new applications enter its doors anymore? Now, these are early days. Cloud is just getting started (think of the era of PDP computers). But pretty soon some vendor or group of vendors will coin the term “Open Cloud” (remember the Open Systems scam) and that will be the end of it. Sure, some applications will not be cloud-suitable, just like some applications still run on mainframes, even though their owners attempted to rehost them roughly every other year during the past decade. Many finally gave up – it was too hard and too complex – and outsourced them altogether. Funnily (or sadly) enough, we see a similar phenomenon around virtualization; we call it virtual stall. After virtualizing about 35% of the servers, many virtualization initiatives stop. Beyond that point, it becomes too hard and too complex. Now there may be some applications not suitable for virtualization, but I am sure it is not 65%; it might be 10% (similar to what we saw with mainframes).
The reasons these initiatives stall are varied. An important one is complexity. A distributed datacenter is light years more complex than any mainframe, and adding virtualization adds even more complexity. But that does not mean it cannot be done. Today’s cars are also light years more complex than a Model T Ford, yet today’s mature garages manage to run and maintain them more reliably and more (fuel) efficiently. Maturity is the keyword here: using a virtualization maturity model, IT departments can get the complexity under control and reap the benefits of an almost fully virtualized datacenter. And don’t underestimate the true benefit of that: even if we add all new applications exclusively to the cloud, it will be decades before the majority of a modern organization’s applications run there. We affectionately call these applications our legacy or installed base; they were not built overnight, and for sure they won’t disappear overnight either.
Now, we mentioned a lot of hardware in this blog, while I normally only talk about software. But while on the topic, it is funny to see how two non-traditional platforms are rapidly making inroads into the traditional datacenter, potentially replacing the traditional incumbents and getting an unusually enthusiastic reception from their users (see links). One is the Cisco UCS platform, a platform designed from the ground up to run virtual workloads (user review). The other one – surprise, surprise – is the next-generation … mainframe (review). It is designed around the fastest CPU on the market today and is gunning to become the backbone for loads and loads of distributed servers, currently for Linux and AIX, but soon also for other platforms. So even if today’s datacenter may be past its Prime (pun intended), soon it will be a cool place to live and work (and I don’t mean because of the air-conditioning).