Read Part 1 here.
How does Infrastructure as a Service (IaaS) impact the role of our Cloud IT Manager? Well, first of all, he will need to learn some new skills, the first being virtualization management. Second, to have any chance of deploying his current intertwined spaghetti of applications into the cloud, he needs to find a method to disentangle these applications. Deploying virtualization on an in-house infrastructure (an internal cloud) can be a very workable catalyst here. He also needs to find a way to offer his applications just as cost-effectively and scalably as “competing” SaaS providers; again, virtualization may be the way to do so. Does this mean virtualization is all that matters? No, but without it, any “cloud” attempt is the same as plain old outsourcing, hosting or time sharing.
This virtualization needs to go hand in hand with automation; together they form the building blocks of any cloud. A cloud environment implies dynamically scaling capacity up and down based on demand, and this is not possible fast enough if we create and configure our virtual machines manually. That is where automation comes in. Conversely, automation without virtualization would not work, as the complexity of the provisioning tasks to be automated would simply be too high. Cost savings are not the only reason for deploying automation: we also want our Cloud IT Manager to spend more time on business and less on technology, or if you like, more time with users and less with plumbing, as this is a crucial cloud benefit.
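To make the idea concrete, here is a minimal sketch of the kind of scaling decision such automation makes. The thresholds, guard rails and the notion of a single utilization metric are all assumptions for illustration; a real environment would drive this from a vendor's provisioning API and monitoring data.

```python
# Hypothetical sketch of a demand-based scaling rule. All numbers
# below are illustrative assumptions, not recommendations.

SCALE_UP_THRESHOLD = 0.80    # average utilization above which we add a VM
SCALE_DOWN_THRESHOLD = 0.30  # average utilization below which we remove a VM
MIN_VMS, MAX_VMS = 2, 20     # guard rails so automation cannot run away

def rebalance(current_vms: int, avg_utilization: float) -> int:
    """Return the VM count for the next interval, based on demand."""
    if avg_utilization > SCALE_UP_THRESHOLD and current_vms < MAX_VMS:
        return current_vms + 1
    if avg_utilization < SCALE_DOWN_THRESHOLD and current_vms > MIN_VMS:
        return current_vms - 1
    return current_vms
```

The point is not the arithmetic but the cadence: a loop like this can run every minute, which no manual change process can match.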
Probably just as important as virtualization and automation for a successful cloud strategy is a reliable infrastructure for accessing the cloud (a.k.a. the network). Unlike users of traditional PC applications, users of cloud applications are very unforgiving of network outages or delays. A provider of SaaS bookkeeping applications in the Amsterdam area lost a significant number of its SME customers after a two-day outage of the major local Internet provider. Our Cloud IT Manager is likely to lose his job if he allows a similar mishap in an enterprise environment. Together with security, this is one of the reasons why companies are starting today with an “Internal Cloud”, even if their long-term strategy is to leverage the public cloud (see question 4 in this Wall Street Journal cloud pop quiz).
Another important thing to realize is that in the majority of cases “the Cloud”, internal or external, will initially be an additional set of infrastructure. In most organizations the introduction of minicomputers did not replace the mainframe, and neither did the Wintel space heaters replace all Unix servers. Anyone who believes “the Cloud” will replace all in-house systems may also believe we can fix the climate problem by everybody driving electric cars by 2015. In addition, companies will not have “one cloud”. They will source cloud resources from multiple vendors to optimize cost and balance risk. This means that, instead of reducing complexity, any cloud effort may initially increase complexity.
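A tiny sketch of what multi-vendor sourcing looks like in practice: pick the cheapest platform per workload, while keeping a second vendor in rotation for risk balancing. The vendor names and prices here are invented purely for illustration.

```python
# Hypothetical sketch: per-workload vendor selection across several
# clouds (including the internal one). Names and prices are invented.

price_per_hour = {"vendor_a": 0.12, "vendor_b": 0.10, "internal": 0.15}

def cheapest_vendor(exclude=()):
    """Pick the lowest-cost vendor, optionally excluding some, e.g. to
    keep a second vendor in rotation and avoid lock-in."""
    candidates = {v: p for v, p in price_per_hour.items() if v not in exclude}
    return min(candidates, key=candidates.get)
```

Note how even this trivial policy already implies tracking prices and placements across three platforms, which is exactly the extra complexity the paragraph above warns about.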
If you felt that having good change processes and reliable configuration data was important in today’s relatively stable datacenters, guess how crucial it will be in a dynamic “provision to order” cloud environment, where virtualization enables a given process to use different (virtual) resources every day, hour or even minute. We all know the stories about IT departments that are afraid to switch off a certain server because they have no idea what it does. Now imagine this being a virtual server that we are paying for by the minute. We had better understand which (business) processes this server is supporting, so we can decide whether it is safe to switch it off or not.
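In code, the decision above amounts to a configuration lookup before deprovisioning. This is a hypothetical sketch; the `cmdb` mapping stands in for a real configuration management database, and the server names are invented.

```python
# Hypothetical sketch: consult configuration data before switching a
# virtual server off. The cmdb dict is an assumed stand-in for a real
# configuration management database.

cmdb = {
    "vm-042": ["invoicing", "payroll"],  # business processes it supports
    "vm-077": [],                        # no known dependents
}

def safe_to_switch_off(server: str) -> bool:
    """A server is a shutdown candidate only if the configuration data
    records no business processes depending on it."""
    dependents = cmdb.get(server)
    if dependents is None:
        # Unknown server: our configuration data is incomplete,
        # so do not touch it until it is investigated.
        return False
    return len(dependents) == 0
```

The interesting branch is the last one: a server missing from the configuration data is treated as unsafe, because in a pay-by-the-minute environment an undocumented resource is itself the risk.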
Helping us understand and manage all this complexity is of course exactly where frameworks like ITIL and COBIT come in. Ideally they help us build an understanding of goals, risk, cost, configurations and especially interdependencies, offering a transparent view across these various infrastructure platforms. But we need more than just a view: ideally we want to be able to dynamically move applications to the (cloud) platform that is the most cost or energy efficient. This demands an integrated view across platforms, and although frameworks like ITIL and COBIT are infrastructure agnostic, most of us unfortunately deployed them with platform- or department-specific procedures and processes. Some integration of these procedures will need to be done. Let’s make sure we do not create yet another set of cloud procedures to be filed next to our mainframe and Windows procedures, especially as virtualization enables us to eventually move applications more or less freely across these platforms.
The last item to mention here separately is risk. Last week the Times Online edition spoke about stormy times for cloud computing in the context of Microsoft and T-Mobile’s Sidekick data-loss mishap. Dismissing the cloud because of risk would be throwing out the baby with the bathwater. I know of no companies that build their own hard disks because they do not trust hard disk vendors. They do take precautions: they don’t buy the cheapest, they keep a backup, and they have a recovery plan. That is why any cloud IT investments, not just IaaS, should go through the proper (IT) channels, so they can be screened for risk, security and cost issues. Risk management and cloud deployment have to go hand in hand, and COBIT is a good way to connect them.
In my next blog post: SaaS and the role of our Cloud IT Manager.