A (real-time) blog from VMworld Europe

Originally published at ca.com/blogs on October 12, 2010, 03:56 AM

This is a live blog, so apologies for typos and possible other errors.

This morning 6,000 people streamed into the Bella Center in Copenhagen to see the VMworld opening keynote. I’ll cover the new items compared to the event in San Francisco. According to VMware’s CMO, virtualization is about efficiency (utilization) and reliability, but foremost about agility. Businesses increasingly rely on, or even consist completely of, IT, and if IT is not agile, neither is the business. Paul Maritz opens by sharing some statistics: the number of x86 deployments on virtual infrastructure has now surpassed the deployments on physical infrastructure (of course, most of these physical deployments will still exist tomorrow, on their way to becoming legacy, while the virtual ones may be gone tomorrow). More statistics: there are now more than 10 million VMs, growing at 28%. There are more copies of Windows and Linux on virtual machines than on physical machines. Maritz called the virtualization layer the “new infrastructure layer,” making it very clear VMware is creating the new platform (some might say the new Windows).

According to Maritz, this new infrastructure will be very much about automation and management, with automation going first (management is the necessary evil; the value add comes from automation). This new infrastructure is not just about compute power, but also about storage and networking, and the biggest change will be in security. Security also needs to leave the physical realm and move into the virtual realm, focused on logical boundaries instead of physical boundaries. Getting security right will be very important to enabling the innovation and agility that virtualization promises. On to private and hybrid clouds and a new way to “purchase infrastructure”. Hybrid cloud is (as described earlier in this blog) defined by VMware as running VMware in your own data center and running VMware in the data center of your service provider(s). Personally I think the world will be a bit more hybrid, and so should hybrid clouds be. But it is interesting to see how hybrid is rapidly becoming mainstream (despite apprehensions about security).

Next: applications. Maritz feels that if virtualization is only used to re-host 15-year-old (often still batch-oriented) applications on a new infrastructure, it will not add the agility the business is looking for. So the new agile world will require new applications built on a new (development) platform. And – no surprise here – Maritz sees a big role for Java running on the SpringSource framework (again, my expectation is that reality may be a bit more hybrid). And just like the virtualization hypervisor layer, these development frameworks isolate the application further from the hardware and make it more portable. By developing on SpringSource or similar platforms – like Ruby on Rails – application portability again becomes a possibility or even a reality. (I had not heard about this Holy Grail for a while, but I do remember – and have lived through – the 4GL era, where 4GL tools basically ran your application on anything that came with a plug. Somehow Java killed that movement, but it seems to be making a return in the form of development frameworks that can run on many PaaS platforms, like Google App Engine and Force.com.)

In record speed Maritz went on to rogue IT, a.k.a. SaaS applications. He mentioned that 15 SaaS applications somehow made it in the door at VMware and how internal IT is left holding the bag, expected to support them in a compliant and secure manner. Maritz compares this to the entry of rogue PCs and departmental servers into companies in the ’80s. The observation is correct and it is a real challenge. Maritz feels IT needs to focus on delivering applications and not on managing and monitoring devices (we see the same challenge and feel that adopting a supply chain approach is the pragmatic answer to this, as opposed to a more traditional manufacturing/factory-oriented approach).

At this moment Maritz hands over to the CTO to discuss some of the innovations VMware is working on in its product stack. He starts with the core: vSphere (called “The Virtual Giant” on the slide).

Downsizing, Rightsizing, Cloudsizing? Will today’s datacenter follow yesterday’s mainframe?

This blog was originally published as a column at ITSMportal

Many hypes in IT are just the same old idea, launched again, but with better technology and under a new name. Who remembers Larry’s original network computer? And who is just about to buy one, but now based on Android or iOS 4? Similarly, we could say of the datacenter: “The Datacenter is dead, long live the Virtual Datacenter”. The danger of this approach is that we treat the virtual datacenter just like any new type of infrastructure and simply rehost our existing applications by moving them from physical to virtual machines (P2V), just as we rehosted our applications from mainframes to minicomputers in the days of downsizing.

But if we only “rehost”, we will miss out big time on the potential benefits of virtualization: not just cost and energy reductions, but also business and IT agility, management efficiency, market responsiveness and service improvements. And there are several warning signs that this is exactly what is happening. The first warning sign came from David Linthicum, who signaled that “Bowing to IT demands, cloud providers move to reserved instances”. In that article he showed how at Amazon “users can make a one-time, up-front payment to reserve a database instance in a specific region for either one or three years.” Upfront payment? Specific location? Three years? Sounds pretty much like buying a server to me. Now David saw it as a necessary evil to get “reluctant enterprises over the cloud fence”, but to me it was a first signal that traditional behavior was making it across to a new type of infrastructure.

This week this was confirmed in “Despite the promise of cloud, are we treating virtual servers like physical ones?”, a blog by my colleague Jay Fry. He picked up on the fact that – according to recent market and vendor numbers – virtual servers at leading cloud providers are getting bigger and are used for significantly longer periods. Now “longer” is a relative term; I remember telling my (sales) manager that sales cycles were getting longer, after which he pointed out that if my customers never bought, the sales cycle would become infinitely long. Same here: if a workload is loaded onto cloud servers but never removed, you get what our friends at VMware call the “Hotel California Syndrome” (“You can check out any time you like, but you can never leave!”). As a result, the use of the cloud becomes similar to leasing hardware. You don’t own it, but you are still solely responsible for the usage. And as Jay points out in his blog, that was never the point of cloud computing; it was all about sharing and elasticity.

More serious is that this type of simple rehosting does not add any value for the users. Users never cared whether something ran in the back on a mainframe or a mini, and likewise they won’t care whether it runs on a physical box, a virtual box or even a shoebox. What they care about is ease of use, flexibility, connectability, scalability, functionality and cost (and probably in that order). Traditional downsizing was often done purely for cost reasons, and initially the savings were quite considerable. So considerable that many started to declare the mainframe dead, a rumor that turned out to be greatly exaggerated. Pretty soon “the mainframe” reinvented itself and became more efficient, more connectable, more flexible and as a result greatly reduced its cost per transaction (the only cost that counts). Funnily enough we already see the same happening with datacenters: under the name “private cloud” they are rapidly becoming more efficient, flexible, scalable, etc. Let’s face it, a private cloud is basically a datacenter with a fancy name; it is no more elastic or shared than leased servers. You’re still limited by your available resources. (Which, BTW, does not mean it cannot be a lot more agile, scalable and cost effective than a traditional datacenter.)

The big question I have is whether the datacenter will follow the mainframe with regard to new applications. The rejuvenation of the mainframe stopped the further rehosting of applications to minis (also because the remaining applications were the biggest and most complex ones left). But nobody implements new applications on their mainframe anymore. New applications by default were installed on (Unix or Windows) minis – minis which, by the way, by that time were as big and as fast (and as expensive) as the modernized mainframes. And software vendors like SAP were even bringing out their new applications (SAP R/3) exclusively for the new platform, even though they had been extremely successful on the old platform with R/2. I guess you see where I am going. Several software vendors are now building their new applications exclusively for the cloud.

Will the traditional datacenter share the same fate and get more efficient at what it runs today, but not see many new applications enter its doors anymore? Now, it is early days. Cloud is just getting started (think of the era of PDP computers). But pretty soon some vendor or group of vendors will coin the term “Open Cloud” (remember the Open Systems scam) and that will be the end of it. Now, sure, some applications will not be cloud-suitable, just like some applications still run on mainframes, despite their owners having attempted to rehost them roughly every other year during the past decade. Many finally gave up – it was too hard and too complex – and outsourced them altogether. Funnily (or sadly) enough we see a similar phenomenon around virtualization; we call it virtual stall. After virtualizing about 35% of the servers, many virtualization initiatives stop. After that, it becomes too hard and too complex. Now there may be some applications not suitable for virtualization, but I am sure it is not 65%; it might be 10% (similar to what we saw with mainframes).

The reasons these initiatives stall are varied. An important one is complexity. A distributed datacenter is light years more complex than any mainframe, and adding virtualization adds even more complexity. But that does not mean it cannot be done. Today’s cars are also light years more complex than a Model T Ford, yet today’s mature garages manage to run and maintain them more reliably and more (fuel) efficiently. Maturity is the keyword here: using a virtualization maturity model, IT departments can get the complexity under control and reap the benefits of an almost fully virtualized datacenter. And don’t underestimate the true benefit of that; even if we add all new applications exclusively to the cloud, it will be decades before the majority of modern organizations’ applications will be running there. We affectionately call these applications our legacy or installed base; it was not built overnight and for sure it won’t disappear overnight either.

Now, we mentioned a lot of hardware in this blog, while I normally only talk about software. But while on the topic, it is funny to see how two non-traditional platforms are rapidly making inroads into the traditional datacenter, potentially replacing the traditional incumbents and getting an unusually enthusiastic reception from their users (see links). One is the Cisco UCS platform, a platform designed from the ground up to run virtual workloads (user review). The other one – surprise, surprise – is the next generation … mainframe (review). It is designed around the fastest CPU on the market today and is gunning to become the backbone for loads and loads of distributed servers, currently for Linux and AIX, but soon also for other platforms. So even if today’s datacenter may be past its Prime (pun intended), soon it will be a cool place to live and work (and I don’t mean because of the air-conditioning).

VMworld 2010. Two trends and how they converge.

You may have missed it in the flurry of news from Apple, but VMware recently had their annual get-together at the Moscone Center in San Francisco. On stage VMware shared two key insights: successful virtualization is becoming more about orchestration and automation than about hypervisors.  And, private clouds will rapidly develop into hybrid clouds. I agree on both but believe the combination of these two trends has some distinct consequences that did not get picked up by the media.

Let me start with a disclaimer and some disclosure. I followed the event not on-site but through the California blogosphere reporting on the event, and I work for CA Technologies.

A third trend, by the way, was that more and more vendors (like VMware last week and CA Technologies back in May) resort to using a professional comedian to introduce the concept of cloud computing at their annual customer events. Somehow IT became so complex we need animations and comedians to explain what we sell. Makes you think – doesn’t it? Imagine Steve Jobs having to ask Jerry Seinfeld to explain the iPod.

Of course, we at CA Technologies agree with VMware on the importance of management, but given we are a company making a living out of selling IT management and IT automation solutions on top of many different platforms, you might say it is a pretty safe bet for us to agree. Embracing the hybrid cloud as a strategy is a bit more of a risky bet for vendors. Private cloud is a great name for marketing reasons. Private sounds safe, while public sounds kind of … well, public, or insecure and scary. And even though many technical people understand that once you can provision automatically into a private cloud you can provision into any cloud, the typical corporate IT buyer and his partner in buying decisions – the CFO – are still quite conservative.

So, embracing hybrid clouds so wholeheartedly is quite a step for a company that just overcame a similar fear of the unknown – a fear that plagued virtualization. And to celebrate this hybrid cloud idea, VMware even decided not to build a private cloud on site like the datacenter under the stairs of the Moscone Center they built last year (which was already a step up from the traditional datacenter that historically is located in a basement).

But unfortunately, cloud computing is not just about hybrid cloud, which was roughly defined at VMworld as having VMware installed in your own datacenter and in the datacenter of your cloud providers. It is about a hybrid world, where providers run many different kinds of infrastructure and platforms if it economically makes sense to do so. How hybrid this new world is was actually acknowledged when Maritz talked about SaaS (Software as a Service). He mentioned that VMware is running 15 SaaS applications, none of which he approved, and none of which they have managed to get under single sign-on yet. The question is, of course, how organizations should address these “rogue applications” and how organizations can gain insight into what they use and who they use it from. At CA Technologies, we see this as an important problem. In response, we teamed up with Carnegie Mellon University to start the Service Measurement Index (SMI) and cloudcommons.com. Cloud Commons is an independent community that allows you to keep track of what is available in the market, and SMI gives you a way to measure and compare IT services you are using internally and from the cloud.

Since Maritz is a former Microsoft executive, the statements most quoted by the media were the ones referencing Microsoft and Windows. For Maritz it is no longer about the hypervisor; it is about the application platform. And in my personal view, his eyes are set firmly on replacing Windows, not Hyper-V. Now, ambition is good, but at the same time, one should be careful not to throw away any old shoes before the new ones fit comfortably. Currently many VM implementations stall at about 40% (as my colleague Andi Mann discussed recently on CIO.com), which compared to early Windows deployments may seem great, but is not good enough by today’s standards. Plus – in my personal opinion – the last thing the corporate world is waiting for is the next Windows. The experience of living through the last one – all the way from 3.1 to 7.0 – was bad enough. Cloud computing finally promises a way to consume services without having to worry about maintaining a platform.

That is what cloud management for end-users should focus on (which entails a lot more than a self-service portal where users can pick PC configurations). At the same time, providers will need extremely robust, flexible and easy-to-deploy platforms to render these services, but given how new this industry is and how diverse these services are, these provider platforms will need to be hybrid and diverse.

Tomorrow’s cloud will be as hybrid as today’s datacenter, so we better get used to it. It’s called consumerisation, and with end user departments going out to procure complete cloud services – like SaaS-based CRM – it goes way beyond “bring your iPad to work” days. What we need in IT is a mind shift from thinking we can provide all of the IT our users will ever need, to finding a way to help our users consume all of the IT they could ever want (in a safe, secure and cost-effective way). We call this moving from being the manager of the IT factory to being the orchestrator of the IT supply chain.

This blog is cross-posted at http://ca.com/blogs
Do you tweet? Follow Gregor on Twitter @gregorpetri.

End of Outsourcing, Death of the Web, Self Managing Clouds? Not so fast, just yet!

Sure, it may all happen, but expect a similar timeframe as for the paperless office

This post was originally published as a column at ITSMportal

Predicting the future is a lot more fun than analyzing the past, but as Mel Brooks might say, “A funny thing happened on the way to the future; it changed from what we expected.” And there have been plenty of predictions recently. For starters, Wired Magazine announced the death of the (browser-based) web, predicting it will be replaced by dedicated, locally installed desktop or mobile applications – those things we now call “Apps.” As you can imagine, this article prompted a large response from bloggers – and emotions were nearing outrage in some cases. Most of the reaction came from people who simply love their browsers, but one can imagine that many SaaS vendors also had a rough night. Being able to run multiple SaaS applications next to each other, while still offering a rather consistent, integrated look and feel – courtesy of HTML and the common web experience – is pretty fundamental to the long-term success of SaaS.

Just a week prior, BusinessWeek ran an article by AT Kearney titled  “The End of Outsourcing (as we know it)”  in which they predict today’s outsourcers will be rapidly replaced by cloud outfits, in the relentless pursuit of economies of scale. They even go as far as to pick winners (Amazon and Google), potential winners (Oracle and SAP) and losers (today’s outsourcers, especially the midsized Indian companies).

AT Kearney sees today’s outsourcing champions, such as HP and Accenture, as hesitant to become cloud providers. Surprisingly (or perhaps not?) the authors mention neither IBM, by far today’s largest player (HP may be a bigger company today, but mainly because they still sell lots of printers and PCs), nor Apple. Now, you may argue that Apple is a consumer company, but as today’s innovations get introduced into consumer markets first, we could expect Apple to move its innovations into the enterprise market soon, offering enterprise versions of cloud offerings like MobileMe (maybe then called MobileInc?). That is, if the world indeed changes as fast as AT Kearney suggests in the BusinessWeek article.

But that is exactly the issue. Today’s big enterprise IT is just not that agile. Much of what is outsourced today still consists of code that was first written 20 years ago. We saw several companies try to “right-size” their pre-relational mainframe databases year after year, always concluding that it either did not have any ROI, simply was not worth the effort, or that the risk was too high. And as an SAP executive recently said, many large ERP requirements are still far away from anything cloudy.

Now don’t get me wrong, one sure prediction we can make is that tomorrow will be vastly different from today. In fact, today is already vastly different from yesterday, as Phil Nash pointed out tongue-in-cheek in a recent tweet: “Welcome to the new decade: Java is a restricted platform, Google is evil, Apple is a monopoly and Microsoft are the underdogs.” But at the same time, companies with expansive IT operations will move slowly, as Brian Stevens, CTO of Red Hat, seems to agree in a recent interview on Bloomberg TV: “It’s going to be several decades before the technology arrives and our [financial services] customers are using the capabilities of cloud more readily.”

We may not directly notice this dichotomy, because our magazines, articles and the enormous flood of social media focus almost completely on describing shiny new projects (the 20% of the average IT budget) and hardly at all on keeping the lights on (the 80% of the average IT budget). In fact, the view may be even more distorted, because – as Marcel den Hartog recently described – some of these older systems are so efficient that they run the majority of the enterprise’s transactions at a fraction of the total IT cost.

Still not convinced? Have a look at Gizmodo’s animated history of the internet protocol or this infographic. It shows how it took almost 50 years to get to the internet protocol (which, by the way, is still changing). And now some are predicting cloud will change everything this year? (As an aside, the infographic made me realize I am officially one year older than the internet, which I guess is why I – unlike Gizmodo – still know what a coax cable looks like!)

All of this talk about predictions reminds me of the paperless office. Remember all the hype and anticipation around that? Funny thing, it never happened. In fact we now print more than ever before (making HP bigger than IBM) and only this year we finally see a device that may get us to this “paperless dream.” Yes, I mean the iPad, and it is not by coincidence (it never is at Apple) that the only function missing from iOS is … a printing function. The other major change attributed to the iPad (and its smaller sibling, the iPhone) is Wired’s announced return of the App mentioned earlier.

Personally, I believe Apps are a much preferred way to consume content, but the average knowledge worker is not paid to merely consume content. Wouldn’t it be great if you could spend your days reading blogs like this one, and be paid to do so? But we’re not. We’re expected to add value by analyzing, combining, mashing up and composing new content, or by putting this content in a new context. Capturing that in a single app sounds a lot like George’s job on The Jetsons. Just press one button and all the rest is automated! (A bit like James Urquhart’s vision of Self Managing Clouds – interesting and good to think/write about, but still far away.)

In the end, SaaS vendors can rest assured; it will be a while before they are rendered obsolete. Likewise for outsourcers. Sure, outsourcers should be thinking about adding cloud services – such as IaaS – to their portfolios. But at the same time, we see the main pioneer of IaaS, Amazon, taking a distinct step back last week – as David Linthicum described – by starting to offer reserved instances. These are machines dedicated to one customer for anywhere between 1 and 3 years (which is longer than most modern outsourcing contracts).

Despite all of the ongoing debate, I am convinced cloud will happen for existing applications, but it will likely happen after we stop writing about it (see 4 p’s in a pot innovation ). Today, cloud will grow in new technology areas (for example, almost all social media sites are in the cloud) or with new things we simply do not yet do (like systems that help George Jetson make smarter decisions through massive data analysis and number crunching). And that is not a bad thing. If we need to choose between deploying cloud to make the systems we already have 5% more efficient or to do 5 new things we do not do today, I think we would all choose the latter. But of course, I am an evangelist and not a CFO. The remaining question is, what would or should today’s CIO do? What are your thoughts or suggestions?

Can the Real Cloud Market Size Please Stand Up?

It seems like every week another sizing of the cloud market is published, and – maybe as is to be expected – none of them seem to agree. Let’s have a look at who is saying what, and whether we are comparing apples to apples, or apples to oranges.

We will start by looking at SaaS. The most recent numbers from IDC claim that SaaS revenue will grow 5 times faster than traditional packaged software. This would mean little if traditional packaged software is expected to no longer grow (five times zero would still be zero). Joe McKendrick at ZDNet took IDC’s numbers and extrapolated from them that “very soon, a third of all software will be delivered via cloud.”

This seems to directly contradict Gartner numbers from just a month earlier. In June Gartner released a report stating that “Software as a service (SaaS) will have a role in the future of IT, but not the dominant future that was first thought.” In the same report, Gartner notes that some of the bad habits of the traditional software market – like massive shelf-ware as a result of the desire to get higher discounts by closing enterprise license agreements – have also entered the cloud market. In the press release about the report, Gartner predicts the SaaS market will reach $8.8 billion in 2010 while IDC talks about $13.1 billion SaaS revenue in 2009. I am sure there are some definition differences, but a $5 billion difference on something that they both call “WW SaaS revenue” seems statistically significant, or does it?

One explanation is that IDC specifically includes a large expected growth in SaaS management software, in addition to SaaS application software. The reason is that the technology – specifically network technology – is now ready for SaaS-delivered functionality like remote monitoring and remote configuration management. This is significant because, contrary to what you may expect, the market for “management software” is actually bigger than the market for “application software.” About 10 years ago, incidentally about the same time I moved jobs from ERP to system software, the cost of IT management surpassed the cost of IT applications. So there is clearly a reason why everyone is so enthusiastic about cloud computing (running this stuff yourself is simply too expensive).

But back to the numbers (working for a software vendor, you can understand why we have some interest in these). If SaaS revenue indeed exceeds traditional software revenue within a few years, would that be relevant? Well, not really, because what makes up a SaaS fee is different from a traditional software fee. SaaS includes hosting (hardware), network, backup, new releases, support, change management, etc. Software costs may actually be only 10 to 20% of the total. Ideally we should compare SaaS costs with the TCO (Total Cost of Ownership) of applications. This would be a good idea, if people actually knew the TCO of their applications. But that is another topic altogether.
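To make that comparison concrete, here is a minimal sketch in Python, using purely hypothetical placeholder numbers (none of them come from the analyst reports mentioned above), of why comparing a SaaS fee with the license fee alone is misleading, while comparing it with the full TCO tells a very different story.

```python
# Illustrative only: every figure below is a hypothetical placeholder, not analyst data.

def on_premise_tco(license_fee, hardware, network, backup, upgrades, support, change_mgmt):
    """Rough yearly total cost of ownership for an on-premise application."""
    return license_fee + hardware + network + backup + upgrades + support + change_mgmt

saas_fee_per_year = 100_000  # hypothetical all-in SaaS subscription

license_only = 15_000        # the "software" part, often only 10-20% of the total
on_prem_total = on_premise_tco(
    license_fee=license_only,
    hardware=20_000,
    network=5_000,
    backup=10_000,
    upgrades=15_000,
    support=20_000,
    change_mgmt=25_000,
)

# Against the license fee alone, SaaS looks several times more expensive;
# against the full TCO, the picture changes completely.
print(f"SaaS fee vs. license only: {saas_fee_per_year / license_only:.1f}x")
print(f"SaaS fee vs. full TCO:     {saas_fee_per_year / on_prem_total:.1f}x")
```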

What a difference a day makes!
When we look at the total cloud market, the differences are even more astounding. Within days of each other, Gartner estimated the cloud market to reach $148 billion by 2014, which basically contradicts IDC, which just two days earlier had estimated $55 billion for 2014. John Treadway highlights one reason why the Gartner numbers are so high in his post “Why Gartner’s Cloud Numbers Don’t Add Up (Again!)”. He points out that Gartner includes Google AdWords advertising revenue in its cloud market numbers.

So, at the high level, these numbers do not add up. But that is not such a big issue, because if you take a close look at individual markets, you see huge differences in SaaS adoption. For example, most customers now use SaaS solutions for Project and Portfolio Management (PPM), while still relying on on-premise solutions for PC configuration, anti-virus and ERP. While PC configuration was mainly held back by technical limitations, ERP suffered more from the traditional attitude of typical ERP buyers. As usual, the technical limitations around the former are likely to be resolved faster than the user culture issues around ERP.

In the end, trying to put a number on cloud adoption or the overall market opportunity for cloud can feel like looking into a crystal ball. We can even say that the numbers are not of much help. Cloud is growing fast – on that point industry analysts, vendors and end users all seem to agree – but exactly how fast we will not know for sure until after it has happened. For many looking to rely on these numbers for strategic, operational or market planning, this may seem like an issue, but we should ask ourselves, is it really? Consider this: two shoe salesmen flew into a third world country. After getting off the plane, both phoned their home offices. One said “I am coming back home, because nobody wears shoes here,” while the other salesman sent a telegram asking for more supplies and five colleagues to join him. He saw a huge market opportunity precisely because nobody wears shoes there! Same data, different conclusion.

It’s also wise to look at what we’re comparing. In the cloud market, it does not really matter whether cloud is bigger than traditional software (as we know it today) in 2014 or 2040. Instead, what does matter is that we’re focusing on investing more in functionality that does great things for the business rather than focusing on the gear that manages the plumbing.

On Cloud Lock-in, Standards, Decoupling and why SaaS does not scale

With security and legal concerns being slowly addressed by the industry, lock-in and standards are rapidly becoming the biggest concerns regarding cloud computing. If the cloud industry is to make good on its promise, these will need to somehow be addressed. Let’s examine some recent developments.

Interesting to see how, just a week after my blog on “The Principles and Perils of Vendor Lock-in” *1, several vendors made announcements seemingly supporting my suggested approach. For example, after hinting at the potential benefit of decoupling SaaS and PaaS from the underlying Infrastructure (IaaS) layers, Microsoft announced it is making Azure available as a PaaS platform to several large IaaS providers *2. Now I am sure this had nothing to do with my blog on preventing lock-in and everything to do with a desire in Redmond to increase market share for their PaaS platform, which ironically – if too successful – may even increase lock-in. But the move will offer customers who select Microsoft’s PaaS platform a choice of vendors for the underlying infrastructure services (IaaS).

At the same time, NASA and Rackspace announced they are joining forces around an open source platform for private clouds called OpenStack *3. Rackspace’s initiative is no doubt as commercially motivated as Microsoft’s. If Rackspace – in my view correctly – expects that many private clouds in the foreseeable future will start to source additional capacity (cloudburst) from public clouds, then having these private clouds based on the same architecture as its public cloud offering will help Rackspace. NASA’s motives stem from the US government’s cloud stimulus approach *4, a specific stated goal of which is “to accelerate the creation of cloud standards”. If history is to repeat itself, we can expect to first see industry standards lead to a “plug compatible cloud market” before a serious “open standards cloud market” takes shape. As NASA is determined to have workable cloud standards a lot faster than the decade or so it took to get a man on the moon, it is understandable that they see the Rackspace route as a viable shortcut. This is also understandable because agreeing on open cloud standards today would be as difficult as agreeing on 3D TV standards back in the days of the black & white moon landing broadcasts. (And, reader beware, if there ever was a time to keep options open and not lock yourself into what looks to become an early standard, it would be today.)

A decoupled cloud

For many readers my recommendation to prevent lock-in by decoupling the choice of application (SaaS) and platform (PaaS) vendors from the underlying choices of infrastructure (IaaS) vendors was pure heresy. Their logic was this: if you want control over the underlying layers, you should not embark on cloud computing, because the whole idea of cloud computing is that someone else is responsible for the underlying layers. But that’s like saying that if you don’t want to buy clothes made by underage children, you should get a sewing machine and make your own clothes.

Others struggled with imagining what such a decoupled cloud would look like in practice. Luckily, also last week (it was indeed an eventful week for cloud computing), a first real-life example of such a decoupled offering went live. Skygone Inc. announced they are offering a choice of GIS (Geographical Information System) services *5 by aggregating solutions from several GIS software vendors across a choice of infrastructure platforms and vendors. Companies in need of such geographical information – which is a complex and specialized area, beyond the expertise and interest of most internal IT departments – can now simply source this without locking themselves into a specific vendor or platform. (Disclosure: Skygone’s underlying platform is AppLogic from 3Tera, now owned by my employer CA Technologies.)

Several analyst firms predicted early on that this type of “brokering of cloud services” would become an important market force. But in recent months – maybe under the influence of several self-proclaimed 100-pound gorillas entering the cloud market – the analyst community became very quiet about the concept, which is a shame, because it also addresses the fact that in an enterprise context “SaaS does not scale”.

SaaS does not scale?

Now before readers get all wound up (again) over “SaaS does not scale”: I do not mean that SaaS applications cannot scale to service millions of users. They do already, although some more successfully and reliably than others. I mean that the average enterprise or government organization, which typically has a portfolio of several hundreds or even thousands of applications, simply cannot afford to source these from a similar number of SaaS providers. The mandatory auditing of the infrastructure and processes of all these providers would simply not be feasible, especially as a leading analyst firm just pointed out that a SAS 70 certificate is no replacement for such mandatory due diligence *6a. They did so at about the same time they suggested that for many a SaaS vendor it would make sense to partner with IaaS vendors for delivery of their services *6b, and that the traditional SaaS market may not grow to be as big as many initially expected *6c (which shows once more that predicting developments and/or placing customer bets in a brand new area like cloud computing continues to be a risky business).

A better way

Summarizing: the described mix-and-match approach of a decoupled, brokered cloud aims to allow enterprises to select the applications they need from several SaaS vendors, pick the platforms they like from a choice of PaaS vendors and deploy these across their choice of selected and audited IaaS vendors, without running into lock-in or scalability issues.
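As a thought experiment only (not a description of any existing product, nor of Skygone’s implementation), the decoupling can be sketched in a few lines of Python; all vendor and platform names below are made-up placeholders.

```python
from dataclasses import dataclass, replace

# Toy model of a decoupled, brokered cloud: the application, platform and
# infrastructure choices are independent fields, so changing one layer does
# not force a change in the others. All names are illustrative placeholders.

@dataclass(frozen=True)
class Deployment:
    saas_app: str        # application chosen from a brokered SaaS catalog
    paas_platform: str   # platform chosen from a PaaS catalog
    iaas_provider: str   # one of the enterprise's selected and audited IaaS providers

crm = Deployment(saas_app="CRM Suite A",
                 paas_platform="Spring/Java",
                 iaas_provider="Audited IaaS Provider X")

# Moving the workload to another audited infrastructure provider leaves the
# application and platform choices untouched - which is the whole point.
crm_moved = replace(crm, iaas_provider="Audited IaaS Provider Y")
print(crm_moved)
```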

Now it is important to understand that this approach does not in any way, shape or form resemble the old way IT used to work. Let’s use an analogy from the consumer IT market to describe the difference:

  • IT, the old way: As a consumer you would go to a computer store to pick a software package, let’s say a cooking application. From the 20 available offerings you pick one (likely the one with the nicest picture on the box), only to arrive home and discover your PC has a release of the operating system / database / browser that is not supported. After fixing this (there goes the weekend), you still cannot get it to run. You solicit some consulting from your neighbor/nephew/colleague, while your spouse remarks that at this rate you will be eating takeout for another month (no pressure!). Finally, during week 3, you get it to work, although printing still has its quirks. You have learned a lot more about your PC, but little about cooking. One month later you buy a new PC and strangely the whole thing stops working again. Luckily the vendor sends you an email in which they offer an upgrade that runs on your new PC. Comparing it to the cost of takeout, you decide to buy the upgrade.
  • IT, the new way: You feel hungry. Without leaving your seat you visit the app store on your phone; they offer 60 cooking applications and you pick the one most downloaded (after reading some of the user comments). You prepare your first dish. It is too salty. You blame the application, remove it, and pick another one. That tastes better. You decide whether you use the free version (which includes an automatically printed shopping list for the supermarket chain sponsoring the app) or pay 20 cents per recipe cooked.

The decoupled cloud experience we are aiming for should of course feel like the second scenario. Also note how in the first example we talked mainly about technology and in the second mainly about cooking. Somehow we in IT moved from talking about what our companies do (selling soup, soap or insurance) to mainly discussing technologies (like SOA, SOAP and yes: Cloud).

In other words, we need to change from being mainly supply-driven, with IT in the role of factory manager running the production of services, to a demand-driven IT organization, with IT in the role of a supply chain manager finding the best way to source functionality for the business, preferably without locking our company into a dead-end street. The end goal is being able to deliver the 20% that really differentiates our company, while at the same time being able to source the 80% that is pretty much the same for all companies.

That type of agility is the real promise of cloud computing.

This post originally appeared on July 26 at ITSMportal.com  

Notes:

*1 The Principles and Perils of Vendor Lock-In

*2 Microsoft announced it is making Azure available as a PaaS platform to several large IaaS providers

*3 NASA and Rackspace announced they are joining forces around an open source platform for Private Clouds called OpenStack

*4 The US government investments in cloud computing could be seen as a modern day industry stimulus package. In my view current efforts of NASA and the like may have as deep an impact on cloud computing as the cold war DoD budgets had on the development of computer networks and the Apollo project had on technology advancement in general.

*5 Skygone Inc. announced offering a choice of GIS (Geographical Information System) services

*6a SAS 70 is Not Proof of Security, Privacy, or Continuity Compliance

*6b Public Cloud Infrastructure Helps SaaS Vendor Economics

*6c Organizations Need to Re-Evaluate the Rationale for SaaS

Vendor lock-in and cloud computing

This blog was originally published at ITSMportal.com on July 14, 2010

IT vendor lock-in is as old as the IT industry itself. Some may even argue that lock-in is unavoidable when using any IT solution, regardless of whether we use it “on premise” or “as a service”. To determine whether this is the case, we examine traditional lock-in and the to-be-expected impact of cloud computing.

Vendor lock-in is seen as one of the potential drawbacks of cloud computing. One of Gartner’s research analysts recently published a scenario in which lock-in and standards even surpass security as the biggest objection to cloud computing. Despite efforts like Open Systems and Java, we have managed to get ourselves locked in with every technology generation so far. Will the cloud be different, or is lock-in just a fact of life we need to live with? Wikipedia defines vendor lock-in as:

In economics, vendor lock-in, also known as proprietary lock-in, or customer lock-in, makes a customer dependent on a vendor for products and services, unable to use another vendor without substantial switching costs. Lock-in costs which create barriers to market entry may result in antitrust action against a monopoly.

Let’s examine what lock-in means in practical terms when using IT solutions and whether cloud computing makes this worse or better. For this we look at four dimensions of lock-in:

Horizontal lock-in: This restricts the ability to replace a product with a comparable or competitive product. If I choose solution A (let’s for example take a CRM solution or a development platform), then I will need to migrate my data and/or code, retrain my users and rebuild the integrations to my other solutions if I want to move to solution B. This is a bit like how, when I buy a Prius, I cannot drive a Volt. But it would be nice if I could use the same garage, charging cable, GPS, etc. when I switch.

Vertical lock-in: This restricts choice in other levels of the stack and occurs if choosing solution A mandates the use of database X, operating system Y, hardware vendor Z and/or implementation partner S. To prevent this type of lock-in the industry embraced the idea of open systems, where hardware, middleware and operating systems could be chosen more independently. Before that time, hardware vendors often sold specific solutions (like CRM or banking) that only ran on their specific hardware, OS, etc. and could only be obtained in their entirety from them. A bit like today’s (early market) SaaS offerings, where everything needs to be obtained from one vendor.

Diagonal (or inclined) lock-in: This is the tendency of companies to buy as many applications as possible from one provider, even if that provider’s solutions in some of those areas are less desirable. Companies picked a single vendor to make management, training and especially integration easier, but also to be able to demand higher discounts. A trend that led to large, powerful vendors, which in turn caused higher degrees of lock-in. For now we call this voluntary form of lock-in diagonal lock-in (although “inclined” – a synonym for diagonal – may describe it better).

Generational lock-in: This last one is as inescapable as death and taxes and is an issue even if there is no desire to avoid horizontal, vertical or diagonal lock-in. No technology generation, and thus no IT solution or IT platform, lives forever (well, maybe with the exception of the mainframe). The first three types of lock-in are not too bad if you had a good crystal ball and picked the right platforms (e.g. Windows and not OS/2) and the right solution vendors (generally the ones that turned out to become the market leaders). But even such market leaders at some point reach end of life. Customers want to be able to replace them with the next generation of technology without that being prohibitively expensive or even impossible because of technical, contractual or practical lock-in.

The impact of cloud computing on lock-in
How does cloud computing, with incarnations like SaaS (software as a service), PaaS (platform as a service) and IaaS (infrastructure as a service), impact the above? In the consumer market we see people using a variety of cloud services from different vendors, for example Flickr to share pictures, Gmail to read email, Microsoft to chat, Twitter to tweet and Facebook to … (well, what do they do on Facebook?), all seemingly without any lock-in issues. Many of these consumer solutions now even offer integration amongst each other. Based on this, one might expect that using IT solutions “as a service” in an enterprise context also leads to less lock-in. But is this the case?

Horizontal: For the average enterprise, moving from one SaaS solution to another is not so different from moving from one traditional software application to another, provided they have agreed whether and how their data can be transferred. What does help is that SaaS in general seems easier and faster to implement and that it is not necessary for the company to have two sets of infrastructure available when migrating.

For PaaS it is a very different situation, especially if the development language is proprietary to the PaaS platform. In that case the lock-in is almost absolute and comparable to the lock-in companies may have experienced with proprietary 4GL platforms, with the added complexity that with PaaS the underlying infrastructure is locked in as well (see under Vertical).

Horizontal lock-in for IaaS may actually be less severe than lock-in to traditional hardware vendors, as virtualization – typical for any modern IaaS implementation – isolates workloads from underlying hardware differences. Provided customers do not lock themselves in to a particular hypervisor vendor, they should be able to move their workloads relatively easily between IaaS providers (hosting companies) and/or internal infrastructure. A requirement for this is that the virtual images can be easily converted and carried across, a capability that several independent infrastructure management solutions now offer. Even better would be the ability to move full composite applications (more about this in another post).
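As a minimal illustration of the “converted and carried across” part only, the sketch below wraps the open source qemu-img tool (assumed to be installed) to convert a VMware-format disk into a format another provider might accept. The file paths and formats are hypothetical examples, and a real migration also involves guest drivers, image metadata and network configuration, which this deliberately ignores.

```python
import subprocess

def convert_image(src_path: str, dst_path: str,
                  src_fmt: str = "vmdk", dst_fmt: str = "qcow2") -> None:
    """Convert a virtual disk image between formats using the qemu-img CLI.

    This covers only the disk-format part of a migration; guest drivers,
    image metadata and network settings still need separate attention.
    """
    subprocess.run(
        ["qemu-img", "convert", "-f", src_fmt, "-O", dst_fmt, src_path, dst_path],
        check=True,  # raise if qemu-img reports an error
    )

if __name__ == "__main__":
    # Hypothetical paths: a VMware disk exported from one provider,
    # converted for import at another provider that accepts qcow2.
    convert_image("exported/webserver.vmdk", "import/webserver.qcow2")
```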

Vertical: For SaaS and PaaS, vertical lock-in is almost by definition part of the package, as the underlying infrastructure comes with the service. The good news is that the customer does not have to worry about these underlying layers. The bad news is that if the customer is worried about the underlying layers, there is nothing he can do. If the provider uses exotic databases, dodgy hardware or has his datacenter in less desirable countries, all the customer can do is decide not to pick that provider. He could consider contracting upfront for exceptions, but this will in almost all cases increase the cost considerably, as massive scale and standardization are essential to the business model of real SaaS providers.

On the IaaS side we see less vertical lock-in, simply because we are already at a lower level, but ideally our choice of IaaS server provider should not limit our choice of IaaS network or IaaS storage provider. For storage, the lesson we learned the hard way during the client-server era – for enterprise applications, logic and data need to be close together to get any decent performance – still applies. As a result the storage service almost always needs to be procured from the same IaaS provider as used for processing. On the network side most IaaS providers offer a choice of network providers, as they have their datacenter connected to several network providers (either at their own location or at one of the large co-locators).

Diagonal or inclined: The tendency to buy as much as possible from one vendor may be even stronger in the cloud than in traditional IT. Enterprise customers try to find a single SaaS shop for as many applications as possible. Apart from the desire for out-of-the-box integration, an often overlooked reason for this is that customers need to regularly audit the delivery infrastructure and processes of their SaaS providers, something which is simply unfeasible if they end up with hundreds of SaaS vendors.

For similar reasons we see customers wanting to buy PaaS from their selected SaaS or IaaS vendor. As a result, vendors are trying to deliver all flavors, whether they are any good in that area or not. A recent example is the statement from a senior Microsoft official that Azure and Amazon were likely to become more similar, with the first offering IaaS and the second likely to offer some form of PaaS soon.

In my personal view, it is questionable whether such vertical cloud integration should be considered desirable. The beauty of the cloud is that companies can focus on what they are good at and do that very well. For one company this may be CRM, for another it is financial management or creating development environments, and for a third it may be selling books – um, strike that – hosting large infrastructures. Customers should be able to buy from the best in each area. CFOs do not want to buy general ledgers from CRM specialists, and for sure sales people don’t want it the other way around. Similar considerations apply for buying infrastructure services from a software company or software from an infrastructure hosting company. At the very least this is because developers and operators are different types of people, which no amount of “devops training” will change (at least not during this generation).

Generational: As with any new technology generation, people seem to feel this may be the final one: “Once we have moved everything to the cloud, we will never move again.” Empirically this is very unlikely – there always is a next generation, we just don’t know what it is (if we did, we would try and move to it now). The underlying thought may be: “Let the cloud vendors innovate their underlying layers, without bothering us”. But vendor lock-in would be exactly what would prevent customers from reaping the benefits of cloud suppliers innovating their underlying layers. Let’s face it, not all current cloud providers will be innovative market leaders in the future. If we were unlucky and picked the wrong ones, the last thing we want to be is locked in. In today’s market, picking winning stocks or lotto numbers may be easier than picking winning cloud vendors (and even at stock picking we are regularly beaten by not very academically skilled monkeys).

Conclusion
My goal for this post was to try and define lock-in, understand it in a cloud context and agree that it should be avoided while we still have a chance (while 99% of all business systems are not yet running in the cloud). Large-scale vertical integration is typical for immature markets – be it early-day cars or computers or now clouds. As markets mature, companies specialize again on their core competencies and find their proper (and profitable) place in a larger supply chain. The lock-in table at the end, where I use the number of padlocks to indicate the relative locking of traditional IT versus SaaS, PaaS and IaaS, is meant more for discussion and improvement than as an absolute statement. In fact our goal should be to reduce lock-in considerably for these new platforms. In a later post I will discuss some innovative cross-cloud portability strategies to prevent lock-in when moving large numbers of solutions into the cloud. Stay tuned.

PS Not that I for a minute think my blogs have any serious stopping power, but do not let the above stop you from moving suitable applications into the cloud today. It’s a learning experience that we will all need as this cloud thing gets serious for serious enterprise IT (and I am absolutely sure it will, as the percentage of suitable applications is becoming larger every day). Just make sure you define an exit strategy for each first, as all the industry analysts will tell you. In fact, even for traditional IT it always was a good idea to have an exit strategy first (you did not really think these analysts came up with something new, did you?).

Might the cloud prove Thomas J. Watson right after all?

In 1943 former IBM president Thomas J. Watson allegedly *1 said: “I think there is a world market for maybe five computers”. Will cloud computing prove Watson to be right after all?

Anyone who has visited a computer, internet or mobile conference in recent years is likely to have heard someone quoting Watson, most often to show what a risky endeavor predicting the future is. But is it? Maybe five for the world is not so crazy after all?

Now don’t get me wrong, I am not suggesting there will be fewer digital devices in the future. In fact there will be more than we can imagine (phones, iPads, smart cars and likely several things implanted into our bodies). But the big data-crunching machines that we – and I suspect Mr. Watson – traditionally think of as computers are likely to decline radically in number as a result of cloud computing. One early sign of this may be that a leading analyst firm – which makes a living out of publishing predictions – now foresees that within 2 years, one fifth of businesses will own no IT assets *2.

Before we move on, let’s further define “computer” for this discussion. Is a rack with six blades one computer or six? I’d say it is one. Just as I feel a box (or block) hosting 30 or 30,000 virtual machines is still one computer. I would even go so far as to say that a room with lots of boxes running lots of stuff could be seen as one computer. And let’s not forget that computers in the days of Mr. Watson were as big as rooms. So basically the proposed idea is: cloud computing may lead to “a world market with maybe five datacenters”. Whether these will be located at the bottom of the ocean (I think we have about 5 of those *5), distributed into outer space to solve the cooling problem or located on top of nuclear plants to solve the power problem, I leave to the hardware engineers (typical implementation details).

Having five parties hosting datacenters (a.k.a. computers) to serve the world – how realistic is this? Not today, but in the long run, let’s say for our children’s children. It seems to be at odds with the idea of grids and the use of all this computing power doing little or nothing in all these distributed devices (phones, iPads). But does that matter? Current statistics already show that a processor in a datacenter with 100,000 CPUs is way cheaper to run than that same processor in a datacenter with 1,000 CPUs. But if we take this “bigger is better” idea (ouch, this hurts; at heart I am a Schumacher “Small is Beautiful” *6 fan) and apply it to other industries, companies would logically try and have one factory. So Toyota would have one car factory and Intel one chip factory. Fact is, they don’t, at least not today. Factors like transport cost and logistical complexity prevent this. Not to mention that nobody would want to work there or even live near these, and that China may be the only country big enough to host these factories (uhm, I guess China may already be trying this?).

But with IT we can theoretically reduce transport to the speed of light, and logistical complexity in a digital setting is a very different problem. Sure, managing 6,000 or 600,000 different virtual machines needs some thought (well, maybe a lot of thought), but it does not have the physical limitations of trying to cram 60 different car models, makes and colors through one assembly line. If instead of manufacturing we look at electricity as a role model for IT – as suggested by Nicholas Carr – then the answer might be something like ten plants per state/country (but decreasing). Now we need to acknowledge that electricity suffers from the same annoying physical transport limitations as manufacturing. It does not travel well.

So I guess my question is: what is the optimal number?
How many datacenters will our children’s children need when this cloud thing really starts to fly?

  • A. 5 (roughly one per continent/ocean)
  • B. .5K (roughly the number of Nuclear power plants (439))*3
  • C. 5K (roughly 25 per country)
  • D. .5M (roughly/allegedly the current number of Google servers)*4
  • E. 5M (roughly the current number of air-conditioned basements?)
  • F. 5G (roughly the range of IPv4 (4.2B))

Please post your thoughts / votes / comments below

*1 Note: Although the statement is quoted extensively around the world, there is little evidence Mr Watson ever made it http://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_misquote
*2 http://www.gartner.com/it/page.jsp?id=1278413  
*3 http://www.icjt.org/an/tech/jesvet/jesvet.htm
*4 http://www.datacenterknowledge.com/archives/2009/05/14/whos-got-the-most-web-servers/  
*5 http://geography.about.com/library/faq/blqzoceans.htm
*6 http://en.wikipedia.org/wiki/Small_Is_Beautiful   

How the cloud gives the consumerisation of IT a whole new meaning

This blog was originally published at ITSMportal.com on May 31st, 2010

The cloud essentially “consumerizes” all of IT, not just relatively unimportant bits like procuring personal hard- and software. This requires a whole rethinking of corporate IT, as the idea of any master design becomes unattainable. How can IT as a species survive this trend as it may render the education of a whole generation of IT-ers irrelevant? On the brighter side – it really caters for the talents of today’s teenagers: consumption as a lifestyle.

The idea of consumerisation – users being allowed to freely procure their own personal hard- and software – has been around for a while. But few CIOs and even fewer heads of IT Operations have embraced it. Other than some token adoption, where users could choose between an iPhone or a BlackBerry or where users got a personal budget to order from the company-supplied catalog of pre-approved hardware, we see little adoption of the concept. The idea is that users can go to any consumer store or webshop, order any gadget they like – be it an iPad, laptop, printer or smartphone – and configure it, basically while still in the store, to access their corporate mail, intranet and company applications. The idea originated when people wanted to use their 24-inch HD PC with four processors and mega memory – all essential to enjoy modern home entertainment and video, and far superior to company standard-issue equipment – to also do some work.

Cloud computing now makes such a consumer approach possible at the departmental level as well. Departments selecting and using non-corporate-approved or -endorsed SaaS-based CRM applications are the most commonly cited example. But more interesting are the cases where departments – tired of waiting for their turn in the never-shrinking application backlog of corporate IT – turned to a system integrator to build a custom cloud application to meet their immediate needs. Several system integrators indicate that they have more and more projects where not IT but the business department is their prime customer. Contracts, SLAs and even integrations are negotiated directly between the SI and the business department; in some cases IT is not even involved or aware.

Now this is not a new phenomenon. We saw the exact same thing when PCs and departmental servers were introduced. Departments went off on their own and bought “solutions” from vendors popping up like the proverbial poppy seeds and often disappearing just as quickly after (remember Datapoint, Wang, Digital? And those were the ones that lasted). Guess who the business expected to clean up (integrate) the mess they left behind? Yes, the same IT departments they bypassed in the first place. One may even argue that if IT had not been so busy cleaning up this mess over the last 15 years, it would have had a much better chance at building an integrated solution that actually did meet the business’s needs. I am not of that opinion. With ERP we got this chance (and the associated billions) and still did not manage to keep up with the requirements; some things are just too vast, too complex or simply change too fast to be captured in any master design.

So back to consumerisation. Although the trend has been far from wholeheartedly embraced by most corporate IT, it is continuing. In my direct environment I see several people who, instead of plugging their laptop into the corporate network at the office, take a 3G network stick to work. For around 20 euros a month this gives them better performance accessing the applications they care about, not to mention access to applications most corporate IT departments do not care for, like Facebook, Twitter, etc. The question is, of course, can they do their work like that? Don’t they need all-day, full-time access to the aforementioned fully vertically integrated ERP system? The answer is no. First of all, the vertically integrated type of enterprise that ERP was intended for no longer exists. Most corporations have taken to outsourcing distribution to DHL or TNT, employee travel to the likes of American Express, HR payroll and expenses to XYZ, etc. The list goes on and on.

All these external service providers support these services with web based systems that can be accessed from anywhere, inside and outside the company firewall. At the same time, the remaining processes that occur in the corporate ERP system are so integrated that they hardly require any manual intervention from employees. Consequently employees don’t need to spend their time doing data entry or even data updates or analysis on that system. Any remaining required interaction is facilitated by directly interfacing with the customer via the web shop or via other web based systems. One could say that the world moved from vertically integrated manufacturing corporations to supply chain connected extended enterprises.

The question I will address in my next post is how the cloud, by enabling consumerisation of enterprise applications, plays a role in this, and what it means for IT moving forward.

On the supply side of IT, it means applications are best delivered as easily consumable services to employees and others (partners, customers, suppliers). One large European multinational is already delivering all its new applications as internet (so not intranet) applications, meaning any application can be accessed from anywhere by simply entering a URL and authenticating properly. Choosing which applications to provide internally is based on whether there are outside parties willing and able to provide these services, or whether the company can gain a distinct advantage by providing the service itself.

When speaking about consuming services, one should try to think broader than just IT services. The head of distribution may be looking for a parcel-tracking system, but when you ask the CEO or the COO they are more likely to think of a service in terms of something a DHL or TNT delivers. Services such as warehousing and distribution, but also complaint tracking, returns and repairs, or even accounting, marketing and reselling, all including the associated IT parts of those services. It is the idea of everything as a service, but on steroids (XaaSoS). Please note that even when an organization decides to provide one of these services internally, it can still source the underlying infrastructure and even applications “as a service” externally (this last scenario, strangely enough, is what many an IT person seems to think of exclusively when discussing cloud computing).

On the demand side of IT the issue is an altogether different one. How do we warrant continuity, efficiency and compliance in such a consumption-oriented IT world? If it is every man (or department) for themselves, how do we prevent suboptimisation? In fact, how do we even know what is going on in the first place? How do we know what services are being consumed? This is the new challenge, and it is very similar to what companies faced when they decided to no longer manufacture everything themselves, abandoning vertical integration where it made sense and taking a “supply chain” approach. Cloud computing is in many aspects a similar movement, and here too a supply chain approach looks like the way to go.

Such a supply chain approach means thoroughly understanding both demand and supply, matching the two, and making sure that the goods – or in this case services – reach the right audience at the right time (on demand). IT has invested a fair amount of time and effort in better ways and methodologies to understand demand. On the supply side, IT has until now assumed it was the supplier. In that role it used industry analysts to classify the components required, such as hardware and software. In this new world it needs to start thoroughly understanding the full services that are available on the market. An interesting effort worth mentioning here is the SMI (Service Measurement Index), an approach to classifying cloud services co-initiated by my employer, CA Technologies, and led by Carnegie Mellon University.

After having gained an understanding of both demand and supply, the remaining task is “connecting the dots”. This sounds trivial, but it is an activity that analysts estimate will become a multi-billion-dollar industry within just a few years. It includes non-trivial tasks like identifying which users are allowed to do which tasks in this now open environment, and optimizing the processes by picking the resources that have the lowest utilization and thus the lowest cost (a toy sketch of such a matching step follows below). Because going forward scarcity will determine price, especially in the new cloud world (which resembles Adam Smith’s idea of a perfect open market a lot more closely than any internal IT department ever did or will).
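
As a toy illustration of that matching step, here is a minimal Python sketch (all provider names, prices and utilization figures are invented for the example) that picks, for a requested service, the offer with the lowest current utilization and, as a tie-breaker, the lowest price – roughly what an automated cloud broker would do at far larger scale:

```python
# Toy "cloud broker": match a requested service to the least-utilized, cheapest offer.
# All provider names, prices and utilization figures are made up for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class Offer:
    provider: str
    service: str
    price_per_hour: float   # assumed unit price
    utilization: float      # 0.0 (idle) .. 1.0 (fully loaded)

catalog = [
    Offer("ProviderA", "crm", 1.20, 0.85),
    Offer("ProviderB", "crm", 1.35, 0.40),
    Offer("ProviderC", "collaboration", 0.10, 0.55),
]

def pick_offer(service: str, offers: List[Offer]) -> Offer:
    """Return the matching offer with the lowest utilization, then the lowest price."""
    candidates = [o for o in offers if o.service == service]
    if not candidates:
        raise LookupError(f"no provider currently offers {service!r}")
    return min(candidates, key=lambda o: (o.utilization, o.price_per_hour))

print(pick_offer("crm", catalog))  # -> ProviderB: less loaded, despite the higher price
```

In a real brokered market the selection criteria would of course also include compliance, identity (who is allowed to consume what) and contractual terms, not just utilization and price.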

Now of course all of the above won’t happen overnight. Many a reader (and with a little luck the author) will have retired by the time today’s vertically integrated systems – many of which are several decades old and based on solid, reliable mainframes – have become services that are brokered in an open cloud market. A couple of high-profile outages may even delay this by a generation or two. But long term I see no other way. Other markets (electricity, electronics, publishing and even healthcare) have taken or are taking the same path. It is the era of consumption.

PS Short term, however, the thing we (IT) probably need most is a new diagramming technique. Why? From the above it will be clear that – in such a consumerised world – static architecture diagrams are a thing of the past. And an IT person without a diagram is like a fish without water. We need something that allows us to evolve our IT fins into feet and our IT gills into lungs, so we can transition from water to land and not become extinct in the process. One essential aspect will be that, unlike pictures of clouds and very much like real clouds, the diagrams will need to be able to change dynamically, much like the pictures in a Harry Potter movie (it’s magic). Who has a suggestion for such a technique?

Why Cloud spells C.o.m.p.e.t.i.t.i.o.n. for the average IT department

This blog was originally posted at ITSMportal by columnist Gregor Petri on April 19th, 2010

Competition seems to be a controversial topic for many in IT. We would rather see ourselves as service providers, but typically as the only – or at least the preferred – service provider. The reason to start this new column series on ‘The impact of cloud computing on IT service management’ with this controversial topic is that there seem to be two independent trains of thought around cloud computing. On the one hand cloud computing is seen as a way to make traditional IT more efficient; on the other it is seen as a way for users to source IT solutions directly. The first group talks about Infrastructure as a Service and private clouds, while the second talks less but rapidly implements Software as a Service solutions, often bypassing the IT department in the process. Both groups are implementing cloud computing, but from very different starting points. Somehow they need to start talking again; otherwise we either get ‘strangers passing in the night’ or ‘a train wreck waiting to happen’.

For the first time in its history IT is facing outside competition. Sure, outsourcing was no picnic, but outsourcing was more like subcontracting to a ‘friendly’ supplier than real competition. With cloud computing users can simply go outside to procure the services they need. I am currently watching an interesting example close by. While the internal IT department is scrambling to offer an in-house social-media-style collaboration environment, one user department has already gone outside. To protect the innocent we won’t mention whether this was a production, sales, marketing, R&D or other department, but you get the idea. Starting in Australia, furthest away from corporate headquarters – both in distance and in time zones – they set up a collaboration environment with an outside cloud provider. In just a few weeks every member of this global department was sharing their activities, thoughts and projects, and enjoying the typical communication that people enjoy on social networks.

As this cloud service is low cost (it even starts free), easy to use, and offers anywhere, anytime access, also from non-HQ-supported devices such as iPhones and home PCs, the chances of IT winning this department back for its corporate service are dim at best. One good soul tried to help IT by requesting a similar online watering hole from corporate IT. As instructed he filled out a service request form at the central service desk, but to date he is still awaiting the first response from IT (a first response likely to consist of questions about priorities, about which executive will sign this off and which cost center it needs to be charged to). Now this may not be a mission-critical enterprise system, but similarly we see user departments contracting directly with system integrators to build new enterprise solutions on a PaaS platform. My point is that many IT departments still seem to be in denial about the realities of this new competitive world called cloud. Time for a wake-up call.

Now IT is not the first department in corporate history to face some serious competition. Here is a wake-up analogy from the consumer electronics industry (if you’re not big on analogies, just substitute ‘application’ for ‘TV’, ‘IT’ for ‘factory’ and ‘cloud’ for ‘Japan’). About two decades ago a company from my country was the global market leader in color TVs. Back then the average life cycle of a TV, before a new model would arrive, was 3.5 years. The average price was fairly stable at around 800 euros and basically all components were custom designed and produced in house. Becoming the head of a TV factory was the ultimate career dream for many in my home town. Just a few years later, after Japan and Korea entered the global market, prices had dropped by 40% (and continued halving every two years), new models replaced old ones every 6 months, and innovations such as remote controls, stereo, PiP and c-text determined market leadership. Our local multinational nearly did not make it through this transition. To cope they introduced ‘just in time’ and ‘total quality’ and started ‘design for manufacturing’, heavily utilizing standard off-the-shelf components to accommodate the much shorter life cycles. And to top things off they stopped producing the main component (CRTs) in house; instead they created a production joint venture (a.k.a. a ‘cloud’) with their biggest competitor.

Overnight the head of manufacturing had to change from being ‘the king of low-cost production’ to ‘the fastest orchestrator of the supply chain’. Agility became the word. But agility did not replace the need for low cost, high quality or advanced innovation. It was about delivering all of those at the same time and at breakneck speed. Some industries decided this was just too hard and stopped in-house manufacturing altogether; others saw it as an opportunity for differentiation. In my view the above analogy graphically illustrates the roller-coaster ride IT is about to get on.

Many of the needed skills and tools – such as smarter sourcing, resource pooling and service-oriented architectures – we have already been trialing over the past few years. Under the banner of agile development we have even had a first go at coping with rapid change, despite the overwhelming complexity of enterprise IT. In addition there are many manufacturing best practices, Lean being the obvious one, that IT can benefit from (see also ‘How lean is your cloud’).

The question in my view is: is IT ready and willing to give up its manufacturing role (provider of services) and transition into an orchestration/supply chain role? That essentially means engaging in both of the conversations mentioned above: making enterprise IT more efficient, while at the same time enabling the enterprise to leverage ready-made market/cloud services. I am interested in your thoughts and comments.