Tune into the Cloud: Imagine

Tune into: 2016

By the time you read this, the Christmas tree will probably be all lit up and it will have become clear whether the plan of radio jock Stefan Stassen – in response to the still unbelievable November events in Paris – to vote John Lennon's idealistic "Imagine" to the top of the annual Dutch Top 2000 has succeeded.

It may seem a stretch to bridge from terrorism to technology, as we do in this extra-long edition of Tune into the Cloud, but it is safe to assume that in the coming years technology and security will be more closely linked than ever. Just a few years ago futurologists got enthusiastic about drones able to deliver a pizza; today we read mainly about drones eliminating terrorists. Let's all hope that the first headlines about suicide drones are still far enough in the future to leave time for – equally technology-based – countermeasures.

When extrapolating this trend, we may in fact come full circle. Not too long ago the majority of technological innovation was driven (or at least financed) by the military, security and arms industry. Arpanet – the first implementation of TCP/IP technology and a predecessor of today's Internet – was built with funding from the Advanced Research Projects Agency (ARPA) of the US Department of Defense. Not much later it was the race to the moon and the dream of space exploration that led to many of the innovations we still use today. After the freezing of these budgets, the enterprise sector took over the role of engine of innovation. Do you remember where you saw your first color monitor, printer or laptop? Probably at the office of a bank, insurance company or other commercial enterprise. Today, however, most of us have wider bandwidth, larger screens (with higher resolution) and faster processors at home than in our corporate offices. With the advent of broadband Internet, the role of innovation driver has been largely taken over by the consumer industry, with giants like Amazon, Facebook and Google leading the pack. It was in fact Facebook (and not an IBM, NEC or HP) that was the major driver behind the Open Compute Project, an initiative that – through the deployment of open source concepts – seeks to determine the future of the data center.

But as mentioned, pretty soon the security industry may pick up the baton again.
As a result, the traditional enterprise sector has less and less say in innovation and is destined to solve its problems with the crumbs left over from the projects of the aforementioned consumer-oriented internet players. Cloud is actually a telling example of this. More and more companies that originally embraced the cloud as "a good idea that we are going to implement privately, because we are an enterprise and have enterprise-grade requirements" are now finding that the leading public clouds of today are actually more enterprise-grade than what they managed to build themselves. And the same goes for their traditional service providers, who said, "Oh, but if enterprises cannot do this themselves, we will do it for them; we are going to build hosted/virtual private clouds." These organizations, too, are finding that competing head-on with mega-providers like Amazon and Azure has slightly suicidal tendencies, and they are therefore turning en masse to managing customer workloads on top of the clouds of hyperscale providers and launching value-added services in areas such as access (network), management, governance and – yes, there it is again – security.

Security as a value-added service around hyperscale cloud providers may at first sight seem strange. Did we not just all conclude that hyperscale cloud providers (not to be confused with the traditional vendor on the corner who also offers some cloud services on the side) in most cases implement better and more comprehensive security measures than most internal enterprise data centers could even dream of? The Federal CIO of the United States even compared storing data in government data centers to saving money in a mattress, and storing data in the cloud to saving it in a professional banking facility. Now, comparisons involving banks and safety have been awkward in recent years, as the general public is confused whether they should be more worried about harm caused by bank robbers or by bankers. But the fact remains that the sentiment about clouds and security is rapidly changing. A trend that is likely to continue even if some large cloud disasters and break-ins – something that, given the sharp increase in cloud use, will inevitably happen – start to appear in the press.

In many of these incidents, the problem will not be the safety of the specific cloud; it will be the way in which that cloud was used. One of our official predictions for next year is in fact that "Through 2020, 95% of cloud security failures will be the customer's fault". In other words, customers should be more concerned about protecting themselves against phishing, skimming, social engineering and other operator errors than about robbers stealing data from the vaults of their hyperscale cloud providers. In addition, most companies use multiple clouds (for example the clouds of tens of SaaS providers), and protection across multiple clouds is becoming just as important as protecting a single cloud.

The product segment of the year (not an official category for us, so we can imagine it here on the spot) will likely be Cloud Access Security Brokers. We recently predicted that "By 2020, 85% of large enterprises will use a cloud access security broker product for their cloud services, which is up from fewer than 5% today". So this category of products is certainly worth looking into next year. The vendors are a mix of ripe and green (mostly green) with very diverse backgrounds. Some come from a network scanning angle, others from an identity management and single sign-on angle, and still others have a history in encryption or in governance, risk and compliance (GRC). The products provide visibility (which cloud applications do we use), compliance (who can use what and where is the data), data protection (including classification and encryption) and threat analysis (including behavioral analysis). You'd almost think we are talking again about the aforementioned military applications, but these are really tailored towards business users. Several large traditional vendors are therefore ready to enter this market: Microsoft recently acquired a CASB solution provider and IBM combined a number of existing and new solutions into a CASB bundle.

None of this changes the fact that the role of the enterprise in the technological landscape is becoming smaller. Last year we already predicted that by 2018 enterprises will own only half of the global data center infrastructure, and we recently added that only half of the information technology spending in companies will reside under the IT department. The other half falls directly under the business departments. Incidentally, that percentage today is already 42%. And that, in turn, has significant impact on the role of the CIO in the coming years, who will need to develop from "an internal service provider in charge of technology" into a "trusted ally" who operates on the basis of influence rather than control. Also because – as we mentioned in our symposium keynote – influence is much more scalable than control. And scalability is becoming increasingly important. The fact that enterprises will own a minority of data center technology investments and that CIOs will control less than half of their technology spend does not mean that enterprise technology budgets will become smaller in an absolute sense. It simply means that the new segments will grow much faster than the existing segments. You could compare it with the mainframe market, which did not disappear with the arrival of Unix or later with the advent of PCs, but relatively speaking became a smaller part of the total.

Which brings us to another aspect of scalability, namely the scalability of consuming all that technology. In the future we will not spend more time sitting behind (or carrying mobile) screens than many of us already do today. Consumption of all these technologies will happen differently: not through applications, or even apps, but much more proactively and at the same time much more behind the scenes, through direct advice or even direct execution where necessary. This means that over time Google Now may become more important to Google than Google Search, Cortana may become more important to Microsoft than Windows, and Siri more important to Apple than the iPhone. And yes, these are all cloud services.

Cloud services that will increasingly advise and even take decisions on behalf of their users ("I see you are flying to London this afternoon; you had better get into your (self-driving) car ASAP. I checked you in on seat 4C. And do put the privacy filter on your laptop, as diagonally behind you – in seat 5D – sits the VP of sales of our main competitor"). Besides these kinds of personal – and rather privacy-sensitive – services, cloud services will also increasingly be used to protect groups of us at the macro level. Let's hope that we manage to preserve a somewhat appropriate balance between privacy and protection in the process.

Imagine (1971) is the most famous solo song by John Lennon and, according to Broadcast Media, one of the 100 most performed songs of the 20th century. One of the most talked-about recent performances took place last month on a mobile piano in front of the Parisian Bataclan theater, and it partly formed the onset of the (as we now know successful) local campaign to have Imagine lead this year's edition of the Dutch Top 2000. So we can conclude with the words of John Lennon: "I hope someday you will join us, and the world will live as one".

Tune into the Cloud: Harbor Lights

Tune into: International data transfers
Travel broadens the mind, but data that travels too far can be stolen or snooped upon by foreign powers, and that is something especially Europeans do not find amusing. This tune is about Safe Harbor rules – or rather about the fact that these rules are no longer valid when it comes to governing international data transfers (for more in-depth analysis, see these earlier GBN blogs 1 and 2).

Now, discussing a news topic in a column that is first published in traditional print (in Dutch) and only afterwards appears here is a bit risky from a newsworthiness perspective. By the time the magazine is printed and distributed, a new set of safe harbor rules might already have been agreed with the US, making this essentially a non-topic. In hindsight we can however conclude that even traditional print presses still run significantly faster than the mills of European regulation, and that no new rules have been passed yet.

Today the whole European cloud industry is anxiously awaiting new developments, especially as more rules mandating in-country (or even in-county) data residency could be the lifebuoy that the European cloud industry has been hoping for. Such regulation could significantly strengthen European cloud providers when competing with the increasingly popular hyperscale cloud providers.

First a brief reminder of what the issue is. If a European organization is processing privacy-sensitive data in a country that offers less privacy protection than EU countries, then the organization must ensure that the data is adequately protected there. This could be achieved through safe harbor (based on self-certification), but also through the use of standard (approved) contract clauses or by implementing "binding corporate rules".

The current issue at hand is that the European Court has determined – likely not completely unrelated to the revelations of Edward Snowden – that safe harbor does not offer real protection from foreign government snooping in such data. In theory the court has only declared safe harbor invalid, but it seems highly unlikely that the court would come to a different conclusion regarding the still valid administrative alternatives, such as using standard clauses or binding corporate rules. And it would only take one of the 550 million European citizens to submit a similar complaint in order to get a similar ruling (though it is safe to assume this won't happen during the time it takes to publish this column).

At this moment the European Commission is doing its utmost to prevent individual member countries (or even worse, provinces/counties/states) from following in the footsteps of Schleswig-Holstein and issuing new individual regulations. Meanwhile many industry players are holding their breath over another lawsuit: a dispute between Microsoft and the American justice department regarding the handing over of email data from its European data center in Ireland in the context of a criminal investigation into drug trafficking.

European providers of cloud services are jumping all over this. A German ERP vendor (no, not the one you think, but a local SMB-focused player) started a mailing campaign in which it acknowledges that this is a complicated issue, but that customers who want to play it safe are best advised to simply keep their data in Germany under the supervision of a German service provider.

By the time you read this, there are undoubtedly several other European service providers sending out similar messages, while other European providers are leveraging the long-term partnerships they have developed with US-based SaaS and PaaS providers by taking on the role of local data custodian for these global providers.

And what are (potential) European cloud users doing? They are consulting their lawyers and awaiting further guidance before taking action. A prudent approach, but also one likely to slow down cloud adoption even further in Europe.

The slow-starting track Harbor Lights – from the unforgettable album Silk Degrees (1976) – is number 10 on Boz Scaggs' most-played list on Spotify. The more familiar (and more up-tempo) Lido Shuffle from the same album is listed as number one. Music travels more easily than data: Boz lived several years in Europe before returning to the US (1968) to tour with the Steve Miller Band. More recently Boz toured with "blue-eyed soul" performers Donald Fagen (Steely Dan, The Nightfly) and Michael McDonald (Doobie Brothers).

Tune into the Cloud: Simple Plan

Tune into: Cloud Tactics

The phrase that whoever "fails to plan" actually "plans to fail" also applies to cloud computing. Many organisations are therefore putting together a cloud strategy. The question is whether establishing a company-wide cloud strategy is the right way to go. After all, the use of cloud is not an end but a means, or as the IT manager of a large Dutch retail organisation recently succinctly put it: "For us cloud is not a strategy, but a useful set of tactics".

Still, I get asked several times a week by various organisations to review their newly formulated cloud strategy. In practice many of these strategies read more like a cloud primer (what is cloud, what kinds of cloud are out there, what are the potential benefits, etc.) than like a cloud strategy. Not that there is anything wrong with that, but these documents rarely get to the unique or specific advantages the company aims to achieve by deploying cloud. In fact, it is doubtful whether it is useful to describe generic cloud benefits at the level of company business strategy at all. In many cases the added value of a pragmatic playbook (a.k.a. a simple plan), one that describes pragmatically in which cases cloud computing should or should not be used, will be much higher.

We also still see too many organisations trying to formulate a cloud strategy before they have gained any real-life experience with cloud deployments. This resembles a person who wants to learn how to ride a bike, but who – instead of borrowing an old bike and finding a quiet parking lot to practice – starts by reading a book and asking a consultant about cycling theory. In the past, with major projects such as the introduction of ERP, this may have been the only way to go, as it took years of time and millions in investment before you had something half workable, and a representative test drive was simply not possible. The great advantage of the cloud, however, is that you can – at low cost and without major investments – get started directly. The quality of any formulated strategy (but also of any simple plan) will increase almost linearly with the amount of experience the organisation has actually gained with real cloud deployments.

Another best practice in this area is to create a short list of simple principles. For example: some years ago a major player in the energy sector formulated a simple "cloud-first … unless" policy. Everything that could conceivably go to the cloud should indeed go there, unless the system was on the company's list of mission-critical systems. Now, these mission-critical lists in the energy sector are often the size of a phone book, so in practice the policy declared most systems off-limits for any use of cloud. However, this approach greatly helped the organization move forward. It prevented lengthy academic debates about whether the cloud was safe or reliable. The clarity that these principles gave allowed the organisation to move much faster and gain a lot more cloud experience than any of its competitors, resulting in a tangible competitive advantage now that it is ready to deploy more broadly. Any such list of simple principles can (and should) of course be updated regularly based on actually gained experiences and insights.

Organizations that, instead of creating a formal cloud strategy, begin with a simple cloud playbook often move faster and are able to articulate better what exactly they want to achieve. The playbook phenomenon, mostly known from American football and defined by Merriam-Webster as "a stock of usual tactics and methods", can serve as a good model, even though in European sports we often feel we should leave more creativity and freedom to the individual players. The reality is that when it comes to cloud, many an executive sees unlimited freedom not as a good idea – hence the common request for a formal strategy – but, as stated, a pragmatic playbook based on a short list of simple principles may often constitute a better approach.

With 53 million streams, Simple Plan's biggest hit on Spotify is the catchy 2011 song "Summer Paradise" ("My heart is sinking, as I'm lifting up, above the clouds, away from you"). A song inspired by the surfing hobby of lead singer Pierre Bouvier, who was making plans while waiting for the next big wave.

Tune into the Cloud – Ghost Town

Tune into: Hybrid

While American Idol finalist Adam Lambert’s “My heart is a Ghost Town” is slowly moving down the international hit rankings, enterprises see something else slowly but surely turning into a ghost town. Namely their data centers.

For many IT managers, this is still a bit of a shock. Sure, most companies long ago stopped featuring their data center on a raised floor behind glass, right next to their reception area, but for many IT professionals a trip to the company data center still feels a little bit like coming home. I can still remember the disappointment of the IT operations manager of a financial institution located right in the heart of Amsterdam when his brand-new mainframe was delivered. That the new system occupied barely half of the available floor space was not what got him. It was the fact that the new million-dollar-plus system no longer came with enterprise-grade token ring, but instead was fitted with literally the same Ethernet card as the home PC that had just been delivered to him in the context of an employee PC project.

That type of consumerization (the use of technology originally designed for consumers by enterprise organizations) is likely to continue for some time. The now increasingly common transition to SaaS is a good example. Consumers were the first to – instead of buying software they subsequently had to install – simply access functionality via a browser (starting with Hotmail, Facebook, LinkedIn, etc.). Today the cloud roadmap of many companies consists largely of a long list of SaaS applications (Office 365, Salesforce.com, Concur, Workday, ServiceNow, etc.), all of which – after implementation – free up some space in the company's data center.

The impact of SaaS on enterprise data centers is arguably greater than the impact of migrating existing applications to IaaS platforms such as Amazon or Azure. In fact, this type of lift-and-shift migration is still pretty rare. And to make things "worse", we see fewer and fewer newly developed applications entering the data center, mainly because more and more companies developing their own applications are turning to public cloud platforms such as AWS, Azure and Force.com for this.

And so the data center gets emptier and emptier, until the moment it gets so small that it is no longer economical to run it yourself, after which the remainder is transferred either to outsourcing or to a co-location facility (a third-party data center). The added advantage of the latter is that those colocation facilities are "close to the cloud", as most of them offer fast and direct connections to leading cloud providers. This enables companies to – instead of using the "scary, open" Internet – use private connections with large bandwidth to connect their legacy systems with their shiny new cloud apps.

This may make you wonder whether there is not a possible scenario in which old and new harmoniously work together (a bit like multi-talent Adam Lambert did last year by going on tour with the legendary rock band Queen). Well, of course there is. Proponents of this approach even coined a fancy term for it: hybrid. Hybrid can be interpreted in many ways: as a combination of cloud and non-cloud, as a combination of private and public cloud, or more pragmatically as any combination of a bit of this and a bit of that. If we look at cars, we have discovered that hybrid vehicles, such as the locally highly subsidized plug-in hybrids, often do not deliver on the efficiencies they promise (at the time of writing this referred purely to driver behavior, as many drivers refuse to charge the battery and instead drive around on fossil fuels all day, every day), while in the IT space the whole idea of hybrid is still largely a (yet to be proven) promise.

For true hybrid success the old and new will have to cooperate really closely, and just as established artists took a good number of years to accept the amateurs winning Idols as true musical peers, we see traditional IT vendors struggle with the reality of a new generation of competitors coming from the consumer space. A new generation who – by the way – are more than happy to point out that any hybrid scenario is best seen as a transitional strategy towards a full cloud world.

Ghost Town (2015) is a track from Adam Lambert's third album, "The Original High". His second place in the 8th edition of American Idol (thanks to his rendition of Bohemian Rhapsody) was followed by several hit songs, a place as judge and mentor on American Idol 13 and 14, and of course the unique collaboration with Queen, where Lambert followed in the vocal footsteps of music legend Freddie Mercury.

Tune into the Cloud – Losing My Religion

Tune into: a cloud mindset

One of the tenets of the cloud religion is that it should be possible – through the use of intelligent software – to build reliable systems on top of unreliable hardware, just as you can build reliable and affordable storage systems using RAID (Redundant Arrays of Inexpensive Disks). One of the largest cloud providers even evangelizes to its application development customers that they should assume that "everything that can go wrong, will go wrong". In fact, their SLA only kicks in after a minimum of two zones become unavailable. Quite a surprising, but nonetheless typical, cloud approach.
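This "assume everything fails" mindset translates directly into application design: the application, not the hardware, is responsible for surviving the loss of an individual zone. A minimal sketch of the idea follows; the zone names, the failure model and the helper functions are invented for illustration and are not any provider's actual API:

```javascript
// Illustrative "design for failure": attempt the same operation in
// several availability zones, falling back to the next zone when one
// is unavailable. Zone names and failure simulation are hypothetical.
function callZone(zone, healthyZones) {
  if (!healthyZones.has(zone)) {
    throw new Error(`zone ${zone} unavailable`);
  }
  return `handled by ${zone}`;
}

function resilientCall(zones, healthyZones) {
  for (const zone of zones) {
    try {
      return callZone(zone, healthyZones);
    } catch (err) {
      // A single zone outage should not take the application down:
      // note the failure and try the next zone.
    }
  }
  throw new Error("all zones unavailable");
}

// With zone-a down, the request still succeeds via zone-b.
const result = resilientCall(
  ["zone-a", "zone-b", "zone-c"],
  new Set(["zone-b", "zone-c"])
);
console.log(result); // "handled by zone-b"
```

The point is not the few lines of retry logic, but the design stance: reliability is engineered in software, on top of components that are allowed to fail.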

Nowadays most of the large cloud providers buy very reliable hardware. When running several hundred thousand servers, a failure rate of 1 PPM versus 2 PPM (parts per million) makes quite a difference, and using memory chips that are too cheap can cause a lot of very hard-to-pinpoint problems. These providers also increase uptime by buying simpler (purpose-optimized) equipment and by thinking carefully about what exactly is important for reliability. For example: one of the big providers routinely removes the overload protection from its transformers. They prefer that occasionally a transformer costing a few thousand dollars breaks down to regularly having whole aisles lose power because a transformer manufacturer was worried about possible warranty claims. And not to worry, they do not remove the fire safety breakers.

With that we are not implying that the idea of assuring reliability at higher levels of the stack than hardware is no longer necessary. Sometimes even the best-quality hardware can (and will) fail. Not to mention human errors (oops, wrong plug!) that still, on a regular basis, take complete data centers out of the air (or rather, out of the cloud).

The real question continues to be what happens to your application when something like this occurs. Does it simply remain operational, does it gracefully degrade to a slightly simpler, slightly slower but still usable version of itself, or does it just crash and burn? And for how long? For end users bringing their own applications to the cloud, it is clear where the responsibility for addressing this lies (with themselves). But end users who outsource their applications to a so-called "managed cloud provider" may (and should) expect that the provider doing the managing takes responsibility. Recently several customers of a reputable IT provider – one that earned its stripes largely in the pre-cloud era and now offers cloud services from a large number of regionally distributed data centers – lost access to their applications for several days because one operator in one data center did something fairly stupid with just one plug.

Luckily we do see the rate of such human mistakes decline as cloud providers gain more experience (and add more process). Experience counts, especially in the cloud. But an outage like this simply is not acceptable. If a provider boasts that it has more local cloud data centers than others, but is then unable to move specific customer workloads to those other data centers within an acceptable timeframe, it is not really a "managed" cloud provider. Simply lifting and shifting customer applications to a cloud instance without "pessimistically" looking at what could go wrong is as stupid as putting all your data on a single inexpensive disk without RAID and without backup. And if re-engineering the applications is too expensive to create a feasible cloud business case, then users should ask themselves whether cloud is really the right solution in that case.

In the words of R.E.M.: "I think I thought I saw you try" is really not enough assurance for success. The cloud is not about technology or hardware; it's about mindset. And providers who do not change their mindset may see their customers losing faith in the cloud (or at least in their cloud). Quite quickly.

Losing My Religion (1991) was the biggest commercial hit of alternative rock band R.E.M. The song was written more or less accidentally, as the band's guitarist was trying to teach himself to play a secondhand mandolin he had bought on sale.

Tune Into the Cloud: The Story So Far

In 2014 I started publishing my series of "Tune into the Cloud" columns on the Gartner Blog Network. Below are some of the highlights so far:

Something Got Me Started – Tune into: the need for speed

  • On how time to market – enabled by ever more productive cloud platforms – is ruling the battle for cloud dominance, and how virtual servers are increasingly becoming table stakes when it comes to cloud competition.

Thinking out (c)loud – Tune into: Cloud Migration?

  • On how there may be considerably less love for "lift and shift" cloud migration than many in the industry may think.

Price Tag – Tune into: cloud margins.

  • On how margins in the cloud are not necessarily lower, but significantly different, and why traditional vendors are having such a hard time coping with this.

Cloud.forsale – Tune into: Cloud market Consolidation

  • On how the cloud market is transitioning to the next phase, pretty much exactly as visionary Geoffrey Moore described it several decades ago in "Crossing the Chasm".

Atlantic Crossing – Tune into: Building specific solutions, not generic technology

  • On how, prior to (and arguably also during and after) market consolidation, building specific solutions that address specific problems is the only way to gain a viable share of the market.

Hello World – Tune into: Coding

  • On how the next generation of workers needs to move from "operating" to "instructing" computers to have any chance of sustainable employment.

God is a DJ – Tune into: Cloud Services Brokering

  • On how IT departments and IT vendors moving towards the cloud will need to master making a living brokering other people's IP, just like today's generation of rockstar DJs.

Dock of the Bay – Tune into: Containers

  • On how containers are claiming their place in the cloud landscape and what old and new problems they are addressing.

The Billionaire Boys Club – Tune into: Go To Market Success

  • On how it is hard to predict a hit, both in the music and the cloud industry, but how a lean startup approach can help.

Locked out of Heaven – Tune Into: Cloud Lock-out

  • On how the risk of vendor fall-out and contract breach in the cloud can be much more immediate than in traditional IT.

Blow – Tune into: Putting customers first

  • On how not all cloud offerings (nor their vendors) are created equal when it comes to (contract) flexibility and elasticity

Alors On Dance – Tune Into: Collaborative Development

  • On how smart cloud providers start with simple but rock solid functions that then are rapidly turned into new value added services

And then there were Three… – Tune into: Software Defined Networking

  • On how Software Defined Networking makes the network the third programmatically controllable resource – next to compute and storage – in cloud infrastructure.

More to come in the upcoming months, all to be published first at the GBN (Gartner Blog Network) and rapidly after that here.


The official Tune into the Cloud Playlist

Tune into the Cloud: Something Got Me Started

Tune into: the need for speed

By now it is widely acknowledged that cloud enables a fast (agile) start. But more important than a fast start is getting results quickly. We are then talking about high-productivity platforms, a category of PaaS. Funnily enough, several cloud providers – such as Microsoft and Google – launched a PaaS platform first and only later – when they saw how quickly Amazon's virtual machine based IaaS services were becoming popular – took a technical step back to launch a lower-level IaaS service.

Achieving cloudiness aspects such as scalability, elasticity, pay-per-use and service orientation is however quite difficult if your building blocks consist of merely virtual machines and load balancers. Netflix is an example of a company that has developed this capability into an art form, but this was by no means easy to achieve. (In fact Netflix has made many of the tools it developed for this available as open source offerings, so others do not have to reinvent the wheel, but even when using those tools this remains a complex endeavor.)

Meanwhile we see most leading and some innovative cloud providers shift gears. Instead of asking customers to first develop their functionality in virtual machines and subsequently invest significant effort in making sure they have the right number of these machines running at any time, providers increasingly let their customers define functionality once, after which the provider takes care of managing the execution.

Good examples of this are Amazon’s recent Lambda service and Joyent’s longer-available Manta service. Here the user can define (for example in JavaScript) what needs to happen when an object is saved or opened. For example, the user can specify that each time a new photo is saved, a thumbnail and a list of search terms should automatically be created. The developer does not have to ask himself whether users will be storing 100 or 100 million photos per hour and whether he needs to have 1 or 1,000 VMs in the air at any particular moment to support that. That has now become the concern of the cloud provider.

In this architecture, the user simply defines micro services and specifies what events trigger a need for these services to be carried out. The rest (the execution) is on the provider. This demands a significantly different way of designing and developing applications. Some of us may remember this from the time of Visual Basic and Oracle Forms, where we directly linked functions to events (fields and buttons) in the user interface. In the cloud the events triggering these actions can be just about anything (an object being stored or opened, an IoT sensor firing, a log entry occurring, a new user joining, etc.).
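The pattern of binding micro services to events can be sketched in a few lines. This is a minimal illustration only: the `on` and `fire` helpers are invented for this example and do not correspond to Lambda’s, Manta’s or any other provider’s actual API – in a real platform the provider, not your process, decides when and at what scale the handlers run.

```python
# Minimal sketch of event-driven micro services: functions are registered
# against event types; the "provider" runs them whenever an event fires.
from collections import defaultdict

_handlers = defaultdict(list)

def on(event_type):
    """Decorator: register a function to run whenever event_type fires."""
    def register(func):
        _handlers[event_type].append(func)
        return func
    return register

def fire(event_type, payload):
    """Stand-in for the provider: execute all handlers for this event."""
    return [handler(payload) for handler in _handlers[event_type]]

@on("photo_stored")
def make_thumbnail(photo):
    # Placeholder: a real handler would resize the actual image bytes.
    return {"thumbnail_of": photo["name"], "size": "128x128"}

@on("photo_stored")
def extract_search_terms(photo):
    # Placeholder: a real handler would run image analysis.
    return {"terms_for": photo["name"]}
```

Whether one photo or a hundred million arrive per hour, the developer’s code stays the same; only the provider’s scheduling of the handlers changes.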

Look ma, no servers!
With this approach, users can quickly create largely ‘serverless’, event-driven applications to (eventually) run at cloud scale. And for companies that want to go even faster, there are über-platforms – such as Omnifone and Twilio – emerging. These often run on top of the large IaaS providers and are focused on accelerating the increasingly important ‘time to market’ aspect in their area. Areas such as creating music applications (à la Spotify, the European music streaming app) or interactive social communication applications (à la Uber, the much talked-about taxi phenomenon).

They enable this by not just providing appropriate technology (such as music compression and delivery) but also by making relevant content readily available. Omnifone, for example, comes with predefined contracts with most music publishers, and Twilio supports – out of the box – a large number of mobile telco providers across many countries. Taxi phenomenon Uber is a good example of how much difference time-to-market can make for mind- and market share and – not unimportantly – for stock valuation. Traditional cloud infrastructure providers who still think that they – by adding yet another VM size or by switching to flash storage – can compete with these increasingly time-to-market focused cloud platforms are in for a rude awakening.

The flip side of using these high-productivity platforms is however almost always a significantly higher degree of vendor lock-in. Customers must therefore ask themselves whether they are willing to give up a bit of their freedom in exchange for shorter time-to-market and higher productivity, just like Simply Red sang in “Something Got Me Started”: “I’ll give it all up for you (3x). Yes, I would!”.

With “Something Got Me Started” (1991) Simply Red – until then mostly known for the ballads of lead singer Mick Hucknall – showed that they could indeed turn it up a notch or two in terms of speed. Although the song – unlike Simply Red’s earlier hits – did not make it into the chart top 10, it did rule the club and dance charts for many months in a row.

Tune Into the Cloud: Thinking out (c)loud

Tune into: Cloud Migration?

Unlike in the current hit song “Thinking Out Loud” – where singer-songwriter Ed Sheeran declares his love will last well into his seventies – it remains to be seen how long the love will last when it comes to “Thinking of Cloud”. IT loves are often short-lived. In fact so short, we call them hypes. At Gartner we even created the hype cycle, a curve that shows how after a first spark (the technology trigger) many new infatuations quickly reach the peak of inflated expectations, followed by a period of considerable disillusionment. In that hype cycle, cloud computing is now at the beginning of the plateau of productivity (or as Sheeran would more poetically say: the beginning of lasting love). So cloud indeed seems to be a keeper. Which makes it an exception in IT land; many hypes never reach this plateau and leave by the side door, unlovingly labelled “obsolete before plateau”.

A platform that also proved to be a keeper and that still enjoys a large loyal following – of which many indeed are approaching their seventies – is the mainframe. And although I know several mainframe veterans who claim that what they once built on their 360 architecture must have been one of the first true incarnations of cloud thinking, the mainframe may seem out of place in a blog dedicated to cloud. But the transition from mainframe to what came next – the generation of distributed, RISC-based or open systems – may very well prove a model for how current workloads may migrate (or not migrate) to the cloud.

The mainframe namely – despite the success of the new distributed generation – never disappeared. In fact the revenues of the sector are now bigger than in the glory days of the 360 and 370 architectures. The next generation did not grow big through the migration of existing application code to the new platform. The growth of open systems was fuelled by a generation of new applications – such as SAP R/3 – that were simply only available or feasible on the new type of infrastructure. Not that there were no brave attempts at “lift and shift” migrations in those days. We even had a name for it: “downsizing”, or in more politically correct terms “rightsizing”. Many of these once-migrated applications have since moved back to the platform for which they were originally designed (because they simply fitted better there) or been replaced altogether by new applications built specifically for the new architecture. Another major driver (maybe even the biggest) in this process was that these new applications were now almost always standard packages, and no longer custom applications developed specifically for the company using them.

If we take history as a guide for the upcoming transition to cloud, then we should expect cloud growth to be fuelled more by the uptake of a new generation of applications developed specifically for the cloud than by the migration of legacy workloads (which indeed is what we have clearly seen in the cloud story so far). And these new applications will again be a different kind of application. Namely multi-tenant – or rather multi-enterprise – SaaS applications. Multi-tenant applications share the underlying infrastructure and middleware across multiple clients. Multi-enterprise applications in addition share the data (content) and often the business processes across multiple clients. Think of examples like Google Maps and LinkedIn. If Google Maps only shared computing capacity and application code, but every user had to bring his own maps (including photos of every street corner), or LinkedIn provided only its algorithms but every company still had to enter the résumés of each candidate and employee, these cloud services would not be nearly as useful or as popular as they are today. And we see the same with B2B applications offering shared purchasing or shared distribution planning.
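The multi-tenant versus multi-enterprise distinction can be made concrete with a small sketch. The class names and data model below are invented purely for illustration and are not taken from any real product: the point is only that a multi-enterprise application adds a shared data store on top of the per-tenant stores.

```python
# Multi-tenant: tenants share the application code and infrastructure,
# but each tenant keeps its own private data.
class MultiTenantApp:
    def __init__(self):
        self.tenant_data = {}          # one private store per tenant

    def add(self, tenant, record):
        self.tenant_data.setdefault(tenant, []).append(record)

    def records(self, tenant):
        return list(self.tenant_data.get(tenant, []))

# Multi-enterprise: tenants additionally share (part of) the data itself,
# so every new contribution benefits all participants -- compare the shared
# map data in Google Maps or the shared profiles in LinkedIn.
class MultiEnterpriseApp(MultiTenantApp):
    def __init__(self):
        super().__init__()
        self.shared_data = []          # one common store for everyone

    def contribute(self, record):
        self.shared_data.append(record)

    def records(self, tenant):
        return self.shared_data + super().records(tenant)
```

In the multi-enterprise variant a record contributed once is visible to every tenant, which is exactly why such services grow more useful as more participants join.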

Personally, I expect a lot more cloud growth from net new projects than from “lift and shift” cloud migrations. Although there will undoubtedly be some valiant efforts, and many a service provider will earn a living wage accommodating these projects. But a few decades from now we won’t remember the cloud because of these migration projects; our lasting cloud love will be based on the next generation of applications that came with it.

Singer-songwriter Ed Sheeran achieved overnight success and six times platinum with his first album in his native Britain. Meanwhile, he has conquered the world by creating new material for global acts such as One Direction and Taylor Swift and by performing his own songs such as “Thinking Out Loud” himself.

Tune into the Cloud: Price Tag

Tune into: cloud margins.

One of the most famous quotes of Amazon CEO Jeff Bezos is “Your margin is my opportunity”. This illustrates nicely how in the world of Amazon (and in the words of UK pop idol Jessie J*): “It’s not about the money, money, money, we don’t need your money, money, money, we just wanna make the world dance!”

In the cloud this – somewhat unworldly – attitude is fairly normal. Think of Facebook buying WhatsApp for about $16 billion, only to make it available for free to large parts of the world via Internet.org. For more traditional competitors this kind of “semi-philanthropic” way of doing business takes some getting used to.

Indeed, companies like IBM have been working diligently to improve their profit margins, quarter after quarter and year after year. Divisions that could not make the cut towards achieving double-digit margins were divested, one might say without mercy. At IBM we saw this first with the divestment of the PC division and not much later the x86 server division to Lenovo, while internally more and more emphasis was put on the much more lucrative software area.

Although SaaS at first glance seems very similar to software, profit margins can be substantially lower. With traditional software – after recouping the initial development costs – any additional sale goes almost directly to the bottom line. Almost, as the cost of delivery (formerly 10 cents for a CD, now 1 cent per download) and of course the cost of sales still have to be subtracted. Cost of sales, however, typically does not get measured in cents, as many software companies employ senior sales staff (on equally senior compensation plans) or sell through partners, who need and expect significant sales margins. And that is before taking any marketing costs into account.

On the other hand, the significant costs customers typically incurred after buying software (acquisition of hardware and middleware, paying for implementation and installation services and of course ongoing operations costs and support fees) all came from the customer’s budget. These were not included in the license fees of the software vendor and thus did not impact its margins. With SaaS it’s a different story.

When a SaaS vendor becomes successful and sells thousands of user subscriptions, it must arrange compute, storage, network and (hopefully also some) backup capacity for all those users out of its own pocket. The margins may still be better than on the typical sale of hardware (with Apple as the significant industry exception), but the cost of all this can quickly add up. And – perhaps contrary to what many believe – SaaS does not sell itself. SaaS requires comparable sales and marketing effort. So the move to SaaS won’t really help traditional suppliers preserve their margins, let alone increase them.

Meanwhile the margins of traditional outsourcers and hosting providers are also under increasing pressure from the cloud. In traditional outsourcing and hosting deals the costs for hardware, licensing (middleware, databases, management tools, etc.) and manpower typically each accounted for about 30%, leaving a reasonable but not enormous theoretical margin of 10%. But cloud providers have a very different cost structure and are not averse to adjusting their prices accordingly. First, cloud providers define service almost exclusively as self-service, which saves them a significant amount of manpower. The delivery of anything requested through these self-service interfaces is also largely automated (a one-time development investment whose cost quickly approaches zero at hyperscale). Paying significant fees for commercial licenses and management tools is likewise not in the vocabulary of most pure-play cloud providers. They prefer to use open source, or if that happens not (yet) to be available for a certain area, they are perfectly happy to develop it themselves. Finally, hardware is also a different story. Hyperscale providers are typically eligible for hyperscale discounts. And more importantly – thanks to their scale – they can drive their hardware utilization levels to unprecedented heights.
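Some back-of-the-envelope arithmetic makes the shift in cost structure tangible. The traditional 30/30/30 split comes from the text above; the hyperscale figures are purely illustrative assumptions, not published numbers of any provider.

```python
# Back-of-the-envelope margin comparison, per 100 units of revenue.
# Traditional outsourcing: hardware, licensing and manpower each ~30%.
traditional_costs = {"hardware": 30, "licensing": 30, "manpower": 30}
traditional_margin = 100 - sum(traditional_costs.values())   # 10 points

# Hypothetical hyperscale provider (illustrative numbers): self-service
# and automation shrink manpower, open source removes license fees, and
# volume discounts plus high utilization cut the hardware bill.
hyperscale_costs = {"hardware": 20, "licensing": 2, "manpower": 8}
hyperscale_margin = 100 - sum(hyperscale_costs.values())     # 70 points

# Even after cutting the price in half, the hyperscaler keeps a fair
# margin, while the traditional provider's 10 points would be wiped out.
discounted_price = 50
hyperscale_margin_after_cut = discounted_price - sum(hyperscale_costs.values())
```

The point is not the exact numbers but the asymmetry: the hyperscaler can price below the traditional provider’s break-even and still earn a fair, if not huge, margin.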

By then, the theoretical 10% margin of traditional service providers has already largely evaporated, even if these hyperscale cloud providers maintain a fair – but not huge – margin on their offerings.

* “Price Tag” (2011) is a song by British singer-songwriter-performer Jessie J, whom millions saw perform live during the closing ceremony of the Olympic Games in London, which coincided with the first official reunion of the Spice Girls, a group which – unlike today’s artists – could still live off the sale of CDs.

PS This column was published in Dutch at Cloudworks.nu on April 16th, a week prior to Amazon’s earnings announcement in which it for the first time broke out Amazon Web Services revenue, operating margins and investments.

Tune into the Cloud: Cloud.forSale

Tune into market consolidation

Last time I looked it was still there. The domain name: cloud.forsale. And more or less against my better judgment, I contemplated putting in a bid. Cloud for Sale namely makes me think of the somewhat melancholic “House for Sale” (Lucifer, 1975), about two people who started in good spirits towards their new future but then – when it did not turn out as expected – put out a for-sale sign. And that is likely what we will soon see in the cloud market too. Providers that despite their best intentions and putting in a lot of energy and innovation won’t be able to make it and therefore will put their estates up for sale.

It’s not like we could not have seen this coming. In 1991, Geoffrey Moore already described in his inimitable “Crossing the Chasm” the various phases that vendors of new technological innovations typically go through. Reading it today, you would almost think he wrote it specifically for the cloud IaaS market. After bridging the chasm between early adopters and the early majority by offering specific, vertical solutions (see my earlier column “Atlantic Crossing”), the market begins to realize that these are not a bunch of cool new point solutions but the dawn of a new way of doing things.

At that moment the market dynamics change completely. For the potential winners (gorillas, as Moore called them) the priority becomes one thing only: enabling massive growth. They typically do this by adding lots of sales people (often recruited from more traditional, now struggling competitors), by rapidly creating an ecosystem of partners and by ensuring their existing production systems keep running at full throttle while they add new capacity as fast as possible.

Employees of the gorillas of earlier market booms have – besides a burnout or an ulcer – in most cases some nice perks – such as a classic car or a second home in an exotic location – to show for their outrageous work effort and work ethic during these periods of extreme growth. But for the other market players, it is often the harsh end of an illusion. Some may squeeze some extra mileage out of their offering by making it “plug compatible” with the market leader. Others reposition themselves as niche providers (sometimes transferring their niche solutions onto the now leading infrastructure products of their former opponents) or go through life as implementation, consulting or brokering partners of the winning product lines.

The signs of this are currently written all over the walls of the cloud market. It started with small providers being taken over by somewhat larger but also not overly successful providers (successful providers are too busy meeting their booming demand to take time for acquisitions). We also see in large organizations an increasingly rapid succession of an amazingly wide variety of leaders for the cloud initiative. Meanwhile at the successful providers the same people remain in their roles, often for decades, becoming more and more successful as their product continues to grow and gain share.

“House for Sale” (without the dot, as domain names did not exist yet in 1975) was the first song and also the first international hit of Lucifer, a Dutch formation with lead singer Margriet Eshuijs and drummer, later TV host, Hennie Huisman.