Audits and Certificates Won’t Erase Cloud Security Concerns

In every cloud survey, security consistently comes out as an inhibitor to cloud adoption. Even though this has been the case for several years, many feel that it is a temporary barrier which will be resolved once cloud offerings get more secure, mature, certified, and thus accepted. But is this indeed the case or do we need another approach to overcome this barrier?

During a recent cloud event, two speakers from a large accounting and EDP auditing firm took the stage to discuss the risks of cloud computing. While one speaker dissected the risks for both consumers and providers of cloud services, the second speaker discussed the various certifications and audit schemes that are available in each area. They acknowledged that with the currently available certifications, not all risks were covered, but their envisioned remedy was even more comprehensive certifications and audits. Now, this may come as no surprise given the speakers’ backgrounds, but more “paperwork” simply won’t address what IT pros are really worried about. Let me try to explain my thinking, including how the recent WikiLeaks events influenced this.

Security is often cited as a concern with regard to cloud adoption. My view is that the apprehension is more about the fear of losing control (not being able to restore service when needed) than about the fear of losing data. Fear of losing data can be addressed by cloud providers through implementing security solutions as described in various posts on the CA security management blog, but fear of losing control cannot.

The big difference between traditional IT and cloud computing is that cloud computing is delivered “as a service.” With traditional IT we bought a computer and some software. In case it did not work we could fix it ourselves (sometimes a firm kick would suffice). No matter what happened (good or bad), we were the master of our own destiny. And even with traditional outsourcing, we often told the outsourcer “what to do,” and in many cases “how to do it.” If push came to shove and the outsourcer really screwed up, we could — at least in theory — still say “Move over, let me do it myself.”

When something is delivered as a service, there is no equipment to kick and we can no longer say “Move over, I’ll do it myself.” We likely won’t even be allowed to enter the room where the equipment is located or get access to the underlying code and data. If your biggest customer (or your boss, or the boss of your boss) is on the phone screaming at you, that is not a position many people want to find themselves in. And believe me, showing all the certificates and audit reports that your vendor accumulated and shared with you will not quiet them down, even assuming that the vendor at that moment is doing its best to fix the problem. But what if the vendor has made a conscious decision to discontinue rendering the service – as seems to be the case with WikiLeaks?

Now you may feel your organization would never do something that would warrant or even cause such behavior by your vendor. But what if a judge ordered your vendor to discontinue the service? That can happen and has happened, sometimes because of really small legal technicalities or unintended incidents like a server sending spam or an employee collecting illegal content on a company server. Google and other mail providers have been ordered to cease mail services to both consumers and businesses, and have complied. Sure, you can go to court and appeal, but will that be quick enough?

For each “as a service” offering we will need to evaluate what is a reasonable risk and what to do to remedy the unreasonable risks. What is reasonable will very much depend on the type of industry. In the following examples we look at scenarios of the service not working (outage) and of the data being stolen. Some incidents the business may hardly notice, others can be severely inconvenient, while others could jeopardize overall business continuity (not being able to invoice, or missing a deadline on a project with severe penalty clauses).

  • Email: If email is down but phones, instant messaging, text messaging and maybe the occasional fax are still available, then a few days outage may be reasonable (for some companies). Provided we get all of our email back at the end of the outage, regardless of whether we moved to a new provider or the old one finally got it fixed or switched us on again. With regard to theft: nobody likes their personal conversations discussed in public (see again the WikiLeaks example) so measures like encryption, digital signing, using SSL and working with reputable (OK, let’s call them certified) vendors are in order.
  • CRM: This system tells us what our sales team has been up to. Before we implemented CRM (fairly recent in many cases) we had limited insight into sales activities, so it seems reasonable that a week of outage is fine (again, depends on your industry). With regard to theft, these are often records about people, so legal and privacy requirements apply, not to mention that you may not want this data to show up at your direct competitor.
  • Invoicing, order intake, reservation management: Very much depends on the industry, but for some industries a single hour of outage at the wrong moment can already mean bankruptcy. In this case, you probably want to have a hot swappable system, preferably at two different “as service” vendors.
  • Project management: Depends; if you are a system integrator with penalty clauses or an innovator rushing towards a product launch, it may be critical.
  • Bookkeeping: Depends (before end of month closing?).

I could go on and on, but I’m sure you get the point. For each service that you would consider moving into the cloud, you have to determine the importance, criticality and impact of disruptions (I am sure you do this all the time for all your services ;-)). This exercise may actually save you lots of money. Most services are not under-provisioned but over-provisioned. In case of doubt, IT tends to move services to the more secure, more reliable, more failover equipped platform. A famous example is the company that was running its internal employee entertainment Tour de France betting system on a hot swappable dual everything nonstop system.

Next, for each service you must determine what a reasonable recovery period is, and how to implement it. It could be simple source code escrow (with the right to keep using the code) and a failover contract with a nearby infrastructure provider. Or it may require having a fully up-to-date system image ready to provision within an hour. For other scenarios, you may be running two instances of your service or application in parallel at two separate service providers, on different grids, different networks and in different jurisdictions. And for some you may not bother. It’s like insurance: most people insure their house against fire (as they could not overcome the financial impact if it burned down) but many do not insure their phones or cars against theft or damage (as they can afford to buy a new one if needed without going bankrupt, even though it may be “severely inconvenient”). There is also such a thing as being too cautious. I remember at my first employer, the bookkeeping department of the local plant would travel separately to the annual company outing (two by train and two by car), even though we had 12 factories located within a hundred miles, each with four bookkeepers. I am sure we would have closed the books somehow in case of a travel mishap.
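
As a purely illustrative sketch (the service names, outage windows and recovery strategies below are invented for illustration, not recommendations), such an evaluation could be captured in a few lines of code, which also makes it easy to revisit as the business changes:

```python
from dataclasses import dataclass


@dataclass
class ServiceRisk:
    name: str
    max_outage_hours: int      # how long the business can cope without it
    recovery_strategy: str     # what we prepared for the unreasonable risks


# Hypothetical classification; every business will fill this in differently.
portfolio = [
    ServiceRisk("email",        72, "wait for provider, keep phone/IM as fallback"),
    ServiceRisk("crm",         168, "weekly export, restore at alternate provider"),
    ServiceRisk("invoicing",     1, "hot-swappable instance at a second provider"),
    ServiceRisk("bookkeeping",  48, "source code escrow plus standby infrastructure"),
]

for svc in portfolio:
    tier = "critical" if svc.max_outage_hours <= 4 else "tolerable"
    print(f"{svc.name:12} {tier:9} -> {svc.recovery_strategy}")
```

The point is the exercise, not the tooling: once acceptable outage windows are written down, the right level of (and budget for) redundancy per service becomes much easier to defend.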

Hopefully most of the services currently running in the cloud (CRM comes to mind) fall into the “severely inconvenient” category. If they are business critical, you hope the companies have a plan B that allows them to move these jobs quickly to another cloud if the need arises.  To be able to do so easily, we will need two things: Standards that enable more portability than we have today, and automation tools that allow us to do this “semi-auto-magically.” Our accountant friends may claim you also need certifications on both the primary and the backup vendors, but I am sure these will remain in the desk drawer when push comes to shove.

A final thought on assuring your services in the cloud.  On the insurance front we see that many people do not insure their house against natural events such as earthquakes, first because it is often not possible or affordable, but also because — as my father used to say — “if heaven drops down, we will all be wearing a blue hat.” Imagine if a video on-demand provider is the only one still running after an earthquake, how much good would it do them? In other words, it is all about being pragmatic.

P.S.  During my economics study, at some point you had to decide whether to major in accounting or in IT. Guess what the more pragmatically inclined folks chose? 😉

Cloud Predictions Beyond 2011 – 2: The need for a cloud abstraction model

If the cloud is to fulfill its promise, we need to start thinking of it as a cloud, not as an aggregation of its components (such as VMs, etc.).

As mentioned in a previous post, I’ll use some of my upcoming posts to highlight some cloud computing “megatrends” that I believe are happening – or need to happen – beyond 2011. One of these is the creation of an “abstraction model” that can be used to think about (and eventually manage) the cloud. A nice setup for this was provided by Jean-Pierre Garbani of Forrester, who in a recent post at Computerworld UK talks about the need to Consider the Cloud as a solution not a problem.

 

In it he uses the example of the Ford Model T – which was originally designed to use the exact same axle width as Roman horse carriages, until someone came up with the idea of paving the roads – to argue that customers should not “design cloud use around the current organization, but redesign the IT organization around the use of cloud .. The fundamental question of the next five years is not the cloud per se but the proliferation of services made possible by the reduced cost of technologies”.

I could not agree more: it is about the goal, not about the means. But people keep thinking in terms of what they already know. It was Henry Ford who once said, “If I had asked people what they wanted, they would have said faster horses.” Likewise people think of clouds, and especially of Infrastructure as a Service (IaaS), in terms of virtual machines. It is time to move beyond that, think of what the machines are used for (applications/services) and start managing them at that level.

Just like we do not manage computers by focusing on the chips or the transistors inside, we should not manage clouds by focusing on the VMs inside. We need a model that abstracts away from this. Just as object-oriented models free programmers from having to know how underlying functions are implemented, we need a cloud model that frees IT departments from having to know on which VM specific functions are running and from having to worry about moving them.
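
To make this a bit more tangible, here is a minimal, hypothetical sketch (the interfaces and names are my own illustration, not any vendor’s actual API) of what managing at the service level rather than the VM level could look like:

```python
from abc import ABC, abstractmethod


class Provider:
    """Stand-in for any cloud or grid; placement details stay hidden here."""

    def run(self, service: str, version: str) -> None:
        print(f"[provider] placing {service} v{version} on whatever VMs fit best")

    def ensure_capacity(self, service: str, units: int) -> None:
        print(f"[provider] resizing {service} to {units} capacity units")


class CloudService(ABC):
    """Service-level abstraction: callers say what they need, not where it runs."""

    @abstractmethod
    def deploy(self, version: str) -> None: ...

    @abstractmethod
    def scale_to(self, units: int) -> None: ...


class InvoicingService(CloudService):
    def __init__(self, provider: Provider) -> None:
        self.provider = provider

    def deploy(self, version: str) -> None:
        self.provider.run("invoicing", version)

    def scale_to(self, units: int) -> None:
        self.provider.ensure_capacity("invoicing", units)


if __name__ == "__main__":
    invoicing = InvoicingService(Provider())
    invoicing.deploy("2.1")   # no mention of VMs anywhere on the calling side
    invoicing.scale_to(10)
```

Nothing on the calling side refers to a VM, which is exactly the point: the provider remains free to place, move or replace the underlying machines without the consumer of the service noticing.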

In that context Phil Wainewright also wrote an interesting post, “This global supercomputer, the cloud,” a post that originated 10 years ago. First, it is amazing that the original article is still online after 10 years – imagine what it would take to do that in a pre-cloud era. Second, the idea of thinking of the cloud as a giant entity makes sense, but I disagree with him when he quotes Paul Buchheit’s statement on the cloud OS: “One way of understanding this new architecture is to view the entire Internet as a single computer. This computer is a massively distributed system with billions of processors, billions of displays, exabytes of storage, and it’s spread across the entire planet.” That is the equivalent of thinking of your laptop as a massive collection of chips and transistors, or of a program you developed as a massive collection of assembler put, get and goto statements.

To use a new platform we need to think of it as just that, a platform, not what it is made of. If you try to explain how computers work by explaining how electrons flow through semiconductors, nobody (well, almost nobody) will understand. That is why we need abstractions.

Abstractions often come in the form of models, like the client/server model or (talking about abstraction) the object-oriented model, or even the SQL model (which abstracts away what goes on inside the database).

Unfortunately the current cloud does not have such a model yet – at least not one we all agree on. That is why everyone is trying so hard to slap old models onto it and see whether they stick. For example, for IaaS most are trying to use models of (virtual) machines that are somehow interconnected, which makes everything overly complex and cumbersome.

What we need is a model that describes the new platform, without falling into the trap of describing the underlying components (describing a laptop by listing transistors). The model will most likely be service oriented and should be implementation agnostic (REST or SOAP, Amazon or P2P, iOS or Android, Flash or HTML5). Let’s have a look at what was written 10 years ago that we could use for this; my bet would be on some of the object-oriented models out there.



PS Feel free to follow me on Twitter @GregorPetri and read this blog at blog.gregorpetri.com

Cloud Predictions Beyond 2011 – Part 1: Consumer Services Rule

In the past weeks we launched directly from the season of cloud events into what SysCon calls the Annual Predictions Bonanza. Gartner released its predictions on December 1, leading with “critical infrastructure will be disrupted by online sabotage.” At CIO magazine Bernard Golden gave two interesting points of view, one for vendors and one for users, and even CA Technologies offered insights into the changes we expect in 2011, including how “security will shift from being perceived as a cloud inhibitor to becoming a cloud enabler.”

So, what happens after 2011?  In a few upcoming blogs I will highlight some “megatrends” that I believe are happening – or need to happen – in the decade about to start. (Now, you may argue that the decade started a year ago, but starting to count at zero is very “old school IT” and “old school IT” is definitely not what we are going to see going forward.)
BIG IT becomes Consumer IT
Traditionally “BIG IT” represented the IT operations of large banks, governments and Fortune 1000 companies. These organizations were typically the first to implement new technologies, ranging from the first mainframes to powerful UNIX clusters and later rack-based systems. Many technology companies used the 80/20 rule — that the top 20% of companies were responsible for 80% of the overall global IT spend — to guide their strategy. Today the total data processing at the average stock exchange still dwarfs the number of transactions a phenomenon like Twitter handles, but online entertainment is rapidly catching up.

This really hit home while visiting a large hosted European data center a few weeks ago. There were some corners where you could still find enterprise servers zooming away, but the really big server farms and all the reserved open spots were dedicated to consumer-related services such as online gaming, mobile internet and messaging, and on-demand television. The rise of these consumer services will cause unprecedented demands for cloud storage, cloud networking and cloud processing in 2011, but the average enterprise IT manager won’t particularly notice. In fact, many traditional IT chiefs may still feel they are “BIG IT”. If you’re interested in an analyst covering these new consumer areas then you may enjoy Om Malik’s GigaOM site.

You could say that this trend of data centers becoming more and more consumer-centric is the top-down part of IT consumerization. The bottom-up part is employees bringing their consumer technology (iPhones, iPads, etc.) and expecting to use them while doing their job. The long-term impact of this top-down trend will be that traditional BIG IT technology vendors will start to focus their R&D more on new, fast growing markets. Vendors with a running start in this new reality will be consumer electronics companies (like Apple) and technology vendors that grew up – or grew big – with the internet. As a result Enterprise IT will become a secondary market, a market where data center inventions and investments that were originally made for the consumer and entertainment market can be redeployed. Something to take into consideration when picking your strategic technologies and vendors for the next decade.

Now consumer IT won’t take over Enterprise IT completely during 2011, but the days that we made fun of hardware vendors that made more money on consumer printers and ink than on enterprise data centers are definitely behind us.

P.S. — OK just one prediction for 2011.  In one of my earlier blogs I wrote about the four P’s of Innovation – Problem, Ponder, Publish and Pilot. For Enterprise IT, 2010 was clearly the year of publications (just look at the number of blogs with cloud predictions). That would make 2011 the year of piloting. Check back for my next blog on what I expect the production period will look like.

Virtual Strategy – Virtually Right

With a private cloud strategy and dynamic data center you can quickly respond to rapid business fluctuations. But how do you get there?

This post was originally published as a Thanksgiving weekend special at virtual-strategy.com.
In the article I discussed some approaches for building a dynamic data center that not only addresses complexity and reduces cost, but also accelerates business response time, to ensure that the organization realizes the true promise of cloud computing: business agility and customer responsiveness.

Cloud computing presents an appealing model for offering and managing IT services through shared and often virtualized infrastructure. It’s great for new business start-ups who don’t want the risk of a large on-premise technology investment, or organizations who can’t easily predict what the future demand will be for their services. But for most of us with existing infrastructure and resources, the picture is very different. We want to capitalize on the benefits of the cloud ― on demand, low risk, affordable computing ― but we’ve spent years investing in rooms stacked high with hardware and software to run our daily mission critical jobs and services.

So how do organizations in this situation make the shift from straightforward server consolidation to a dynamic, self-service virtualized data center? How do they reach the peak of standardized IT service delivery and agility that is in step with the needs of the business? Many virtualization deployments stall as organizations stop to deal with challenges like added complexity, staffing requirements, SLA management, or departmental politics. This “VM stall” tends to coincide with different stages in the virtualization maturity lifecycle, such as the transition from tier 2/3 server consolidation to mission-critical tier 1 applications, and from basic provisioning automation to a private/hybrid cloud approach.

The virtualization maturity lifecycle 
The simple answer is to take it step-by-step, learning as you go, building maturity at every step. This will earn you the skills, knowledge, and experience needed to progress from an entry-level virtualization project to a mature dynamic data center and private cloud strategy.

It’s called the virtualization maturity lifecycle, and it builds in four steps. Just like pilots start their training on small planes (going full cycle from take-off to landing) before they move onto large commercial jets, it is advisable for organizations to implement these virtualization maturity steps iteratively. For example, start a full maturity cycle on test and development servers before moving to mission critical servers and applications.
Start easy, by consolidating servers, to increase utilization and reduce your current carbon footprint. To ensure deep insight and continuity in support of the migration from physical to virtual, you might want to leverage image backup and physical-to-virtual restore tools that allow you to move your physical IBM, Dell and HP images directly to ready-to-run VM images for VMware, Sun, Citrix and Microsoft.

The next step involves optimizing the infrastructure. Apart from maintaining consistency, efficiency, and compliance across the virtual resources (which is fast proving to be even more complex in virtual than in physical environments), we analyze, monitor, (re-)distribute and tune our applications and services.
While optimizing, we also discover and document the rules we will automate in the next phase: rules about which applications best fit together, which areas are suitable for self-service and which types of services are most important. As you can imagine, the answers to this last question will be very different for a nuclear plant (safety first) compared to an online video rental service (customers first), which is why this is such an important step. If you skip this stage and go straight into automation, you’ll likely end up in the same situation that you’re in today, just automated.

A successful cloud strategy is all about agility and flexibility, and the next step in the virtualization maturity lifecycle helps take care of automation and the orchestration of your (now) virtual services. You can empower users to help themselves ― industrialize processes ― without calling IT for every service request. Automation has many advantages here. It is the catalyst to standardize your virtual infrastructure, integrate and orchestrate processes across IT silos, and accelerate the provisioning of virtual cloud services. Once the industrialized provisioning process is live, automation technologies can then also be used to monitor demand volumes, utilization levels and application response times and to assist root-cause analytics to help isolate and remediate virtual environment issues.

The final stage is the centerpiece of a cloud strategy, a position which allows you to manage the definition, demand, and deployment of IT services: the dynamic data center. Your now agile infrastructure, delivered from a secure, highly available data center, enables you to quickly respond to rapid business fluctuations. To reach a dynamic data center, you need to automate the entire process of service delivery from request to fulfilment. This includes centralized service requests, automating the approval process so that department heads can quickly approve or reject requests, a standard and repeatable provisioning process, and standard configurations.
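
Purely as an illustration (the step names and functions are my own simplification, not a specific product’s workflow), the request-to-fulfilment chain described above could be automated roughly like this:

```python
def provision_from_template(template: str) -> str:
    # Stand-in for the automation layer (image catalog, network, policies).
    return f"{template}-instance-001"


def request_service(user: str, template: str, dept_head_approves) -> str:
    """Hypothetical request-to-fulfilment pipeline: centralized request,
    approval by the department head, then standardized provisioning."""
    request = {"user": user, "template": template}

    # 1. Approval step: department heads quickly approve or reject requests.
    if not dept_head_approves(request):
        return "rejected"

    # 2. Standard, repeatable provisioning from an approved configuration.
    instance_id = provision_from_template(template)

    # 3. Hand the result back to the requester.
    return f"provisioned {template} as {instance_id} for {user}"


if __name__ == "__main__":
    print(request_service("alice", "dev-test-vm", dept_head_approves=lambda r: True))
```

In practice the approval and provisioning steps would call real catalog, workflow and configuration systems; the sketch only shows how the chain hangs together.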

This goes much further than the traditional dream of a “lights out” data center, which basically was a static conveyor belt-like factory where all labor was automated away. The dynamic data center is like a modern car factory, where robots perform almost all tasks, but in ever changing sequences and configurations, guided by supply-chain-lead orchestration.

The new normal  
As we all know, technology changes fast. This advancement in technology is creating a “new normal” where relationships with customers are increasingly in a digital form and technology is no longer an enabler or accelerator of the business―it has become the business.

This is a theme picked up by Peter Hinssen, one of Europe’s thought leaders on the impact of technology on our society. He evangelizes this new normal, arguing that in a digital world there will be new rules that define what is acceptable for IT, including zero tolerance for digital failure, an era of “good enough” functionality (60% functionality in six weeks rather than 90% in six months), and the need to move your architectures―including your new cloud architecture―from “built to last” to “designed to change”.
The lifecycle approach described earlier may be just what you need to help align your IT organization to what Hinssen calls the new normal. First you determine where opportunities exist for consolidation and rationalization across your physical and virtual environments ― assessing what you have in your data center environment and establishing a baseline for making decisions that take you to the next stage. Next, to achieve agility, you have to automate the provisioning and de-provisioning of virtualized resources, including essential elements, such as identities, and other management policies such as access rights.

The next step in delivering an on-time, risk-free (zero failure) cloud computing strategy is service assurance. You need to manage IT service quality and delivery based on business impact and priority — top-to-bottom and end-to-end. That includes, for example, delivering a superior online end-user experience with low-overhead application performance management, and end-to-end visibility into traffic flows and device performance. The new normal also needs to be secure. IT security management technologies must be applied against current regulations and end-user needs, which enable the virtual layer to be more secure.
All these factors combined ultimately lead to agile IT service delivery. With agility, you can build and optimize scalable, reliable resources and entire applications quickly. By embarking on the virtualization maturity roadmap, you can move closer to a dynamic data center and successful cloud strategy.

Any shortcuts?
This evolutionary approach may sound very procedural (and safe). You may also be thinking: is this the only way? What if I need it now? Is there no revolutionary approach to help me get to a private cloud much more quickly? Just as developing countries skipped the wired POTS phone system and moved directly to a 100% wireless infrastructure, a revolutionary approach does exist. The secret lies in the fact that – in addition to the application itself – the infrastructure required to deploy an application can be virtualized: load balancers, firewalls, NAS gateways, monitoring tools, etc. This entire entity – the application and the infrastructure it needs to be successfully deployed – can then be managed as a single object. Want to deploy a copy of the application? Simply load the object and all of the associated virtual appliances are automatically loaded, networked, secured and made ready. This is called an application-centric cloud.

With traditional virtualization, the servers are the parts that are virtualized, but afterward these virtual servers, networks, routers, load balancers and more still need to be managed and configured to work with the other parts of the data center, a task as complex and daunting as it was before. This is the infrastructure-centric cloud. With a full application-centric cloud, the whole business service (with all its components) is virtualized, becoming a virtual service (instead of a bunch of virtual servers), which significantly reduces the complexity of managing these services.

As a result, application-centric clouds can now model, configure, deploy and manage complex, composite applications as if they were a single object. This enables operators to use a visual model of an application and the required infrastructure, and to store that model in the integrated repository.  Users or customers can then pull that model out of the repository, reuse it and deploy it to any data center around the world with the click of a button.  Interestingly, users deploy these services to a private cloud, or to an MSP, depending on who happens to offer the best conditions at that moment.  Sound too futuristic?  Far from it.  Several innovative service providers, like DNS Europe, Radix Technologies, and ScaleUp, are already doing exactly this on a daily basis.
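
By way of illustration only – a generic sketch, not Applogic’s or any other vendor’s actual descriptor format – an application-centric model treats the application plus its required infrastructure as one deployable object:

```python
# Hypothetical "single object" describing an application together with the
# virtual appliances it needs; deploying the object deploys everything.
composite_app = {
    "name": "webshop",
    "components": [
        {"type": "load_balancer", "instances": 1},
        {"type": "firewall",      "instances": 1},
        {"type": "app_server",    "instances": 3},
        {"type": "database",      "instances": 1},
    ],
}


def deploy(app: dict, target: str) -> None:
    """Deploy the whole composite to any data center 'with one click'."""
    print(f"deploying {app['name']} to {target}")
    for component in app["components"]:
        for i in range(component["instances"]):
            print(f"  starting {component['type']} #{i + 1} (networked and secured)")


deploy(composite_app, target="private-cloud-eu")
deploy(composite_app, target="msp-partner-us")   # same object, different provider
```

Because the composite is one object, redeploying it to another data center or provider is a matter of pointing the same description at a different target.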

For many enterprises, governments and service provider organizations, the mission for IT today is no longer just about keeping the infrastructure running. It’s about the critical need to quickly create new services and revenue streams and improve the competitive position of their organization.
Some parts of your organization may not have time to evolve into a private cloud. For them, taking the revolutionary (or green field) approach may be best, while for other existing revenue streams, an evolutionary approach, ensuring investment protection, may be best.  In the end, customers will be able to choose the approach that best fits the task at hand, finding the right mix of both evolutionary and revolutionary to meet their individual needs.

Reshaping IT Management – by cutting it into two halves?

The McKinsey Quarterly just published an interesting and very readable piece on “reshaping IT for turbulent times”. In the article they analyze what seems to be a dichotomy for today’s IT management: how to balance running an efficient IT factory with being a responsive, customer-focused provider.
In the article (which is freely accessible after registering) Roberts, Sarrazin and Sikes describe two models, an efficient factory approach and a more enabling, innovation-oriented approach. However, their suggested approach of applying two models, splitting the organization effectively into two separate parts — a mainstream factory and a boutique — seems less than optimal. This split very much resembles the traditional split of IT into development and operations, something that is also turning out to be less than optimal and too slow for today’s markets. Hence the emergence of a new IT discipline called DevOps.
It is understandable they use two models as traditionally efficiency and innovation require different approaches. There is an old analogy that makes this very clear. Think of the organization as a sponge. If you want more efficiency, you centralize (squeeze the sponge and any excess water pours out); however, if you want innovation and new ideas, you need to let go of the sponge, creating room to suck up water – new ideas. Squeezing and letting go at the same moment seems impossible.

Addressing both efficient production and customer responsiveness at the same time seemed an unsolvable issue in traditional manufacturing as well, until management innovations – such as just-in-time (JIT) supply chain optimization – gave management the tools needed to address it. The main difference between the new supply chain and the traditional manufacturing-oriented approach was that the goal shifted from efficient production to effective end-customer delivery. This leads to vastly different decisions when put into an optimization model. The IT equivalent of this JIT innovation is cloud computing.

Splitting the IT organization into a back-office grinder shop and a front-office boutique will turn out to be a temporary solution at best. Not just because a dual-model approach – almost by definition – prevents any optimization across the two, but also because experience shows that in cases like these, the low-cost grinding part will soon move to a low-cost provider (for example, manufacturing moving to China), after which pretty soon the innovation part is likely to follow (again, look at what is starting to happen in China). Traditionally the best innovation labs are near factories, except maybe for fundamental research (which most commercial firms have lost interest in or can no longer afford).
It took the manufacturing industry several decades (and fierce competitive pressure from pioneers such as Japan) to make the transition to being both efficient and responsive at the same time. IT can learn from these experiences. The competitive pressure required to make such a transition has already arrived. Cloud computing enables users to bypass IT completely and source solutions directly from outside service providers, a practice sometimes referred to as “Rogue IT.” In my post “On empowered users, rogue and shadow IT, stealth clouds and the future corporate IT” I wrote on the valid need for IT to be closer to the business again, which in my view can be achieved without cutting it in two.

By taking an integrated approach – based on aforementioned IT supply chain thinking, with a large emphasis on sourcing – IT organizations will be able to both have their cake and eat it too.

The private cloud debate is building up steam, but is it worth having?

Slowly but steadily the debate in the blogosphere about private clouds is increasing. Now it is always good to see some debate, but is this a debate worth having? In the long term, won’t the cloud be about other things than who owns a machine?

Under provocative titles like “Private cloud discredited, part 1”  and “Do We Really Need Private Clouds?”  the private cloud debate is building up steam. The first blog is actually called “part 1” because the author is sure there will be a part two, given the raging emotions and all the opinions being aired.
The second one is part of a very readable guest series by IT analyst avant la lettre Robin Bloor at Cloud Commons. Cloud Commons, which CA Technologies helped initiate, is a consumer-style rating community site, like consumersearch.com and epinions.com but for cloud products and services.
Now it is always good to see debate, and I vividly remember when we all got excited some years ago about Open Systems (with Open roughly being defined as anything running on Unix versus anything that was not running on Unix, including mainframes, AS/400s, HP3000s, etc.). To be honest, that debate was maybe as productive as a debate about private versus public clouds may turn out to be. In my view the most important thing that the cloud can bring is that it (finally) decouples the application from the underlying infrastructure. As a result it matters a lot less where the application runs (private or public). In a comment on Robin’s “Do We Really Need Private Clouds?” blog, Jonathan Davis, CTO of DNS Europe, gives a good example of that principle. Using a cloud platform (Applogic* in this case), his company makes it possible for applications to be deployed transparently and instantly over grids of compute capacity (for a discussion of grids, see Robin’s earlier post in the mentioned series), regardless of whether these clouds are private (hosted or internal), public or a combination (hybrid).

Where to start
The remaining private versus public question then is: where to start? Do you start with less sensitive applications on a public cloud and then expand what you learned to core apps, perhaps on a private cloud? Or, vice versa, do you start with a more sensitive app on a private cloud and expand to public once you feel that is proven and secure enough for that application?

Surprisingly (at least to me) there was some very clear guidance given in the cloud scenario session at last month’s annual Gartner Symposium/ITxpo**. A kind of “comply or explain” approach was suggested there: first explore whether a job can be done with a public cloud (“comply”), and only if there are valid and severe reasons not to go public (“explain”) consider private. I’m paraphrasing, so check with your analyst for the exact wording and/or check out the free video recordings of this year’s symposium at gartnersymposiumondemand.com. Unfortunately the break-out sessions on cloud are not available for free there, but it does offer all plenary keynotes, all industry CEO interviews (Benioff, Chambers, Ballmer, Dell) and … a recording of my session ;-).

During the symposium, Gartner also indicated that security concerns should be seen more as valid but temporary challenges to be addressed and overcome than as a reason (or excuse) to discard public clouds. At last week’s Datacenter Summit one of Gartner’s lead analysts on cloud, Thomas Bittman, gave a slightly more nuanced view. Understandable, as this was the week of Thanksgiving and the folks in the room were predominantly the guys running today’s private datacenters (Thanksgiving –> turkeys 🙂 ). He highlighted some scenarios where private clouds make perfect sense (e.g. stable, predictable loads). In his worthwhile and balanced session he also noted that the current emphasis on Infrastructure as a Service (where the private versus public debate mainly plays out today) over Platform and Software as a Service comes from the fact that IaaS can run today’s existing applications and does not have to wait for a next generation of apps, as developing such new applications simply takes time.

A new generation of cloud applications
In my view this new generation of applications will be very different from the applications we run today, which makes it even more important that these new applications are no longer tied to underlying infrastructures. New generations come with new applications: mainframes introduced OLTP; minis and distributed systems (both open and proprietary) introduced departmental systems and later packaged applications like MRP and ERP; and internet web systems introduced the age of e-commerce, where we started buying books and gadgets online and doing our banking online (remember when everything was called e-something). So “re-hosting” our existing apps to (either private or public) clouds is only a very small part of the long-term cloud story. Not that next year, as I will address in my upcoming #IT2011 predictions, won’t be a very lucrative year for many vendors – including the one I work for – helping organizations move their existing applications to (hosted or internal) private clouds. Not to mention that 2011 is likely to be the year of the IaaS killer app: test/dev clouds, which make test/dev machines (a whopping 70% of the average IT organization’s machine park) available in a much more flexible, economical and ecological way. My colleague Marv Waschke wrote about this last week as a perfect way to gain experience with private and public cloud scenarios.

But the big long-term story in my view is that cloud will be ideal for a generation of new applications. Applications that allow organizations to collaborate with other organizations – so not the now much talked about in-company Twitter and Facebook clones that enable people to waste as much time at the office as they do at home. Through these new collaboration applications, organizations can take business processes that were traditionally done in house and source them “as a service”. These processes can vary from bill collecting, invoicing, physical distribution, repair handling and HR to full manufacturing or product design. Having companies specialize and offer these services to many organizations will enable them to achieve massive economies of scale. Note that many of these services will be largely or completely information/software based. An example: imagine the efficiencies of one company handling repairs for several large mobile phone manufacturers versus each company having to arrange repairs themselves. Most phone manufacturers sell through the same resellers, use the same repair centers, source from the same Chinese factories, etc. Hooking these up once to a central platform used by multiple players can give an enormous platform effect. An early European player in this area is eBuilder.com (which BTW also runs a cloud academy, but one specifically about these business aspects, which they call “Cloud Processes for the Value Network”; at their site their head of product marketing also shares a handy list of 10 essential elements to create and run such a next generation cloud service). These types of new generation cloud applications will render efficiencies far beyond any pure IT savings or efficiencies imaginable (see also my earlier entry re. Gitex).

Now you may say that companies started this move towards specialization and outsourcing of processes some years ago already. And you’re right, but so far they often did so without much support from their IT applications. Thanks to the cloud, IT can now become the big promoter, enabler and catalyst for this. In fact, I came across the scenario of “repair handling as a service” over a decade ago, while introducing XML for a previous employer. But back then the idea of bringing crucial functions outside the firewall and outside the realm of internal IT was just a bit too revolutionary. The fast growing acceptance of cloud computing as a model (often even more on the business side than on the IT side) is rapidly changing this. And to be honest, public cloud may clearly have an advantage over private cloud for such public services, as it is already located outside individual companies’ proverbial firewalls.

PS: Many links in this one; if you read only one, make it Robin Bloor’s series at Cloud Commons. His havemacwillblog was one of the first IT blogs, and this new series shows that experience does count.

*Disclosure: Applogic is a cloud platform by 3Tera, now part of CA Technologies, my employer. DNS Europe spoke at the invitation of CA about their experiences with Applogic at VM World in Copenhagen.
**CA Technologies is a premier sponsor of Gartner’s Symposium/ITxpo and Datacenter Summit.

It is the season of cloud events

This week, Gartner is holding their European Symposium/ITxpo in Cannes. For some reason the industry seems to cram all its events into the two short months that fall between the summer vacation period and Xmas. To name just a few, I’ve presented at:

  • 360IT in London
  • VM World Europe
  • Gitex in Dubai
  • Cloud Expo Silicon Valley

Next week the part virtual reality, part physical reality UP2010 conference will also take place.

Gartner wisely chose not to use the cloud word in this year’s symposium title, “Transitions: New Realities, Rules and Opportunities”, but it is clear to most attendees that the majority of these transitions will be cloud related. Some may even argue (or fear) that the cloud will replace a large chunk of traditional IT. Having done the opening session of the cloud conference at Gitex together with Gartner’s Jim Murphy, where he gave a short preview of the Gartner cloud keynote, I can tell you that ignoring the cloud (because it is insecure, immature or risky in general) is not recommended. Enterprises will need to formulate a cloud strategy – with public cloud taking a prominent role – just as much as they needed to develop enterprise strategies for PCs and end-user computing in the past.

At the event I will be talking, in conjunction with guest speakers from CA partners such as Accenture, salesforce.com and SAP, about “the cloud is the answer, it is also the question“.  The reason we invited these partners to present together with CA Technologies is that we feel that there are many areas where the answer will not come from a single company. Today’s IT is complex and – at least for the foreseeable future – developments like cloud and virtualization will add to this complexity. That is why we asked Keith Grayson of SAP how to address continuous and automated  governance, risk and compliance (GRC)  in this increasingly complex world. Steve Greene, leader of salesforce.com’s agile development, will share salesforce.com’s experience in moving from waterfall to agile development projects and Dr. Hauke Heier of Accenture will discuss the Investment Portfolio Management (IPM) Framework, a diagnostic model to help CIOs and top business executives align IT investments (cloud or non-cloud) with business needs (this session was overbooked 200% last year, so we moved it to a bigger room, but please register early).

From the above, it will be clear that both the questions and the answers around the cloud will, to a large extent, concern people and processes and not just technology. This aligns nicely with the little informal poll I held among the attendees of my presentation sessions at the earlier mentioned cloud events. This year most came to do some ‘tire kicking’: seeing who else was there (apart from the numerous vendors), learning which end users have actually done something in the cloud, and deciding what pilots to include in next year’s plan. Most attendees were more interested in approaches and experiences than in the shiny, cool and technologically advanced tools that many of the vendors were showing. Maybe it is an idea for the organizers of these events to include specific tracks around these softer aspects in next year’s editions (which no doubt will take place in the same month; old habits are hard to lose). In my own sessions I have already started to focus on these softer aspects (as I will do in Cannes), concentrating on people and management rather than on technology. People, Process and Technology … somehow it has a familiar ring to it.

Follow CA Technologies at ITXpo on Twitter at #caitexpo2010

On empowered users, rogue and shadow IT, stealth clouds and the future corporate IT

This blog was originally published at ITSMPortal.com.
Cloud computing can be seen as an important enabler for more user and business empowerment. Traditionally we considered any IT outside the IT department’s purview “Rogue IT” or “Shadow IT”. In this blog we examine how further user empowerment may impact the future of corporate IT.

Rogue IT (sometimes called shadow IT or consumerism) is the phenomenon in which employees outside the IT department deploy IT technology to achieve or automate certain tasks. Rogue IT is not a new phenomenon. But cloud computing is giving it a new runway and much better camouflage, since with cloud computing you do not need to secretly misuse a server that IT made available for another purpose or explain to your colleagues why you have all these computers under your desk.

The idea of IT outside the IT department is enjoying renewed interest. Ted Schadler of Forrester wrote an interesting article in the Harvard Business Review called “IT in the Age of the Empowered Employee”. A recent Forrester survey of 4000 US-based knowledge workers found that no less than 37% are using do-it-yourself technologies. In his new book, Empowered, he calls these covert innovators HEROes — highly empowered and resourceful operatives. CBC News ran a feature and CIO magazine picked up on the topic recently using the term Stealth Cloud.

Personal Experience
Personally I had my first encounter with rogue IT many years ago, during my first assignment at one of Holland’s largest multinationals. The company had a department called “Information Systems and Automation”, ISA for short, that used mainly mainframes to run corporate reporting and accounting. But there was also ISA-2, a divisional IT department, which ran operational and planning systems. Their platforms of choice were PDPs and Digital VAXes. At the production location where I worked (a massive manufacturing plant so far in the south of Holland that it was practically considered a foreign factory) we had ISA-3, a local IT team that supported office automation and printing using the then just emerging PC platform. But all these were not considered rogue; this structure was a logical consequence of the fact that bandwidth at the time was so expensive that this was the best way to deliver IT services.

How easily we forget in the age of the cloud (and if your org chart still looks like this it might be time to reconsider). I met the rogue IT function, called “ISA-4”, on the second day I was working there.
The plant at which I was stationed manufactured medical equipment for which it was necessary to calculate the exact trajectory of electrons. For this, the head engineer had obtained a portable (at that time state-of-the-art) luggable, suitcase-sized UNIX system, which he took home at night so these calculations could be run while he enjoyed a good night’s rest. So far, so good. Were it not that on that system he also built a small inventory and quality assurance system that kept track of all the work-in-process in the factory. This data was manually typed into the corporate and divisional systems at the end of each month (these were “primary records”, as my EDP auditing professor would point out). Now each of these pieces of medical equipment cost more than my car (my current car, not the piece of … I drove back then) and the final QA outcome (approved or scrap) was extremely important for the financial and commercial success of the manufacturing operation. Yet every evening this crucial data left the premises, only to arrive back the next morning, provided our lead engineer did not run into a streetcar on his way to work.

Moving on
Now this was many years ago. Since then, all systems have been replaced by bright and shiny ERP applications (first at the divisional level and later at the corporate level), the factory has been consolidated, off-shored, outsourced and then insourced again, and the type of equipment has long been replaced by a whole new generation of (you guessed it) digital technology. But I am sure that in that new factory there are still users outside of the IT department building rogue solutions and applications, because that is what smart employees tend to do.

Not saying I am such a smart employee, but in most of the jobs I have held since (in marketing, sales, business development, even some in IT) I managed to create my own rogue pet systems. It started with using a Windows help text compiler and the office CD writer to save time on faxing product information to 12 European offices each Friday night, then developing a Lotus Notes system for gathering enhancement requests, followed by a rogue intranet site (hey, somebody gave me a fileserver password and IIS was already on there) and then a kind of precursor to Salesforce.com (a CRM intranet site for logging visit reports and forecast data). I somehow was lucky and my rogue systems never caught a virus, but a colleague of mine used an unpatched test version of SQL Server for his pet project, which caught the famous SQL Slammer worm and subsequently pulled down all SQL and network traffic in the entire enterprise (it was not a comfortable conversation with his boss and the head of IT the next morning).

Apart from these security horror stories, the problem with these industrious, innovative end-users is that they (including myself) sooner or later get bogged down by the same thing that is slowing down IT departments: providing support and maintenance for what they created earlier. Thus, 70% of the overall IT budget is spent on “keeping the lights on”. As soon as a user has developed something really cool, for example a way to use his iPhone to support his customers, five of his colleagues will want the same. Two of these colleagues may not have an iPhone but instead use a Blackberry (cutting our innovator’s development productivity in half, as he now has to build and support the functionality on two platforms). So he spends less time on his real job and basically becomes a type of IT person. This is why after a while – either when he gets fed up with the support job or when he moves to a new job (these industrious types tend to get either promoted or fired quickly, depending on the type of organization) – IT is called back in to … clean up the mess. Which results in the IT department having even less time or budget available to provide the type of innovations the business was looking for in the first place.

The cloud impact
The beauty (or danger) of cloud computing is that business users no longer have to create such innovations by “hobbying around” on their phones or PCs. They can now go out and contract with outside vendors to create such solutions in a more professional – but still rogue – way. Traditionally, business users had to go through the IT department for any such investment, as anything that had to run on the corporate network or servers had to be approved by corporate IT. With cloud computing, however, the new applications no longer run on the corporate network or servers, enabling these business departments to go outside completely.

Neither of the two described scenarios is desirable. We don’t want creative business people bogged down by maintenance tasks, nor do we want end-user departments to contract with any IT vendor they like, bypassing any attempts we made at having any type of enterprise architecture in the process. But at the same time we do want this type of user-led innovation to continue. Somebody (some department), however, will need to guide and orchestrate these innovations. IT would be the logical candidate, but only if it can free up its time and resources away from “keeping the lights on” towards these more innovative tasks. And again cloud computing, with its potential to deliver formerly complex IT tasks “as a service” to the IT department, may be just the recipe to free up IT’s time from the mundane, towards these more innovative and differentiating endeavors.
End user computing

An interesting phenomenon in this context is end-user computing, and especially SharePoint. Here the IT department in many organizations seems to provide end users with a gun to shoot themselves in the foot with. Many users start enthusiastically, only to find out after a year that the maintenance is overwhelming, prompting them to abandon the project or start over. The fact that many of these sites are aimed at specific departments makes this worse, as organizational structures tend to change yearly or even more frequently, rendering the sites’ objectives and design no longer valid. Now many IT folks will say: but then the design was simply wrong; if you mirror the org chart, your system will always become obsolete, so you should mirror the process or, even better, the data model, as those tend to be much more stable. It is kind of a right brain – left brain thing. And that is exactly why it makes sense to involve the structured, systemic thinking of IT folks in helping users come up with solutions that match their needs. This however is only likely to happen if these IT folks are working inside these departments.

A well-documented case around this is the reshaping of IT done at Procter & Gamble by CIO Filippo Passerini. Under his leadership P&G outsourced a large part of their hard-core infrastructure tasks several years ago. The majority of the retained IT people were reallocated (also physically) to work in business departments like marketing, product development, sales, etc. Together with their business colleagues they started creating new solutions and approaches to both operational and strategic issues, like creating a closely monitored social media / advertising campaign, as recently described in Fortune magazine (hardcopy only). In this way IT’s core strengths, like structured thinking and problem-solving capabilities, start to play a crucial role in the overall success of the enterprise again. This however only happens after a large part of the repetitive infrastructure-related services that traditionally keep IT busy are supplied as a utility, either by using an outsourcing construct or by leveraging the cloud.

Time for Hybrids
This approach of IT and business working closely together as one team, instead of in silos, seems only logical, but the reality in a lot of industries is that IT actually became more siloed in the last decade. A case in point: personally I was one of the first graduates of a new curriculum called IT & Business, which consisted of – you guessed it – 50% IT and 50% business and economics subjects. The goal of the study was to develop hybrids, people able to straddle and connect business and IT (either working as an IT person in a business department or as a business person in an IT department). After my studies I started in IT in the pharmaceutical industry and quickly discovered that there were only two departments recognized by management as strategic to the success of that industry: sales and research (funnily enough, in that order). IT came in a long range of supporting departments that included finance, manufacturing, logistics, HR, catering and yes, also IT. Being young and having unmatched faith in the power of IT technology, I moved over to an industry where IT was pretty core (the IT industry itself) and rapidly found out that my study – which straddled IT and business – also came in quite handy when trying to sell or market IT stuff to business people.

During those years on the vendor side I observed a distinct widening of the chasm between IT and business. Early in my career I found the IT manager was often the best person to talk to for a fast understanding of what a company did (as he worked with many departments, often having developed the applications they used himself, in some kind of 4GL language, in prior years). With the proliferation of standard ERP packages, 3-tier client-server, Java and service-oriented architectures, IT became more complex: more about technology and less about what the company did. Some may even argue that the worst culprit in this scenario was Java. Traditional 4GLs and even COBOL aimed to be like plain English, so moving from using those languages to speaking to users in plain English about their business was not a big step for most IT folks – something that cannot be said for Java and its typical user.

Objects may appear further away
Now, some of my observations may have been distorted because during the time I observed this chasm growing larger, I moved on the vendor side from marketing 4GL-based MRP to object-oriented ERP, to enterprise integration using XML and SOA, and more recently to IT management technology. However, when speaking to former colleagues in the application space, they agree the profession has become more complex and more about technology. Sure, they work with users every day, but mainly to make them stick to agreed project plans and to make sure they adhere to the predefined workings of the selected standard packages – often not to invent new, creative ways to do something completely different (although many of them would love to).

Another testament to this mind-boggling increase in complexity is the fact that the IT management industry (the solutions needed to manage IT itself) surpassed the application industry (the solutions used to manage business processes) in revenue about a decade ago. Companies now spend more money on keeping IT running than on doing business things with IT – a weird and worrying statistic. The cloud, however, has the potential to change this: both because of the abstraction from the technology that cloud, virtualization and its sibling technologies can deliver, and because enterprises are starting to realize they need to take a distinctly different, more business-outcome-oriented approach to managing cloud IT than what became standard for managing traditional IT. As described in other articles, I see a big role for a supply chain approach to managing IT, as this will free up in-company IT talent to truly engage in business matters again.

Time to choose sides?
So, if you are currently on the business side you may – with the services companies deliver to their customers becoming more and more digital (in telecoms, media, finance, government, education, etc.) – consider letting more IT people into your ranks. Never mind that we talk funny and have a bad hair day every day, to name a few stereotypes. With your business becoming more about shipping bits instead of atoms to customers, now is a good time to start adding more IT skills to your team (if only to keep your rogue innovators productive).

If you are currently on the IT side you may want to read the thought-provoking in-depth study on The Future of Corporate IT that I came across in this blog; it comes from a think tank consulting firm called the Corporate Executive Board. In their five-year outlook for corporate IT they come to some astonishing conclusions: “The IT function of 2015 will bear little resemblance to its current state. Many activities will devolve to business units, be consolidated with other central functions such as HR and Finance, or be externally sourced. Fewer than 25% of employees currently in IT will remain, while CIOs face the choice of expanding to lead a business shared service group, or seeing their position shrink to managing technology delivery.” Funny how that all started more than 25 years ago.

GITEX 2010: Clouds in a country where it never rains

Last week Jim Murphy of Gartner and I opened the Cloud ConfEx, an executive conference that ran as part of the GITEX trade show in Dubai. Personally, I had not been in the region for the better part of 10 years, so I was very interested to see how it had developed since. Last time I was in Dubai the Burj al Arab was there, and so were some of the other landmark buildings, but the artificial islands called The Palm and the tallest building in the world, the Burj Khalifa, did not even exist as an idea back then. I went to Dubai with questions about how high the interest in cloud computing would be and what its potential might be in an emerging region and growing economy like the United Arab Emirates.
The initial feedback from the market research we are completing on cloud computing in emerging markets (to be published in a few weeks) shows some reluctance and apprehension towards private cloud, but significantly higher interest in public clouds. Jim Murphy gave Dubai a 24-hour head start by presenting the cloud approach that Gartner launched to the rest of the world at its ITxpo event in Florida a day later. Prior to Jim's presentation, I talked to the audience about how cloud can be especially beneficial for doing new things, which means that fast-growing markets, where a lot of new consumers are entering the market and new services are introduced daily, can benefit a lot more than markets where it is more about consolidation and making current services more efficient.

A valid question in this context is what this new generation of cloud services will be about. Mainframes were about (batch and OLTP) transaction processing, distributed systems were about departmental and integrated planning systems, web systems were about e-commerce (remember when everything was called e-something?), and cloud will be about …? My feeling is that cloud will be about collaboration, and by collaboration I don't mean offering Twitter or Facebook clones to employees so they can waste just as much time at work as they do at home :-). I mean collaboration between multiple parties (organizations, enterprises) in global supply chains. Modern companies will not be vertically integrated like the Ford Motor Company was at the beginning of the last century. They will not do ALL their design, production, distribution, planning, marketing, sales and accounting themselves. They will consume many or even most of these as services. The cloud – being internet-based by design – is ideally suited for that. In my session I went on to discuss the impact of this on traditional IT aspects like security, automation and management.

But back to the region. The next day I was invited to present cloud computing at the Higher Colleges of Technology in Abu Dhabi. Abu Dhabi is the largest of the Emirates, and the Higher Colleges of Technology are an important source of management talent for the region and work closely with many well-known and esteemed universities and research institutions around the world. I won't mention the many world leaders and Nobel Prize winners that taught or lectured there, as I don't want to imply I would somehow fit on that list, but it is clear the region takes technology education very seriously. And not just for men but also for women: our session actually took place at the Women's College, with a curriculum stimulating students to start their own companies and, if needed, to be able to find work they can do from home. Abu Dhabi is currently planning a new, state-of-the-art and very environmentally friendly campus (which is no small feat in a country where the average daytime temperature is above 40 degrees centigrade and where the need to save energy does not come from a lack of local natural resources).

What is interesting is the region's natural tendency to institute an ecosystem of service providers. The Higher Colleges of Technology provide central facilities to about 20 local colleges, in addition to an international executive training program. We see the same in other types of infrastructure: there is an organization providing IT services to many of the government agencies and companies, such as the local oil and energy companies. In an economy where the model is to invest the proceeds of current economic activities into growth and future economic activities, the planning of central services and aggregate providers comes naturally. Not that it feels anything like the old archetypes of planned economies with their five-year plans that we know from other regions. In fact, one could argue that using a provider-consumer model seems almost designed to take advantage of the cloud model.

A little example of what I mean by central services: at some point I managed to lose my suitcase in the trunk of a local taxi. While my luggage was touring the beautiful city of Dubai by itself, I started a quest to retrieve it. It turns out the various taxi companies in Dubai (there are about five, recognizable by their different roof colors) all use the same dispatching service, have a joint lost-and-found department (conveniently located at the airport), and the location of any taxi can be tracked throughout the whole emirate using a combination of GPS and satellite systems. The only glitch was the fact that a call center was involved (which was as useful and helpful here as call centers anywhere else in the world), but having found a way to circumvent that and talk directly to the central services (by using the personal phone of another taxi driver), I was reunited with my suitcase before leaving later that night.

As my suitcase has now seen more of Dubai than I have, I guess I need to go back soon. Which is not unimaginable, given the potential the region seems to have for cloud … and despite the fact that it only rains about once a year.

Counting down to Cloud Expo West

Just back from GITEX (more about this later) and there is only one week to go until Cloud Expo West, which starts Monday, November 1st. I will be speaking in track 4, “Real-World Cloud Computing & Virtualization,” on Wednesday at 6:25 PM (the cloud is no 9-to-5 affair) about How Cloud and Virtualization Are Changing the Way Business and IT Will Interact.

After speaking about this in Prague in June, I was invited to give my perspective at the event in Santa Clara as well. Only five months have passed, but in cloud terms that is a lifetime. Come by and check out what has changed. If you cannot wait for that, have a quick peek at the conversation below that I had with Brian McKenna of Reed Publishing at a recent event in London (which, by the way, is tipped as the location for Cloud Expo Europe 2011). It took place after “The BIG debate: Outsourcing versus Cloud,” a keynote panel with representatives from HP, IBM, Symantec and CA. For those keeping score: the cloud won!

PS: Unlike in Europe, I won't be the only speaker from CA Technologies at Cloud Expo, far from it. As you may have noticed (and unless you were on a cloud or under a rock I cannot imagine how you could have missed it), CA did no fewer than six cloud acquisitions recently (see CRN's graphical review of these). Most of these acquisitions will be represented at Cloud Expo in some shape or form. That's the thing with acquiring technology innovators: they are determined and industrious, so they do not change their plans to tell the world about their great ideas just because somebody acquired them.

We did, however, manage to convince most of them to share space on the expo floor. The fact that this also made them a partner in the party organized by 3tera did help (that's another thing with innovative startups: they sure know how to throw a party, just ask the crowd that came to the 3tera party at VMworld Europe two weeks ago). I can't disclose the theme of the party yet – as large companies like ours have strict rules on sharing insider knowledge – but I can tell you it is Tuesday evening and you need to come by Stand 100 for a ticket. Hope to see you there. BTW, the sessions by my colleagues are:

Is There a Right Way to Build a Cloud?
Prabhakar Gopalan, Mon. Nov. 1, 4:40-5:25pm

How the Cloud is Changing the Role of IT and What to do About IT
Jay Fry, Tue. Nov. 2, 2:25-3:10pm

Advanced Cloud Architectures: Security by Design with Zero-Trust Network Architecture
Peter Nickolov, Tue. Nov. 2, 5:35-6:20pm

How Cloud and Virtualization Are Changing the Way Business and IT Will Interact
Gregor Petri, Wed. Nov. 3, 6:25-7:10pm

Making Cloud an Equal Partner in the Enterprise (bootcamp session)
Jay Fry, Thu. Nov. 4, 9:55-10:40am

Changing the Game: How to Get a Cloud Service Up and Running in Less Than 30 Days
Adam Famularo, Thu. Nov. 4, 1:35-2:05pm