Designed for failure, built on a next generation application platform, deployed on commodity infrastructure – all phrases you hear a lot when people talk about deploying or migrating applications to cloud-based computing and Infrastructure-as-a-Service providers. But what about the applications that aren’t built that way?
I’m a firm believer that the set of applications we use is going to be very different in the future. Compare the applications you use today with the ones you might have used five or even two years ago and you will notice that there has been a dramatic change.
In my own case, in the past it seems like most of my time was spent using various components of Microsoft Office. Today I’m spending a lot of time using web applications in the browser, like right now whilst writing this using Blogger, or when I’m using applications like Salesforce.com and Twitter. I also use a lot of single-function applications that are interconnected with API integrations – small tools like Tripit and Concur, which are usually available on a variety of different platforms. Other apps on mobile devices have replaced Microsoft Outlook as my default tool for email, calendaring and task management. Finally, I’m accessing a small number of important legacy applications using a VDI client – for me this is limited to just a couple of applications. This change in the application portfolio is hugely significant, but it predominantly impacts my end-user computing experience on my laptop and the other devices that I use.
So what about in the data centre? In my view the situation here is very different. Firstly, and most obviously, the majority of enterprises have a lot of fixed data centre infrastructure that is dedicated to running a portfolio of applications that support and automate various aspects of the enterprise’s operations. These applications typically have an extended lifetime – 10 years is not atypical, and the lifetime of an application can extend much longer in some circumstances.
I think everyone that works in IT can cite at least one example of a large enterprise that is unable to retire an ageing, but critical, application due to factors beyond their control. These issues might include missing source code, lack of skills to complete an application migration or retirement, or the lack of a robust business case that justifies spending on migration activities.
For the data centre applications that reside on x86 platforms, server virtualisation is obviously an extremely effective and widely used mechanism for improving the efficiency of resource utilisation. Where virtualisation is fully deployed it can also offer many other benefits, such as improved system recoverability and DR capabilities, improved service levels and flexibility, and a reduction in the time taken to deliver new services. These additional benefits, which are additive to the basic improvement in resource utilisation that virtualisation offers, are critically important when virtualising the infrastructure supporting critical application workloads with demanding service level requirements. It’s typical for enterprises to start using virtualisation to achieve better utilisation levels for non-critical applications, but to quickly move on to virtualising tier 1 applications and delivering these incremental benefits.
What about a situation where an enterprise wants to move these workloads to a cloud provider? This is where the specific capabilities that the provider can offer become very important, and the stratification effect comes into play in the sense that there are two distinct classes of workload. You could categorise these in a couple of different ways. You might look at them as workloads that matter, or are in some way critical, and workloads that don’t matter and aren’t in any way critical to the organisation that they support. An alternative view might be to think about them as applications that have the ‘designed for failure’ characteristic that I mentioned at the beginning of this post, and those that are much less tolerant of failures within the infrastructure and platform services stack.
I’d suggest that today, most of the computing and storage resources within a typical enterprise are deployed to support applications that are important to the organisation and don’t have the kind of failure-tolerant characteristics that lend themselves to deployment on infrastructure with less predictable performance and availability – infrastructure that is expected to fail frequently. This means that today most workloads being moved to service provider infrastructure need to run on enterprise-quality infrastructure, which is why VMware vSphere, and the associated management stack that sits around it to provide high availability, disaster recovery, performance management and integration with other enterprise platform features, is the best option, for now at least. The lower cost options that don’t have these enterprise features have their place, but that place isn’t taking an enterprise’s current tier 1 applications and running them with a service provider.