It’s about services… right?

October 20, 2011

A lot of the initial focus in cloud has been on environment deployment. Specifically, it seems like we spend quite a bit of time talking about how cloud building blocks such as virtualization, automation, and standardization will lead to the kind of nimbleness and efficiency previously very hard to attain. While these are all valid and very important points, I think it’s time we push the conversation forward to the elements of cloud from which users should expect to derive value — especially in the PaaS arena.

I think we can all agree that users leveraging PaaS solutions will benefit from the standard qualities we have associated with cloud for the past few years. In other words, there is certainly nothing about PaaS that jeopardizes rapid deployments, better asset utilization, improved consistency in deployed environments, etc. It is important to recognize, though, that the value of PaaS goes well beyond these table-stakes benefits. PaaS has the potential to be transformative in nature, and much of that potential comes down to application services.

Even though application services are a core part of the value proposition of PaaS, they are not talked about much. So what are they? Well, PaaS implies that the platform provides some set of embedded services to your applications. Applications may exploit those services programmatically, or those services may be a crucial element in ensuring the quality of service requirements expressed by an application. As an example, a PaaS solution may provide an embedded cache service that your application can use to store important contextual state. Another example would be a service that scales your application in and out to meet demand. In any case, I contend that the application services provided by a given PaaS solution ultimately determine its long-term viability and potential value to users.
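
To make the cache example concrete, here is a minimal sketch in Python of how an application might consume a platform-provided cache service. The `PLATFORM_SERVICES` environment variable and its shape are entirely hypothetical; the point is only that the platform injects the connection details, and the application never configures the cache itself.

```python
import json
import os

def get_cache_client():
    """Resolve the platform-provided cache service for this application.

    Assumes the platform injects a (hypothetical) PLATFORM_SERVICES
    environment variable describing each bound service. The application
    never hard-codes hosts, ports, or credentials.
    """
    services = json.loads(os.environ.get("PLATFORM_SERVICES", "{}"))
    cache = services.get("cache")
    if cache is None:
        raise RuntimeError("no cache service bound to this application")
    # A real platform would hand back a live client; returning the
    # resolved endpoint is enough to show what the app receives.
    return f"{cache['host']}:{cache['port']}"
```

An application written this way can move between environments without change; only the injected binding differs.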

Given my belief in the crucial role of application services in a PaaS solution, I also have some thoughts on important considerations in the delivery of such services. For those of you considering PaaS solutions, I would suggest you examine the set of application services delivered by each platform and take into account at least some of these considerations:

Service transparency: The configuration and management of application services provided by the PaaS solution should be completely transparent to the consumer of the service. For example, if I am deploying an application that intends to make use of a platform-provided cache service, I should not have to do anything other than indicate that my application relies on the caching service. The platform should handle configuring the cache service and managing it over time to meet the needs of my application. This is a key facet of moving to a more declarative environment creation and deployment model. After all, it’s not much of a service if consumers have to define and manage it!

Service sharing: As much as possible, the PaaS solution should have the means to share services among multiple applications without sacrificing quality or security. If a platform runs a scaling service that is capable of managing and initiating scale in and scale out events for applications, it would probably be a waste of resources to have to deploy an instance of that service for every application deployment. Instead, a single logical instance of the scaling service should be able to accommodate a reasonably sized set of application deployments. After all, cloud is about using only what you need when you need it.

Service governance: While the set of services provided by a PaaS solution should be as transparent as possible to the consumers (application owners), that does not mean that someone does not need a lens through which to watch these services. Authorized users should be able to manage and monitor service resource consumption, manage service exposure, and meter service consumption amongst other things. Another important consideration in this regard is the ability of the PaaS solution to attribute service usage to different users or groups of users. Without this, you will have a significant hole in your cost allocation or chargeback model.
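
To illustrate the declarative model behind service transparency, here is a sketch of what an application descriptor might look like. The schema is invented for illustration; the application owner states only that a cache is needed, and the platform works out how to provide it.

```python
# A hypothetical declarative deployment descriptor: the application
# owner states *that* services are needed, never *how* to configure
# them. All keys are illustrative, not any real platform's schema.
app_descriptor = {
    "name": "ticket-booking",
    "artifact": "ticket-booking.war",
    "services": [
        {"type": "cache", "purpose": "session-state"},
    ],
}

def required_services(descriptor):
    """The platform's view: what it must provision and manage on the
    consumer's behalf, with no configuration from the consumer."""
    return [service["type"] for service in descriptor["services"]]
```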

While these certainly do not represent everything you should look for in PaaS application services, I do believe they are three of the more important qualities. We are definitely early on in the PaaS delivery and adoption movement, but I believe we can already see the importance of application services for these solutions. Many platforms are already offering a basic set of services, and over time I would expect a normalization of core application services to appear among the varying PaaS solutions. I will be watching, and I am excited to see and hopefully be a part of the coming innovation!

Unwrapping the self-service cloud

October 4, 2011

If you get a chance to talk with directors or C-level executives about the benefits they expect to derive from adopting cloud computing techniques, expect to hear the terms ‘flexibility’, ‘agility’, and ‘cost reduction’ come up quite frequently. While those are valid expectations, those of us a little closer to the trenches know that it takes a number of different technical capabilities to actually deliver those benefits. Oft-talked about capabilities such as elasticity, rapid provisioning, and configuration automation all come to mind. However, there’s one more capability that we tend to talk about a little less, but nevertheless is equally as important. That capability is self-service access.

It seems like lately, more and more of the clients I have been visiting with are coming to terms with what self-service means to them and what it will take to enable it. Here is the unvarnished truth about effective self-service in the enterprise: It is far from easy! Numerous obstacles stand between the way traditional IT works and a truly self-service organization, and they are not the kind of obstacles one can blithely ignore.

A single post could never explore every single challenge, but I would be remiss not to mention the big ones, starting with the collective fear of losing control. For years and years, IT organizations have arranged themselves in teams that are more or less domain specific. To put it a little more bluntly, if you look into any typical shop, you will probably find an infrastructure team, systems software team, middleware team, application team, and more. In most cases, well-defined processes (e.g. ticket requests) specify how these teams interact. The interaction is typically limited to the boundary of these interfaces, and each team more or less independently handles the domain over which it presides. Can you see how self-service may be an affront to such a structure?!?

Remember, when we talk about the type of self-service often bandied about in cloud computing, we are not talking about your garden variety self-service access. Self-service in the cloud normally means that a single user is provisioning (in a loose sense at least) everything they need to run a particular workload – from the hardware to the application. To say this causes some amount of organizational strife is a bit of an understatement.

Of course, traditional inter-team relationships are not the only barriers standing in the way of self-service models. On a recent trip, I listened to a client explain their intentions to move their development and test operations to a cloud-based environment. In this environment, developers and testers who needed application environments would directly provision them from a standard offering catalog. In the discussion it soon emerged there were many different sub-teams within the development and test teams, and that made for special considerations in the sharing of resources. For instance, some teams required more resources than others. Some teams required access to systems outside of the cloud. Still other teams needed to dynamically expand their consumption of resources even at the expense of the other teams being able to consume those resources. In short, there was a complex web of resource consumption needs among the teams.

Now, you may ask, ‘What does this have to do with self-service access?’ Well, there is absolutely no way you can expose these complex resource relationships to end users (the developers and testers). How far do you think this company would get with self-service deployments if the deployer had to figure out what resources (hardware, storage, networking, and software) they could safely use before doing anything? Right, not very far! So the trick is, by the time a developer or tester logs in, the decision about the resources to which they are entitled must have already been made. This implies a complex system of rules that considers the entitlements of the current user in relation to all other teams in the organization. And don’t forget, these entitlements could very well change over time.
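
One way to picture such a rule system: entitlements are resolved ahead of time from team-level rules, so that by login the deployer sees only what they may safely use. A toy sketch, with invented rule shapes:

```python
def resolve_entitlement(user_team, teams, total_capacity):
    """Pre-compute what a user may consume before they ever log in.

    `teams` maps team name -> {"guaranteed": units, "can_borrow": bool}.
    Each team keeps its guaranteed share; teams allowed to borrow may
    also dip into whatever the pool leaves unreserved. These rules are
    illustrative only; real entitlement models are far richer.
    """
    reserved = sum(team["guaranteed"] for team in teams.values())
    spare = max(total_capacity - reserved, 0)
    team = teams[user_team]
    ceiling = team["guaranteed"] + (spare if team["can_borrow"] else 0)
    return {"guaranteed": team["guaranteed"], "ceiling": ceiling}
```

A developer on a borrowing team simply sees a larger ceiling; the web of inter-team rules stays hidden behind the self-service interface.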

This may sound like I am constructing a false barrier to self-service, but I can assure you these sorts of resource sharing requirements are not at all unique. The basic problem is not one that is easy to solve, and it is even harder to do so in a way that is somewhat consumable to an administrative user in charge of the whole thing. Yet, without reasonably evolved resource sharing capabilities, there is really no way to enable self-service access to multiple different classes of users across a shared resource pool.

Given the two (but not only) significant self-service adoption obstacles of cultural churn and effective resource sharing, what are cloud providers to do? First, I believe it is important for cloud providers to acknowledge the typical division of responsibilities in an organization. When designing a solution that will harness resources that fall across traditionally isolated domains, it is important that the design accommodates different types of users. Accommodation means different users act on different resources (and those resources only), and it means that users are presented with a familiar context.

On the resource sharing side, every different resource that makes up the cloud must have associated access rights tied directly to users or groups.  This is certainly not a novel concept, but you may be surprised at how often a particular solution overlooks or under-delivers on this point. It is not enough to simply say that a user has access to a particular resource. One must be able to partition a resource and assign those logical ‘slices’ out to different users or groups. To evolve the concept further, those slices should be able to dynamically grow and shrink based on defined conditions or rules.
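
A sketch of what dynamic grow-and-shrink of those logical slices might look like, using a deliberately naive rule (shrink first, then grant growth from the freed capacity, in name order):

```python
def rebalance(slices, demand):
    """Grow or shrink logical slices of a single resource pool.

    `slices` maps group -> current allocation; `demand` maps group ->
    requested allocation. Idle slices shrink first, then other slices
    grow from the freed capacity, never exceeding the pool total.
    A toy rule engine for illustration, not any product's algorithm.
    """
    total = sum(slices.values())
    new = dict(slices)
    # Release capacity from groups asking for less than they hold.
    for group, want in demand.items():
        if want < new[group]:
            new[group] = want
    free = total - sum(new.values())
    # Grant growth, in group-name order, until capacity runs out.
    for group in sorted(demand):
        if demand[group] > new[group]:
            grant = min(demand[group] - new[group], free)
            new[group] += grant
            free -= grant
    return new
```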

There is little doubt that self-service is a critical aspect of cloud, and a key capability in delivering on its promise. Having said that, I believe there is a lot of room for maturity in this specific area, and providers will have to address the challenges I mentioned above and a host of others. Taking all of this into consideration, I can confidently say that we will see quite a bit of focus on this as we move forward in cloud. What do you think?

The value of workload-aware management

September 23, 2011

A couple of weeks ago, I dropped by the Intel Developer Forum to present a session and listen in on a few others. As always in these types of shows, I learned quite a bit. Most strikingly though, I was reminded of something that is probably quite obvious to many of you: Consumer interest in cloud computing will not be letting up any time soon.

Based on this, and some of the other things I heard at the show, I decided to catch up with fellow IBMer Marc Haberkorn. Marc is an IBM Product Manager and is responsible for IBM Workload Deployer amongst other things. I asked him about IBM Workload Deployer, the competition, and cloud in general. Check out what Marc had to say below:

Me: IBM Workload Deployer is one of a growing wave of cloud management solutions. How do you differentiate the focus and business value of it versus the myriad of other solutions out there?

Marc: To sum it up, we offer a combination of depth and breadth.  IWD delivers both workload-aware management and general purpose management.  Workload-aware management differentiates IWD from its competition, as it can deliver more value for the set of products for which it has context.  There is a set of actions that workload-aware management tools can do that is normally left to the user by general purpose management tools.  This list includes configuring a middleware server to know its hostname/IP address, configuring multiple middleware servers to know of one another, arranging clusters, applying maintenance, and handling elasticity.  By handling more of these activities in the automated flow, there are fewer chances for manual errors and inconsistencies to enter a managed environment.

That said, without infinite resource or time, it’s impossible to deliver this context-aware management for everything under the sun.  As such, in order to allow IWD to deliver differentiated value AND allow it to handle a customer’s entire environment, we offer a mix of workload-aware management and general purpose management.

Me: VMware is a good example of a company active in the cloud space, and they seem to keep a consistent pace of new product delivery. What do you think of their product development focus?

Marc: I think VMware has built a very compelling set of capability in the virtualization space.  I think the main difference between VMware’s suite and IBM Workload Deployer is the perspective from which the environments are managed.  VMware puts the administrator in the position of thinking about infrastructure from the ground up.  The administrator is thinking about virtual images, hypervisors, and scripts.  In IBM Workload Deployer, we think about things from the perspective of the app, because that’s ultimately what the business cares about.  By providing a declarative model through which an application can be instantiated and managed, we feel we deliver a deeper value proposition to clients, through workload-aware management.

Me: The ‘one tool to do it all’ approach is a popular, if hard to achieve, goal. What is your advice to users when it comes to choosing between breadth and depth for cloud management solutions?

Marc: The advantages of a “one tool to do it all” are many: less integration, more uniformity, less complexity.  As such, customers will always prefer a single tool when possible.  This is why IBM Workload Deployer has focused on not only providing differentiated, deeper value for common use cases but also providing a way to handle the “everything else.”  As such, my advice to users is not to choose between breadth and depth – use IBM Workload Deployer which offers both.

Me: To close, I’m curious to know where you think we are heading in the cloud market. What do you think users will be most readily adopting over the next one to two years? Where does the cloud industry need the most innovation?

Marc: I think most users are currently looking at the broad picture of cloud computing, and have been adopting primarily in the private cloud realm.  There are several reasons for this.  One reason is that many customers have a large set of hardware resources which amount to sunk cost that needs to be leveraged.  Another reason is around data security concerns in off-premises clouds, and still another reason is around the human factor of comfort, which has taken time to develop around off-premises cloud models.  However, businesses have become increasingly comfortable with various sources of outsourcing in recent years, especially in mission critical areas involving very sensitive data.  Just look at IBM’s Strategic Outsourcing business, which handles entire IT operations for many large businesses.  I think that trend will (and really, has already begun to) continue in the area of cloud computing, and will lead to more public and ultimately hybrid cloud computing adoption.  In order to get to hybrid cloud computing, I see much of the focus and innovation being associated with data security, workload portability (across private and public, in a seamless fashion), and license transferability between private and public.  When this space reaches fruition, clients will be able to enjoy true elastic economics in a computing model that allows a mixture of owning and renting compute resources and software licenses.

Me: Thanks Marc!

What is an application?

September 8, 2011

What is an application? Well, like many things the answer varies widely based on who you ask. Consider the case of an online ticket booking application. If you ask the users of the application what the application is, then it’s a web page or a series of web pages. If you ask the application development team, they will say it’s a collection of source code, configuration files, static content, etc. The infrastructure team will tell you the application is the application server, web server, and operating system. If you go further and ask the hardware folks what they think, they will likely say the application is the virtualization platform running on their server. As you see, a question as seemingly innocuous as ‘What is an application?’ is actually anything but.

Now, you may say, ‘Who cares?!? Is it important to agree on what an application is?’ In general, I would say no. In my mind, it is not all that important that different facets of an organization agree on what they consider an application to be. As long as they can communicate their viewpoint to others, everything should proceed reasonably smoothly. However, in the world of PaaS, I believe that an agreement on what the term application means is of paramount importance.

I have discussed the many different approaches PaaS providers can take concerning the delivery of their product. I think the particular approach one chooses is a critical choice, but a look at the bigger picture reveals a larger set of challenges. The move to a hybrid cloud approach is a shift that seems inevitable. This shift to hybrid clouds will encounter many issues, and the shift to hybrid PaaS environments will not be exempt. From a technical standpoint, one of the key obstacles that hybrid cloud computing needs to confront is the need for meaningful commonality of management. We all know that the key unit of management depends on the type of cloud (IaaS vs. PaaS vs. SaaS), and in the context of PaaS the application is the focal point.

This brings us back to my original point of defining an application. How can a hybrid cloud platform for PaaS provide meaningful commonality of management if there is no general consensus on what constitutes an application? The answer is simple: it can’t.

Let’s consider a simple example to put some concreteness around this. I have a hybrid PaaS platform, and I log into the management console to manage an application that is potentially running in multiple cloud environments. I decide that I want to attach a policy to that application that describes scaling behavior – perhaps based on response time. There are two things that can happen at that point. Either the hybrid cloud platform takes this directive from me and translates it for each and every cloud provider, or it sends this information in a manner every provider can interpret and enforce.
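
The latter option amounts to a provider-neutral policy document that every cloud can interpret for itself. A sketch, with invented field names:

```python
# One policy document emitted by the hybrid platform; each underlying
# cloud interprets and enforces it. Field names are illustrative.
scaling_policy = {
    "metric": "response_time_ms",
    "scale_out_above": 500,
    "scale_in_below": 200,
    "min_instances": 2,
    "max_instances": 10,
}

def desired_action(policy, observed_ms, instances):
    """How any compliant provider would evaluate the shared policy."""
    if observed_ms > policy["scale_out_above"] and instances < policy["max_instances"]:
        return "scale_out"
    if observed_ms < policy["scale_in_below"] and instances > policy["min_instances"]:
        return "scale_in"
    return "hold"
```

Two different providers evaluating the same document reach the same decision, which is exactly the commonality of management a hybrid platform needs.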

I think, or hope, that we would all agree the latter option is more tenable. If a hybrid PaaS tool can only provide meaningful management when it knows lots of details about each underlying cloud platform, well, we won’t have many hybrid PaaS tools. That said, there is no way for the latter option to occur in the absence of some sort of standard. That includes a standard (I don’t mean standard in the literal IT sense necessarily) of what an application entails. After all, how could you enforce a scaling policy on an application if you cannot determine what an application is?

Considering this, what would you say an application really is? Is it a grouping of HTTP URIs? This seems to be the easiest way to describe an application but would be lacking in many scenarios (EJB applications to name but one). I wager that if you really sit down and think about a common way to describe an application you will end up debating with yourself! I think the issue of defining an application is critical toward the ongoing success of PaaS, and I also think it serves as a reality check for hybrid clouds in general. While there is definitely a larger shift toward the hybrid approach, we clearly have a long way to go.

Revisiting PaaS delivery models

August 23, 2011

Quite some time back, I explored the idea of different delivery models for PaaS solutions. To sum it up concisely, I believe that PaaS providers have to make a decision when they deliver their solution. They have to decide the degree to which their solution supplies inherent knowledge of a set of platforms versus the flexibility that solution delivers. With that in mind, I proposed that we can generally characterize PaaS solutions in the following way:

1) Platform and service depth with little breadth

2) Platform and service breadth with little depth

3) Platform and service depth with breadth

When examining option one or option two, many consumers will feel that they are making a compromise, and not without reason. Option one delivers a PaaS solution that provides significant inherent knowledge about a particular platform or group of platforms at the expense of being well-rounded. This means that users can set up some cloud platforms very easily and with little effort, but they cannot address all of the platforms necessary in their organization. Option two is just the opposite. It does not supply much platform-specific knowledge at all, but rather it is flexible enough to address a large swathe of runtimes. Of course, this means users end up being the supplier of the platform-specific knowledge, so adoption and implementation typically take significantly longer.

Given all of this, in my original post I was of the opinion that option three is the most appealing. I still believe that today. PaaS solutions are plenty disruptive on the cultural front, so minimizing technical hurdles can go a long way towards spurring on adoption. If users can get a PaaS solution that lets them address a significant subset of their organization’s platforms with minimal technical work, coupled with the ability to address other platforms with some customization work, I think they will naturally gravitate that way.

Unfortunately, the reality is that delivering such a solution is not an easy thing to do. From a provider standpoint, it is easy to fall into a trap whereby you unwittingly build a solution rooted in the context of just a select few platforms. This could surface in management interfaces, deployment models, and user interface terminology, just to name a few places. Many times this happens when you build the PaaS solution to address a specific subset of platforms, and then work backwards to make it an open system. This just does not work. As a provider, you have to build the open platform first, and then provide the content that customizes the system for the subset of platforms you choose.

This is where I really think IBM Workload Deployer gets it right. From day one, the subsystem that deals with virtual application patterns has been based on an open architecture. Every virtual application pattern is a collection of one or more plugins that provide the knowledge of how to install, deploy, and manage application environments in a cloud. IBM provides virtual application patterns (built on provided plugins) right out of the box for a selected set of runtime environments. Users can create virtual application patterns for other platforms they want to deploy and manage using IBM Workload Deployer by contributing the foundational elements — plugins.
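
In outline, a plugin contract of this sort might look like the following. This is a hypothetical interface sketched for illustration, not the actual IBM Workload Deployer plugin API:

```python
class PatternPlugin:
    """Hypothetical contract: each plugin carries the knowledge of how
    to install, deploy, and manage one platform in the cloud."""
    platform = "generic"

    def install(self, node):
        raise NotImplementedError

    def deploy(self, app, node):
        raise NotImplementedError

    def manage(self, deployment, event):
        raise NotImplementedError

class CachePlugin(PatternPlugin):
    """A user-contributed plugin for a platform the vendor does not ship."""
    platform = "cache"

    def install(self, node):
        return f"installed cache runtime on {node}"

    def deploy(self, app, node):
        return f"bound {app} to cache on {node}"

    def manage(self, deployment, event):
        return f"{deployment}: handled {event}"

def build_pattern(plugins):
    """A virtual application pattern as a collection of plugins."""
    return {plugin.platform: plugin for plugin in plugins}
```

The open part is the contract; the depth comes from whoever supplies the plugins, whether vendor or user.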

The ability to create and load custom plugins into IBM Workload Deployer has been a feature of the solution since its announcement earlier in the year. So, why am I just bringing this up? Well, last week IBM published the IBM Workload Plugin Development Kit that provides tooling and documentation that guide users through the plugin creation process. The tools and guidance included in this kit provide the framework within which users will be able to create any kind of plugin to support any type of virtual application pattern they want to create. This kind of tooling makes it feasible to have a PaaS solution that provides both depth (via the patterns and plugins provided by IBM) and breadth (via the patterns and plugins provided by users).

My advice to you is that regardless of the solution(s) you are looking at, challenge your cloud providers in terms of the value they deliver and the flexibility they enable. Look for those that give you out-of-the-box value without handcuffing you to a particular set of platforms. Time to value does not have to come at the complete expense of flexibility!

An eye on the competition

August 9, 2011

When it comes to IBM Workload Deployer, I have no illusions regarding the tenacity of our competitors. They are out there, and they are constantly on the attack. Their dubious claims aside, I know this because I still get asked quite frequently to explain the benefits of IBM Workload Deployer versus some other general purpose cloud provisioning and management solution. So, while I have done that many times in various forums, I figured it was time to yet again address this question.

When comparing IBM Workload Deployer to the other available solutions, I honestly feel comfortable saying we have no direct competition. I know you believe me to be biased, and rightly so, but let me explain why I think the competition is much more perception than reality. To do this, I want to focus on the patterns-based approach that IBM Workload Deployer takes to cloud provisioning and management.

Let’s start with virtual system patterns in IBM Workload Deployer. Virtual system patterns allow you to build and deploy completely configured and integrated middleware environments as a single unit. These patterns build on top of our special IBM Hypervisor Edition images that bottle up the installation and quite a bit of the configuration of the underlying middleware products. Further, when using virtual system patterns, IBM Workload Deployer manages and automates the orchestration of the integration tasks that need to happen to set up a meaningful middleware environment. For instance, when deploying WebSphere Application Server you do not need to do anything on your end to deploy a clustered, highly available environment. When deploying WebSphere Process Server in this manner, you do not need to take any administrative actions to produce a golden topology. You just deploy the pattern, and the images, patterns, and appliance take care of the rest. Of course, you can add your own customizations and tweaks in the pattern, but we take care of the common administrative actions that would otherwise require your care.

I am not sure of a better way to say it, so I will be blunt: When deploying products delivered in IBM Hypervisor Edition form, no other solution compares to the virtual system pattern capability offered by IBM Workload Deployer. It is not even close. Can you provision products like WebSphere Application Server or WebSphere Portal using other cloud provisioning tools? Sure, but you should be aware that you will be writing and maintaining your own installation, configuration, and integration scripts. It is also likely that you will end up developing a custom interface through which deployers request your services (something not necessary when using the rich IBM Workload Deployer UI). All of this takes time, resources, and money. More importantly, this is not differentiating work and distracts from the real end goal: serving up applications. IBM Workload Deployer can deliver this operational capability right out of the box, and it can do so in a way that costs less than custom developed and maintained solutions.

When considering IBM Workload Deployer versus the competition, it is also important to consider the new virtual application pattern capability delivered in version 3.0. The virtual application pattern capability is a testament to IBM’s thought leadership in, and commitment to cloud computing for middleware application environments. Virtual application patterns take a bold step forward in raising the level of abstraction beyond the middleware environment and up to the most important resource in enterprise environments: the application. With a virtual application pattern, you simply provide your application and specify both functional and non-functional requirements for that application. When ready, you deploy that pattern, and IBM Workload Deployer sets up the necessary middleware infrastructure and deploys the provided application. Moreover, the appliance will monitor and autonomically manage the environment (i.e. scale it up and down) based on the policies you specify. Quite simply, this is a deployment and management capability our competition cannot match.

There is more to consider than just patterns though. The appliance makes it really simple to apply maintenance and upgrades to environments running in your cloud. It can autonomically manage your deployed environments (through policies in virtual application patterns and the Intelligent Management Pack for virtual system patterns), and it effectively abstracts the underlying infrastructure of your cloud environment. This abstraction is the reason IBM Workload Deployer can deploy your environments to PowerVM, zVM, and VMware environments. It also makes it easy to deploy the same environment to multiple different underlying platforms, thus accommodating typical platform changes that happen as an application moves from development to production. The best part of all is that the deployer’s experience is the same regardless of the underlying infrastructure since the appliance hides any platform idiosyncrasies.

The bottom line is that the appliance is purpose built to deploy and manage middleware and middleware application environments in a cloud, and as such, delivers immense out-of-the-box and ongoing value in this context. I should also point out that the design of the appliance acknowledges its purposeful nature. The CLI and REST API interfaces allow you to integrate the appliance into the operations of those general purpose provisioning solutions. In this way, IBM Workload Deployer acts as a middleware accelerator for your cloud computing efforts. This means that if you do have a general purpose solution, IBM Workload Deployer can still provide considerable value and let you avoid developing a considerable subsystem dedicated to deployment and management of middleware in the cloud. We believe in this type of integration, and have in fact built it into our own IBM solutions.
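
As a sketch of that integration style, a general purpose orchestrator could drive the appliance over HTTP. The endpoint path and payload below are invented for illustration; consult the product’s REST API documentation for the real resources:

```python
import json
from urllib.request import Request

def deployment_request(base_url, pattern_name, environment):
    """Build (but do not send) a deployment call that an external
    orchestrator could issue against the appliance. The path and
    payload shape are hypothetical, not the documented REST API.
    """
    body = json.dumps({"pattern": pattern_name,
                       "environment": environment}).encode("utf-8")
    return Request(
        url=f"{base_url}/resources/deployments",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = deployment_request("https://deployer.example.com", "web-cluster", "test")
```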

There is certainly more to IBM Workload Deployer and its differentiating value, but I think the above is a good start. When it comes down to creating clouds focused on middleware platforms and middleware applications, nothing stacks up to IBM Workload Deployer.

How are you building your cloud?

July 30, 2011

“What are the use cases you want to pursue with your enterprise cloud project?” The question seems innocuous enough, but I can tell you from first-hand experience that it is anything but. If you asked five people from different teams within the same company to each give you three use cases, you would end up with 15 different scenarios. Such is the case in the current enterprise landscape where points of view on cloud vary about as much as you can imagine. Each organizational fiefdom has its own set of pain points, challenges, and initiatives to tackle, and each one of those teams has an idea of how the cloud can help.

Being on the service provider side, listening to users roll off a burgeoning list of usage scenarios is equal parts exciting and disconcerting. I find myself caught somewhere between, ‘Wow, that sounds cool and fun!’, and ‘How am I going to show that in an easy, yet valuable manner?’ It is not an easy task. Lest you think I am out looking for pity, I sympathize even more so with the company. I especially feel for the person on the consumer side who gets the task of leading the ‘enterprise cloud initiative.’ Talk about a thankless, exhausting, and challenging role. On a given day, the only team that is rallying behind you is the one who thinks you are about to implement the product they are championing!

Being a part of many of these endeavors, I like to think that I have picked up a lesson or two applicable to most cloud implementation projects. From the service consumer side, I feel compelled to warn about what seems to be transforming into an anti-pattern in cloud adoption. While it is fine, and in fact advisable, to have an enterprise cloud strategy, it is not usually so wise to attempt to fulfill that strategy with a single tool. This comes back to a question I have posed before: Do users need a single tool or a single pane of glass? Well, judging from the projects I have seen so far, users need a single pane of glass far more than they need a single tool.

The problem with adopting a single tool that purports to deliver on every one of the use cases for an organization’s cloud strategy is that it implies quite a bit of work on the user’s part. It should be pretty obvious that as a tool’s flexibility increases, its complexity usually does as well. From a cloud provisioning and management standpoint, this usually means that you will be doing a lot of scripting for installation, configuration, integration, orchestration, and management. It’s not that these tools do not have their place, but my main caution to potential users is to carefully consider just how many use cases they can realistically address with this approach given their timeline. I have seen a few projects go way over their target dates due to severe underestimation of the effort required for this kind of work. Oh, and don’t forget that all those scripting, orchestration, and management assets require care and feeding over time, so it’s not just the initial effort to consider.

Given the potential pitfalls of a single tool approach, I usually advise users I work with to consider a careful balance of general purpose and purposeful solutions as a way to fulfill their enterprise cloud strategy. While purposeful solutions do not deliver the flexibility of their general purpose brethren, they typically provide much more out-of-the-box value. The design and implementation of these solutions focuses on handling a subset of use cases really well, thus dramatically reducing the time and cost of addressing those particular needs. Many of the assets users would otherwise build and maintain themselves come embedded in the solution.

There is one big caveat to a blended approach. The integration between all of the different solutions must be easy enough that the wiring work does not require as much effort as the single tool approach would. This is mainly a challenge to the service provider side of the house. I believe providers, especially those like my employer and others that address multiple different layers of the cloud, are starting to recognize this need. It is one thing to be able to answer an enterprise’s cloud use cases with a set of products. It is quite another to enable an enterprise cloud with a single, well-integrated, consumable solution!

The growing relevance of In-Memory Data Grids

July 12, 2011

The growing consumer affinity for cloud is spurring various new technological trends. It’s not all new technology mind you, but there seems to be a growing appetite for anything that can remotely be put into the context of cloud computing. In some ways, cloud has been good for bringing existing technologies back to the forefront and spurring needed innovation in those areas. Besides virtualization technologies, I believe one of the best examples of this is in-memory data grid technology.

Despite what some may claim, in-memory data grids are not a by-product of the cloud computing revolution. The truth is they existed for a while before cloud, but to be fair, IMDGs probably owe a tip of the hat to the cloud computing craze for bringing them back into the spotlight. Increasingly, we are seeing highly scalable, temporal, and elastic workloads making their way into the cloud. These application characteristics often align nicely with the use of IMDGs, so we see renewed interest and quite a bit of innovative activity around these solutions.

I have been spending a lot of time lately working with users that are in the process of defining an evolution to their current enterprise middleware environments. In this work, cloud and IMDG technology has been front and center. My last few posts have been dedicated to talking about some of the cloud trends I have seen during the course of this work, but today I want to focus on IMDG solutions. Specifically, I want to share with you what I’m seeing in terms of how users are currently looking to use this technology, and provide my own thoughts about possible usage scenarios going forward. Let’s start with the common, currently targeted usage scenarios:

1) IMDG as a database buffer: This is perhaps the single most common use case. Here users look to front traditional data stores with a distributed IMDG. This can serve to increase the performance of the application and thereby improve end-user experience by offering faster data access. It can also help to reduce the pressure on the backend database by making the IMDG instance the system of record and periodically batching changes to send off to the database. There are many different techniques for buffering an existing database with an IMDG, but the motivations (increase performance, decrease database reliance, decrease costs) are usually much the same.
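Though every IMDG product exposes this differently, the write-behind buffering idea described above can be sketched generically. The following is a hypothetical, single-process illustration in Python: a plain dict stands in for the distributed grid, and the caller-supplied `flush_to_database` function stands in for the real backend write.

```python
class WriteBehindCache:
    """Sketch of an IMDG fronting a database as the system of record.

    Reads and writes hit memory; changes accumulate and are flushed to
    the backend in batches, reducing pressure on the database.
    """

    def __init__(self, flush_to_database, batch_size=100):
        self._store = {}          # stand-in for the distributed grid
        self._dirty = set()       # keys changed since the last flush
        self._flush_fn = flush_to_database
        self._batch_size = batch_size

    def get(self, key):
        # Reads are served entirely from memory.
        return self._store.get(key)

    def put(self, key, value):
        # Writes land in the grid; the database is updated later, in bulk.
        self._store[key] = value
        self._dirty.add(key)
        if len(self._dirty) >= self._batch_size:
            self.flush()

    def flush(self):
        # Send all pending changes to the database in one batch.
        batch = {k: self._store[k] for k in self._dirty}
        self._flush_fn(batch)
        self._dirty.clear()
```

In a real IMDG this batching is usually handled by a configurable write-behind loader, often triggered by a time interval as well as a change count.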

2) IMDG as a simple cache: You may hear this referred to as the side cache scenario as well. This usage pattern is a little less intrusive to existing application architecture than #1 above. Here, applications receive an incoming request that requires data, and they first check the simple cache instance to see if the data exists there. If not, the application proceeds to retrieve the data normally, typically from a relational database system, and then inserts this data into the simple cache. Obviously, when the application finds data in the simple cache, you shorten the application’s data access path and thus decrease overall response time. This is an especially prevalent pattern for storing conversational state (think HTTP sessions) for applications.
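The lookup sequence described above is often called cache-aside. A minimal, product-neutral sketch in Python, where the `cache` dict stands in for the grid and `load_from_database` is a hypothetical loader supplied by the application:

```python
def get_with_side_cache(cache, key, load_from_database):
    """Cache-aside lookup: check the cache first, fall back to the
    system of record on a miss, then populate the cache so the next
    request for the same key takes the short path."""
    value = cache.get(key)                # 1. check the side cache
    if value is None:
        value = load_from_database(key)   # 2. miss: hit the database
        cache[key] = value                # 3. populate for next time
    return value
```

Note that with this pattern the database remains the system of record; the grid only absorbs repeat reads, which is what makes it less intrusive than the buffering approach in #1.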

3) IMDG as a service request cache: This is really a variant of #2 above, but I call it out separately because users typically implement it at a different tier in the application architecture. Instead of updating the application to be aware of some IMDG for a simple cache, users insert this awareness further up the stream, often at the ESB tier. Requests come into the application environment, but before they hit the application, some component dissects the request to determine if it can fulfill it from the cache. If so, the entire path becomes significantly shorter, and if not, the mediation component inserts the response into the cache on the way back out. Again, it is not very different from #2, but in my opinion it is worth distinguishing because it occurs at a completely different tier in the overall architecture.

Those are three common patterns that I run into quite frequently today with IMDG technology. There are a couple of more usage patterns that I have seen once or twice, but are not yet prevalent. ‘Yet’ is the key word here as I tend to think we will be seeing more of these use cases in the near future:

1) IMDG as an event filter: To be clear here, I am not suggesting that IMDG instances would morph into event processors. There are completely separate solutions that do that very well. What I am talking about is using the distributed logic processing capabilities shipped with most IMDG solutions to quickly determine if a given event needs further processing. In this way, events that occur in the enterprise can flow through an easily scalable IMDG filter, and only those that require it are sent on to an event processor for more expensive computation. In the increasingly event-driven architectures emerging today, I feel this could become a popular pattern.
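As a hypothetical sketch of that filtering idea, the snippet below uses nested lists to stand in for grid partitions and a caller-supplied predicate to play the role of the cheap, grid-side screening logic; only events that pass are forwarded to the (expensive) processor:

```python
def filter_events(partitions, needs_processing, forward):
    """Each grid partition screens its own slice of the event stream
    using cheap predicate logic; only interesting events are forwarded
    to the event processor for expensive computation."""
    for partition in partitions:          # in a real grid, runs in parallel
        for event in partition:
            if needs_processing(event):   # cheap, local check
                forward(event)            # rare, expensive path
```

The scalability comes from the fact that the predicate runs where the data already lives, so adding partitions adds filtering capacity without touching the event processor.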

2) IMDG as a map/reduce engine: Many IMDG solutions deliver the capability to distribute logic, as mentioned in #1 above, and many offer a map/reduce model as a means to do this. As skills on map/reduce programming start to permeate enterprise IT shops, I think users will see compelling use cases for IMDG built primarily on this methodology. The ability to quickly distribute logic, calculate results, and further refine those results all out in the grid is a powerful tool to have in your arsenal. It can in fact be quite liberating to leverage the processing power of a distributed grid to solve important yet complex business problems.
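The map/reduce flow can be illustrated generically: the map step computes a partial result against each partition's local data, and the reduce step combines the partials into the final answer. A sketch, with plain lists standing in for grid partitions (real IMDG APIs vary, and the map step would execute in parallel out in the grid):

```python
from functools import reduce

def grid_map_reduce(partitions, map_fn, reduce_fn):
    """Map: each partition produces a partial result from its local data.
    Reduce: partial results are combined into the final answer."""
    partials = [map_fn(data) for data in partitions]
    return reduce(reduce_fn, partials)
```

For example, summing values held across three partitions means each partition sums locally and only three small partial sums cross the wire, rather than all of the raw data.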

Now, I have no idea if the two use cases above will in fact pan out as mainstream in the near future. I see massive potential there because of their alignment with emerging architectures and their ability to deliver real business value.  However, I can also imagine IMDG technology taking off in entirely different directions. Whatever the near future holds though, I think one thing is certain: We are just beginning to explore the art of the possible when it comes to IMDG technology.

The convergence of IaaS and PaaS

July 6, 2011

I would venture a guess that many cloud service providers are happy with cloud conversations going on in enterprises today. I say this because, at least in my experience, enterprises are truly starting to seek out and embrace the idea of PaaS. Many times these enterprises have adopted or are adopting an IaaS approach, and they are looking to push the cloud up the stack. They want to address their application platforms and applications. This is refreshing and exciting, but also extremely challenging. Why, you ask? It is challenging because this is leading to a convergence of IaaS and PaaS in the enterprise that will test both providers and consumers on cultural, procedural, and technical fronts.

Empirical data, and common sense, seem to suggest that many enterprises start their cloud journey by evaluating and possibly adopting IaaS solutions. The primary units of interest in this phase are servers, storage, network components, operating systems, and other parts of the base IT infrastructure. Fittingly, the target audience is usually the various infrastructure teams in the enterprise. If you get in a room with these teams and ask them what their cloud service or application is, they will likely tell you that it is a provisioned operating system.

This is a markedly different view than the target audience of PaaS discussions.  Middleware and application teams look at the cloud in terms of provisioning applications and application platforms. There is an implicit assumption that the base resources will be there. After all, that’s no different than the assumption they make in the non-cloud world. If you get these teams in a room and ask them what their cloud service or application is, they will tell you it is the application platform and application that runs on that platform.

These are completely different points of view on the benefits and expectations of cloud. Infrastructure teams look at IaaS and see that it solves many of their problems. Middleware teams look at IaaS and see the benefits of getting a server really fast, but also realize they still have to do a lot of work on top of that server, thus they turn to PaaS. Quickly, enterprises become aware that they need both and start to explore how they converge their IaaS and PaaS work.

There is no way to sugarcoat this: Adopting a converged/integrated approach to IaaS and PaaS will not be easy. As I said in the beginning there are numerous different types of challenges you will encounter. Having said that, it is far from impossible, and I have worked with numerous users that are taking an integrated approach. While there is no silver bullet, I would like to share some observations for those of you who may be pursuing a cohesive IaaS/PaaS strategy:

1) Be wary of the single tool myth:  You may hear from different providers that they have a single tool that can deliver both IaaS and PaaS. I am not deeply knowledgeable of every tool out there, but I would caution you to be very skeptical of any such claim. While the tool may be able to do both, you should carefully judge the level of effort to achieve this. It is likely when you hear this that you are getting a tool primarily oriented towards IaaS, and you achieve PaaS through heavy doses of custom scripting. Further, the expectations for user experience when interacting with IaaS and PaaS tools are significantly different. Avoid the temptation of having a single tool if it is not going to capably address both IaaS and PaaS.

2) A single pane of glass is more reasonable: While a single tool that delivers meaningful IaaS and PaaS is hard to find, a single pane of glass that allows you to manage both is a different proposition. Essentially, I advise users to look for IaaS and PaaS solutions which, when integrated, provide a single pane of glass view of some of the common management actions (deployments, usage and accounting, deprovisioning).  While you may need to individually interact with both the IaaS and PaaS solutions for some things, collapsing the most frequent management needs into a single pane of glass can be hugely beneficial, while still allowing each solution to focus on what it does best.

3) Consume and/or reuse: Ideally, you will be looking at a PaaS solution that can consume the output of the IaaS solution. In practice this is sometimes hard to do because there may be overlap in what the IaaS and PaaS systems do. A common example is that in most cases, both the IaaS and PaaS systems will provision an operating system (the PaaS goes further by laying down software and apps on top of the OS, but that is beside the point). If you are not in a position to easily consume the output of one system from another, then make sure you are reusing assets. Going back to the example of IaaS and PaaS solutions that both provision operating systems, I would suggest to a user in this situation that they have a centrally stored and managed workflow that configures the OS for use. These kinds of techniques significantly reduce management overhead.

I will end my short list here in the interest of brevity, but there are certainly more things you should be on the lookout for when pursuing both IaaS and PaaS solutions. Rather than me blab on though, I am interested in what you have to say! Let me know what you think and how you are approaching a converged IaaS and PaaS story.

Cloud adoption paths

June 22, 2011

I find that it is interesting and sometimes even helpful to sit down and reflect on past experiences. That’s true for life and it’s true for work. In my last post, I reflected on some of the common challenges I have seen in the rollout of enterprise cloud projects. In this post, I want to shift gears a bit and take one giant step back if you will. Let’s talk about the common patterns for how organizations are adopting cloud in the first place.

I like to keep things as simple as possible. Remember, just because something is simple does not make it any less true! I like to boil down my characterization of cloud adoption strategy into just two camps: strategic and tactical. You may hear the same kind of thought referred to as bottom-up and top-down approaches. However, I believe those terms usually have a technical connotation, and I do not want to pigeonhole cloud adoption strategy as only a technical discussion — it has far broader reaches than that.

The strategic approach to cloud adoption is usually pretty easy to spot. Organizations going down this road often start by forming a task force, formally or informally, to define what cloud means to the company. The output is usually some sort of cloud steering document or organizational cloud plan that the rest of the company can leverage to guide their cloud usage. This document or plan enables other teams by defining things like acceptable cloud delivery models (public, private, hybrid), preferred service providers, security requirements, characteristics of high value projects, and more. Regardless of what the steering asset contains, the key to identifying this approach is simple. When an organization takes the strategic adoption approach, cloud projects do not proceed until a well-defined, mutually agreed upon cloud blueprint is firmly in place.

In sharp contrast to the strategic, measured approach, is the tactical approach. When organizations take the tactical approach, you may often hear that they are dipping their toes in the sea of cloud. This approach means that an organization is tackling one or two big pain points (maybe application development and test) with cloud services. The idea behind the tactical approach is that successful implementation/adoption of cloud services for a handful of high visibility projects can initiate the necessary momentum for enterprise-wide adoption. When taking this approach to cloud adoption, companies tend to pay a lot of attention to gathering value metrics from their initial projects. These metrics become ‘evidence’ for the rest of the company, whether they are good or bad.

Like just about anything else, there is a little gray area here. Some organizations end up pursuing something of a blended approach. In parallel, they work on defining their cloud blueprint and validating it with a few select implementations. As you may surmise, there is no right or wrong way. I tend to observe that the more technically oriented a company is, the more likely they are to go the tactical route. In any case, so long as you are identifying if and where cloud can provide value to your organization, you are making meaningful progress.

So, why is all of this important anyway? Well, I think understanding the different approaches to cloud adoption can help to structure how we, as an industry, align ourselves to help with the movement.  These two approaches require two different types of help. Strategic adoption paths would really benefit from adoption roadmaps, case study data from other implementations, meaningful exploration of cloud delivery models, and more. Tactical approaches would benefit from technical accelerators in the form of services for integration, migration, translation, and more. I do not think you can argue the industry is where it should be in terms of being able to help with either of these adoption paths. It’s time to start listening and observing so we can change that!