Archive for the ‘PaaS’ Category

PaaS deployment models

May 16, 2012

Rapid deployment capability is table stakes when we are talking about a PaaS solution. Every vendor touts it, and to be frank, every user simply expects it to be there. While I think it is interesting to talk about rapid deployment and perhaps compare speed of one solution to that of another, I think it is infinitely more interesting to talk about the mechanics of deployment for a particular solution. That is, I think the more interesting and important question is ‘What deployment style does a particular solution take?’

At a very high, black and white level, I think two primary deployment styles permeate the landscape of PaaS today: contained and referential. I want to compare each approach, but before that, let me use a few words to describe each style.

Contained: In the contained deployment model, PaaS solutions deploy environments based on packages that contain most, if not all, of the desired configuration as well as the logic to apply that configuration. For instance, if a solution were to deploy a virtual image in the contained model, the virtual machine would have the necessary information and logic embedded to configure itself upon start up. It would not necessarily need to contact external systems or wait for instructions from other actors.

Referential: In the referential deployment model, PaaS solutions deploy environments using a minimal base package. At some point during the deployment process, the deployed environment communicates with a third party in some fashion to procure the necessary configuration information. Going back to the example above, if a virtual image were deployed in the referential model, the virtual machine would start up and then communicate with a third party service (either by initiating a request or waiting for instructions). This third party service would send down the configuration information and instructions for the environment hosted within the virtual machine.
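To make the contrast concrete, here is a rough, hypothetical sketch (in Python, with invented paths and an invented configuration service URL) of what the first-boot logic might look like under each style:

# Hypothetical first-boot logic illustrating the two deployment styles.
# Contained: the configuration data shipped inside the image itself.
# Referential: the image asks an external service (Chef/Puppet-like) for it.
import json
import urllib.request

EMBEDDED_CONFIG = {"http_port": 9080, "jvm_heap_mb": 2048}          # baked in at image-build time
CONFIG_SERVICE = "http://config.internal/api/environments/my-app"   # assumed endpoint

def apply_configuration(config):
    # Real tooling would install packages and start middleware; here we just
    # persist the settings to show where the data came from.
    with open("/tmp/server.json", "w") as f:
        json.dump(config, f, indent=2)

def boot_contained():
    """Contained: everything needed is already on the box."""
    apply_configuration(EMBEDDED_CONFIG)

def boot_referential():
    """Referential: pull the desired state from a third party at deploy time."""
    with urllib.request.urlopen(CONFIG_SERVICE) as resp:
        apply_configuration(json.load(resp))

# The image's init hook would call boot_contained() or boot_referential().

In the contained case the virtual machine never has to leave its own disk; in the referential case the deployment is only as complete as its conversation with the configuration service.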

When comparing the two approaches, it is helpful to understand the advantages and drawbacks of each. A closer look at the contained model reveals an obvious benefit: speed. In this model, the deployment package contains most of what it will require in order to fully configure itself. It does not rely on contacting an external service and having to pull down the necessary binaries and configuration information.

This advantage comes with an obvious drawback: management burden. By building more and more into the deployment package, you increase the amount of content that must be maintained and updated in said package. While that is not a huge concern if you only have a handful of discrete packages, you may not be able to rely on that luxury. You may find that, after some amount of time, the number of permutations to support forces you to spend an inordinate amount of time updating deployment packages. If that is the case, you can easily end up in a situation where the benefits of rapid deployment are negated by increased administrative costs.

The referential approach avoids the above pitfall. In this model, the deployment package remains fairly skeletal. Instead of packing in all of the content like in the contained model, the deployment packages in the referential model know just enough to integrate with an external system to get the right configuration information (think Chef and Puppet). This means that you only need to update and maintain configuration data and configuration actions in a single location instead of in each and every deployment package. As the number of different required environments increase, this approach can mean a significant reduction in management burden.

There is a flip side to this coin of course. The referential approach typically results in longer deployment times – dependent on the time required to install and configure content for your environments of course. Since the deployment packages contain very little content at deploy-time, they must pull or otherwise receive that data at some point during the deployment. This may or may not be a big issue for your particular use case, but it is a potential drawback worth considering.

So which approach is better? It is my opinion, one derived from numerous user experiences, that there is no way to generalize the answer to that question. In cases where content is infrequently updated and the number of environmental permutations is fairly well constrained, the contained deployment model can be extremely effective and efficient. On the other hand, in cases where content is dynamic and ever-changing, the referential deployment model is a virtual requirement. From a user’s standpoint, I strongly suggest pursuing solutions that support both kinds of deployment models. Tools should not dictate your approach. Your requirements for a particular usage scenario should!

Application services in the PaaS ecosystem

February 5, 2012

One of the first things I learned when I started talking with clients about their plans for platform-based clouds is that there is no such thing as a ‘simple application environment.’ Even for the most basic, CRUD-style applications, you can count on there being numerous different components involved in the overall solution. While one may immediately identify application servers, web servers, and databases, it quickly becomes clear that LDAP servers, identity management components, data grids, load balancers, and more are equally important. Indeed, the enterprise application ecosystem is not one where the word ‘simple’ is often tossed about.

The inherent complexity of enterprise application environments has had, and will continue to have, a significant impact on the design and implementation of many PaaS solutions. In fact, one could argue that the preeminent design point in many PaaS solutions on the market today is rooted in eliminating this complexity. To do this, PaaS solutions attempt to abstract all the way up to the application level. In other words, users supply application artifacts and possibly define application characteristics. From there, it is the job of the PaaS system to handle the underlying application infrastructure, completely absolving the deployer from having to get into the nitty-gritty of the application serving environment.

There is little arguing that this is a worthy goal, but the fact is that this level of abstraction is not an easy thing for PaaS providers to implement. PaaS solutions have to make a lot of assumptions about what components should be a part of the cloud-based environment as well as how those components should be configured. The bottom line is that providers can only do this for so many components, and with that limitation in mind, they often go after the most common pieces of an application environment. That means things like application servers, databases, messaging engines, and authentication systems are often covered directly by the PaaS solution. While those components are definitely a big part of the overall environment, they are far from being everything that is necessary.

As I mentioned earlier, an enterprise application depends on a number of things beyond the core infrastructure in order to function correctly. Many applications need to cache results, depend on identity management capabilities, require load balancing to scale properly, and integrate with ESBs, to name but a few things. PaaS solutions may not be able to address everything in terms of what they can deploy and configure, but that does not mean that they can blissfully ignore the large number of services on which applications typically depend. PaaS solutions must address the need to integrate the application environments they deploy with these application services in some way.

There are at least a couple of ways to do this while minimizing the burden on the PaaS provider. First, a PaaS solution can provide the concept of an application service and enable one to deploy an instance of that service. Of course, we are adding to the deployment and configuration responsibility of the PaaS provider, but this is somewhat mitigated by the fact that it is handled outside the context of any one application instance. The PaaS solution can simply deploy and minimally configure a service like an ESB, and then push further configuration duties back on the application environment that needs to access the service. It is the job of the PaaS solution to slice up the service such that multiple application environments can make use of it, but it is the job of each application to configure its slice of the service for use.
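As a purely illustrative sketch (all names are made up), the division of labor might look something like this: the platform allocates a slice of a shared messaging service, and the application then shapes that slice for its own use.

# Illustrative sketch of 'slicing' a shared application service.
# The PaaS layer owns the shared instance and hands each application
# environment an isolated slice; the application configures its own slice.
class SharedMessagingService:
    """Stands in for a platform-managed service such as an ESB or message broker."""

    def __init__(self, name):
        self.name = name
        self.slices = {}

    def allocate_slice(self, app_id):
        # Platform responsibility: create an isolated namespace per application.
        slice_config = {"namespace": f"{self.name}/{app_id}", "queues": []}
        self.slices[app_id] = slice_config
        return slice_config

def configure_my_slice(slice_config):
    # Application responsibility: shape the slice for its own needs.
    slice_config["queues"].append("orders.inbound")
    slice_config["queues"].append("orders.outbound")

esb = SharedMessagingService("enterprise-esb")
my_slice = esb.allocate_slice("ticketing-app")
configure_my_slice(my_slice)
print(my_slice)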

Alternatively, a PaaS solution can introduce the notion of an external application service. In this scenario, the PaaS system does not need to provide any capability in terms of deploying or configuring the application service. All it needs to do is provide a way for application environments to reference the external service. This service has already been installed and largely configured, and applications deployed by the PaaS solution just need to indicate a dependency on the service. It is a bonus if the PaaS solution can manage to slice up shares to the external service, but that too could be pushed off on a component or actor external to the PaaS platform. For the PaaS provider, this is clearly the easier of the two approaches to implement, but it does put more burden on the user to make sure those external services are up and accessible for applications deployed by the PaaS system.
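A minimal sketch of the external-service idea, again with invented names, might look like the following: the platform keeps only a registry of externally managed services and injects their coordinates into the application environment at deployment time.

# Illustrative sketch of referencing an external application service.
# The platform does not deploy the service; it records the reference and
# injects connection details into the application environment.
EXTERNAL_SERVICES = {
    # Registered by an administrator; assumed to be running outside the PaaS.
    "corporate-ldap": {"host": "ldap.example.com", "port": 636, "protocol": "ldaps"},
}

def bind_external_service(app_env, service_name):
    """Attach the external service's coordinates to the application environment."""
    service = EXTERNAL_SERVICES.get(service_name)
    if service is None:
        raise LookupError(f"No external service registered as '{service_name}'")
    app_env.setdefault("bindings", {})[service_name] = service
    return app_env

app_environment = {"name": "ticketing-app"}
bind_external_service(app_environment, "corporate-ldap")
print(app_environment["bindings"])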

So, which approach is better? While it is tempting to say the first approach is the best, that is not necessarily true. It is true that it is the simplest for the user who is deploying applications with service dependencies, but it might not always be the best option for the enterprise. In many cases, enterprises already have these application services up and running, and they want to leverage that investment from the environments created by their PaaS platform. In that scenario, the second option is probably the preferable approach. To put it simply, there is no way to generalize and pick one approach over the other. The way in which a PaaS solution should handle application service dependencies varies, and ideally, it should be able to handle either style. What is clear though is the fact that there must be some way for PaaS systems to provide or integrate with commonly used application services!

It is what’s on the inside that counts

January 16, 2012

“It is what is on the inside that counts.” How many times have we all been told that? Personally, I can’t even remember. However, the fact that it is perhaps overused and undoubtedly cliché does not make it any less true. In fact, I rank it right up there with the golden rule as mantras that one should try to live by. But this is ostensibly a cloud computing blog, so what am I rambling on about? Well, it turns out that it’s what is on the inside that counts in cloud computing as well!

To explain this, let’s take a look at the process for deploying applications on the cloud today by looking at an example. Consider that I have a basic Java web application that provides an enterprise resource scheduling service. In the normal course of operation, the web application connects to both message queues and databases to service requests. Additionally, a web server usually sits in front of this application and directs incoming requests from clients.

If I want to deploy an application like the above on a cloud, I need a good deal of knowledge about the application. First, I have to know that in order for the application to work I need far more than just an application container. I need a way to deploy or integrate with a web server, message queue, and database. I also need to know how each of the components should be integrated in order to support the needs of the application. Moreover, I need to know quite a bit about the non-functional requirements of the application (security, availability, scalability, etc.), so that I can configure each of the supporting components as appropriate.

While this may not sound like too much to expect, it can be challenging considering that the deployer of an application is unlikely to have insight into each of these needs. In fact, there is no one person in the organization who is likely to have all of these answers. Application deployments are typically the result of significant collaboration that can stretch project timelines beyond what anyone expected. But given the push towards autonomics and automation that cloud is a large part of driving, I have to ask: is such intimate knowledge of an application really required to deploy it?

My honest, yet cautious answer to this has to be no. I mean, why should I have to locate all of the required components for my application before being able to successfully deploy it? Ideally, if I am using an application-oriented cloud management system like I would expect to find in PaaS, it should be capable of application introspection to make many of these decisions for me. If you go back to my earlier example, the PaaS solution should be able to look into my web application and determine via application metadata, or even source code, that there is a dependency on both a database and a message queue. Additionally, if it notices that there is a service addressable via HTTP, I would expect to at least be prompted to see if I wanted to include a web server in the application deployment. Ultimately, the introspection of the application should result in a proposed deployment architecture for my application that includes both the necessary components and the required integration between those components.
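As a small, hypothetical example of what such introspection could look like, the snippet below scans a Java EE deployment descriptor for resource references and proposes the components the environment would need (the descriptor itself is made up):

# Rough sketch of dependency introspection over a Java EE web.xml.
import xml.etree.ElementTree as ET

WEB_XML = """
<web-app>
  <servlet-mapping><url-pattern>/schedule/*</url-pattern></servlet-mapping>
  <resource-ref><res-ref-name>jdbc/SchedulerDB</res-ref-name>
    <res-type>javax.sql.DataSource</res-type></resource-ref>
  <resource-ref><res-ref-name>jms/RequestQueueCF</res-ref-name>
    <res-type>javax.jms.ConnectionFactory</res-type></resource-ref>
</web-app>
"""

def propose_topology(web_xml):
    root = ET.fromstring(web_xml)
    components = ["application server"]            # always needed for a web app
    for ref in root.iter("resource-ref"):
        res_type = ref.findtext("res-type", "")
        if res_type == "javax.sql.DataSource":
            components.append("database")
        elif res_type.startswith("javax.jms."):
            components.append("message queue")
    if root.find("servlet-mapping") is not None:
        components.append("web server (suggested front end)")
    return components

print(propose_topology(WEB_XML))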

On top of all of this, I would expect that a number of non-functional configuration items could be determined via application introspection as well. If metadata in my web application indicates that I am using basic authentication to protect my enterprise scheduling service, the PaaS solution should be able to configure the application container environment appropriately. Even more interesting would be the ability to specify metadata in my web application that indicated scaling requirements and have the cloud management system automatically act on that!
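Continuing that sketch, non-functional settings could be derived the same way. The login-config element below is standard web.xml fare; the scaling element is an invented extension meant only to illustrate the idea of scaling metadata supplied alongside the application.

# Sketch: deriving non-functional configuration from application metadata.
# The <scaling> element is hypothetical; imagine it in a sidecar descriptor.
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<app-metadata>
  <login-config><auth-method>BASIC</auth-method></login-config>
  <scaling min-instances="2" max-instances="10" target-response-ms="200"/>
</app-metadata>
"""

def derive_platform_config(descriptor):
    root = ET.fromstring(descriptor)
    config = {}
    if root.findtext("login-config/auth-method") == "BASIC":
        config["container_security"] = "enable basic-auth realm"
    scaling = root.find("scaling")
    if scaling is not None:
        config["scaling_policy"] = {
            "min": int(scaling.get("min-instances")),
            "max": int(scaling.get("max-instances")),
            "target_response_ms": int(scaling.get("target-response-ms")),
        }
    return config

print(derive_platform_config(DESCRIPTOR))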

Is this really too much to expect? I do not believe so. Many talk about the very nature of PaaS being one rooted in application orientation. Well, I think you can only get so far down that road without putting forth solutions that possess an inherent awareness of the applications that are being deployed. In other words, it is what is on the inside of the application that counts and PaaS solutions need to know that!

PaaS and the application server

November 10, 2011

For the last 5+ years, I have been deeply focused on middleware application servers and the container services that they provide. If you go a bit further back and look at the arc of progression in this space, you will see multiple pivots of innovation for these servers. In the beginning, it was all about transactions, and then JEE was added, followed closely (and most recently) by a push around SOA. While none of these topics have gotten less important over time, they have become an assumed part of application server runtimes and architectures. That is, customers will not consider an application platform that does not at least support those technologies and paradigms.

Does that mean that the application server space is dying down in terms of innovation and change? Not at all in my opinion. Just like JEE was there to push transactions and SOA was there to extend the innovation around JEE, a host of other hotspots are appearing that are demanding inventive changes in the application server space. One of those is the notion that an application server in and of itself is not the unit of purchase and use any longer. Instead, users want entire platforms that not only provide a robust container for their applications, but also surround those applications with other capabilities and qualities of service. This is definitely an interesting movement, and one that is leading to advances in the state of the art concerning elastic caching, policy-based application management, plugin driven architectures, and more. As intriguing as this notion is, I actually think there is something that will soon push traditional application server runtimes even further: PaaS.

A cursory look at the PaaS industry will quickly reveal that we are at the forefront — I mean the very forefront — of a potentially thunderous shift in the enterprise application space. Initially there has been a large focus on making it fast and simple to provision and manage applications on top of traditional middleware stacks. Of course this cannot and will not be the focus forever. At some point, the gaze of the industry will shift, and we will start to fundamentally rethink what that middleware stack is, what services it delivers, and how it functions. What does this mean? Well, I am sure that many of you have your own ideas, but allow me to offer up a few of mine as food for thought:

Application servers and the kernel blend: There has been a lot of talk about the commoditization of the operating system, but I can tell you from first-hand experience that much of that has been premature. That said, as we move to more PaaS-like models, the OS and basic application server will blend together much more. Application servers will be extremely lightweight, and provide basic container support for your applications. Beyond that, I believe we will see a pull model based on need… which leads me to my next thought.

Application servers become composable: In the cloud we want speed. Well, with traditional application servers, you can only push the limits so far. Sure, you can pre-install, pre-configure, etc., but you still have to deal with the unique configuration for each instance. In some cases this takes a while simply because there are tons of features to configure. What’s more, some of these features are never actively used. In this light, I think PaaS will force application servers to take on more of a fit-for-purpose model. Features and components will be pulled in when they are needed and not a moment before. Further, those same features will be turned off when they are not actively consumed (see the toy sketch after this list). This has benefits beyond speed of course, namely a consumption footprint more appropriately aligned with use.

Application virtualization becomes a core competency: We can have esoteric arguments about whether cloud computing requires virtualization, but what is the point? Virtualization is elemental to nearly all cloud-based solutions (especially at the IaaS/PaaS layer). That said, most of the virtualization innovation has been focused on server virtualization and management. Going forward, those same concepts will be pushed up to the application. The capability to create virtualized pools of application instances, supporting extremely high densities and reducing the management burden of large environments, will become the norm. In fact, those capabilities will become a core part of the application container.
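As promised above, here is a toy sketch of the pull model behind a composable application server: features are registered as factories, activated only when an application first requires them, and released when nothing consumes them. It illustrates the concept only, not any particular product's architecture.

# Toy sketch of a 'fit-for-purpose' container with lazily activated features.
class ComposableServer:
    def __init__(self):
        self._factories = {}   # feature name -> constructor
        self._active = {}      # features actually started

    def register(self, name, factory):
        self._factories[name] = factory

    def require(self, name):
        """Pull a feature in on first use; reuse it afterwards."""
        if name not in self._active:
            print(f"activating feature: {name}")
            self._active[name] = self._factories[name]()
        return self._active[name]

    def release(self, name):
        """Turn a feature off when no application is consuming it."""
        self._active.pop(name, None)

server = ComposableServer()
server.register("jpa", lambda: {"feature": "persistence"})
server.register("jms", lambda: {"feature": "messaging"})
server.require("jpa")   # only 'jpa' is activated; 'jms' never starts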

These are but a few of the innovation areas I anticipate that PaaS will drive into the middleware application server space. You may think these are wildly off base, and you may have your own ideas about what PaaS is going to drive into this area. Whether you agree or vehemently disagree, I would like to hear what you have to say!

It’s about services… right?

October 20, 2011

A lot of the initial focus in cloud has been on environment deployment. Specifically, it seems like we spend quite a bit of time talking about how cloud building blocks such as virtualization, automation, and standardization will lead to the kind of nimbleness and efficiency previously very hard to attain. While these are all valid and very important points, I think it’s time we push the conversation forward concerning what elements of cloud users should expect to derive value from — especially in the PaaS arena.

I think we can all agree that users leveraging PaaS solutions will benefit from the standard qualities we have associated with cloud for the past few years. In other words, there is certainly nothing about PaaS that jeopardizes rapid deployments, better asset utilization, improved consistency in deployed environments, etc. It is important to recognize though, that the value of PaaS goes quite beyond these table stakes benefits. PaaS has the potential to be transformative in nature, and much of that potential comes down to application services.

Even though application services are a core part of the value proposition of PaaS, they are not talked about much. So what are they? Well, PaaS implies that the platform provides some set of embedded services to your applications. Applications may exploit those services programmatically, or those services may be a crucial element in ensuring the quality of service requirements expressed by an application.  As an example, a PaaS solution may provide an embedded cache service that your application can use to store important contextual state. Another example would be a service that scaled your application in and out to meet needs. In any case, I contend that the application services provided by a given PaaS solution ultimately determine its long-term viability and potential value to users.

Given my belief in the crucial role of application services in a PaaS solution, I also have some thoughts on important considerations in the delivery of such services. For those of you considering PaaS solutions, I would suggest you examine the set of application services delivered by each platform and take into account at least some of these considerations:

– Service transparency: The configuration and management of application services provided by the PaaS solution should be completely transparent to the consumer of the service. For example, if I am deploying an application that intends to make use of a platform-provided cache service, I should not have to do anything other than indicate that my application relies on the caching service. The platform should handle configuring the cache service and managing it over time to meet the needs of my application. This is a key facet of moving to a more declarative environment creation and deployment model. After all, it’s not much of a service if consumers have to define and manage it! (A brief sketch after this list illustrates the idea.)

– Service sharing: As much as possible, the PaaS solution should have the means to share services among multiple applications without sacrificing quality or security. If a platform runs a scaling service that is capable of managing and initiating scale in and scale out events for applications, it would probably be a waste of resources to deploy an instance of that service for every application deployment. Instead, a single logical instance of the scaling service should be able to accommodate a reasonably sized set of application deployments. After all, cloud is about using only what you need when you need it.

– Service governance: While the set of services provided by a PaaS solution should be as transparent as possible to the consumers (application owners), that does not mean no one needs a lens through which to watch these services. Authorized users should be able to manage and monitor service resource consumption, manage service exposure, and meter service consumption, amongst other things. Another important consideration in this regard is the ability of the PaaS solution to attribute service usage to different users or groups of users. Without this, you will have a significant hole in your cost allocation or chargeback model.
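Here is the brief sketch referenced above. It is illustrative only, but it ties the three considerations together: the application merely declares its dependency (transparency), one platform-managed cache serves many applications (sharing), and usage is attributed per consumer (governance).

# Illustrative sketch: declarative consumption, sharing, and metering.
from collections import defaultdict

class CacheService:
    """A single platform-managed cache shared by many applications."""
    def __init__(self):
        self._data = defaultdict(dict)    # app name -> its partition
        self.usage = defaultdict(int)     # app name -> metered operations

    def put(self, app, key, value):
        self._data[app][key] = value
        self.usage[app] += 1              # governance: attribute consumption

class Platform:
    def __init__(self):
        self.cache = CacheService()

    def deploy(self, app_name, manifest):
        # Transparency: the app only declares the dependency; the platform
        # wires up the shared service without further input from the deployer.
        bindings = {}
        if "cache" in manifest.get("services", []):
            bindings["cache"] = self.cache
        return bindings

platform = Platform()
bindings = platform.deploy("ticketing-app", {"services": ["cache"]})
bindings["cache"].put("ticketing-app", "session:42", {"user": "alice"})
print(dict(platform.cache.usage))         # {'ticketing-app': 1}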

While these certainly do not represent everything you should look for in PaaS application services, I do believe they are three of the more important qualities. We are definitely early on in the PaaS delivery and adoption movement, but I believe we can already see the importance of application services for these solutions. Many platforms are already offering a basic set of services, and over time I would expect a normalization of core application services to appear among the varying PaaS solutions. I will be watching, and I am excited to see and hopefully be a part of the coming innovation!

The value of workload-aware management

September 23, 2011

A couple of weeks ago, I dropped by the Intel Developer Forum to present a session and listen in on a few others. As always in these types of shows, I learned quite a bit. Most strikingly though, I was reminded of something that is probably quite obvious to many of you: Consumer interest in cloud computing will not be letting up any time soon.

Based on this, and some of the other things I heard at the show, I decided to catch up with fellow IBMer Marc Haberkorn. Marc is an IBM Product Manager and is responsible for IBM Workload Deployer amongst other things. I asked him about IBM Workload Deployer, the competition, and cloud in general. Check out what Marc had to say below:

Me: IBM Workload Deployer is one among many of a growing wave of cloud management solutions. How do you differentiate the focus and business value of it versus the myriad of other solutions out there?

Marc: To sum it up, we offer a combination of depth and breadth.  IWD delivers both workload aware management and general purpose management.  Workload aware management differentiates IWD from its competition, as it can deliver more value for the set of products for which it has context.  There is a set of actions that workload aware management tools can do that is normally left to the user by general purpose management tools.  This list includes configuring a middleware server to know its hostname/IP address, configuring multiple middleware servers to know of one another, arranging clusters, applying maintenance, and handling elasticity.  By handling more of these activities in the automated flow, there are fewer chances for manual errors and inconsistencies to enter a managed environment.

That said, without infinite resource or time, it’s impossible to deliver this context-aware management for everything under the sun.  As such, in order to allow IWD to deliver differentiated value AND allow it to handle a customer’s entire environment, we offer a mix of workload-aware management and general purpose management.

Me: VMware is a good example of a company active in the cloud space, and they seem to keep a consistent pace of new product delivery. What do you think of their product development focus?

Marc: I think VMware has built a very compelling set of capability in the virtualization space.  I think the main difference between VMware’s suite and IBM Workload Deployer is the perspective from which the environments are managed.  VMware puts the administrator in the position of thinking about infrastructure from the ground up.  The administrator is thinking about virtual images, hypervisors, and scripts.  In IBM Workload Deployer, we think about things from the perspective of the app, because that’s ultimately what the business cares about.  By providing a declarative model through which an application can be instantiated and managed, we feel we deliver a deeper value proposition to clients, through workload-aware management.

Me: The ‘one tool to do it all’ approach is a popular, if not hard to achieve goal. What is your advice to users when it comes to choosing between breadth and depth for cloud management solutions?

Marc: The advantages of a “one tool to do it all” are many: less integration, more uniformity, less complexity.  As such, customers will always prefer a single tool when possible.  This is why IBM Workload Deployer has focused on not only providing differentiated, deeper value for common use cases but also providing a way to handle the “everything else.”  As such, my advice to users is not to choose between breadth and depth – use IBM Workload Deployer which offers both.

Me: To close, I’m curious to know where you think we are heading in the cloud market. What do you think users will be most readily adopting over the next one to two years? Where does the cloud industry need the most innovation?

Marc: I think most users are currently looking at the broad picture of cloud computing, and have been adopting primarily in the private cloud realm.  There are several reasons for this.  One reason is that many customers have a large set of hardware resources which amount to sunk cost that needs to be leveraged.  Another reason is around data security concerns in off-premises clouds, and still another reason is around the human factor of comfort, which has taken time to develop around off-premise cloud models.  However, businesses have become increasingly comfortable with various sources of outsourcing in recent years, especially in mission critical areas involving very sensitive data.  Just look at IBM’s Strategic Outsourcing business, which handles entire IT operations for many large businesses.  I think that trend will (and really, has already begun to) continue in the area of cloud computing, and will lead to more public and ultimately hybrid cloud computing adoption.  In order to get to hybrid cloud computing, I see much of the focus and innovation being associated with data security, workload portability (across private and public, in a seamless fashion), and license transferability between private and public.  When this space reaches fruition, clients will be able to enjoy true elastic economics in a computing model that allows a mixture of owning and renting compute resources and software licenses.

Me: Thanks Marc!

What is an application?

September 8, 2011

What is an application? Well, like many things, the answer varies widely based on who you ask. Consider the case of an online ticket booking application. If you ask the users of the application what the application is, then it’s a web page or a series of web pages. If you ask the application development team, they will say it’s a collection of source code, configuration files, static content, etc. The infrastructure team will tell you the application is the application server, web server, and operating system. If you go further and ask the hardware folks what they think, they will likely say the application is the virtualization platform running on their server. As you see, a question as seemingly innocuous as ‘What is an application?’ is actually anything but.

Now, you may say, ‘Who cares?!? Is it important to agree on what an application is?’ In general, I would say no. In my mind, it is not all that important that different facets of an organization agree on what they consider an application to be. As long as they can communicate their viewpoint to others, everything should proceed reasonably smoothly. However, in the world of PaaS, I believe that an agreement on what the term application means is of paramount importance.

I have discussed the many different approaches PaaS providers can take concerning the delivery of their product. I think the particular approach one chooses is a critical choice, but a look at the bigger picture reveals a larger set of challenges. The move to a hybrid cloud approach is a shift that seems inevitable. This shift to hybrid clouds will encounter many issues, and the shift to hybrid PaaS environments will not be exempt. From a technical standpoint, one of the key obstacles that hybrid cloud computing needs to confront is the need for meaningful commonality of management. We all know that the key unit of management depends on the type of cloud (IaaS vs. PaaS vs. SaaS), and in the context of PaaS the application is the focal point.

This brings us back to my original point of defining an application. How can a hybrid cloud platform for PaaS provide meaningful commonality of management if there is no general consensus on what constitutes an application? The answer is simple: it can’t.

Let’s consider a simple example to put some concreteness around this. I have a hybrid PaaS platform, and I log into the management console to manage an application that is potentially running in multiple cloud environments. I decide that I want to attach a policy to that application that describes scaling behavior – perhaps based on response time. There are two things that can happen at that point. Either the hybrid cloud platform takes this directive from me and translates it for each and every cloud provider, or the hybrid cloud platform sends this information in a manner every provider can interpret and enforce.

I think, or hope, that we would all agree the latter option is more tenable. If a hybrid PaaS tool can only give meaningful management if it knows lots of details about each underlying cloud platform, well, we won’t have many hybrid PaaS tools. That said, there is no way for the latter option to occur in the absence of some sort of standard. That includes a standard (I don’t mean standard in the literal IT sense necessarily) of what an application entails. After all, how could you enforce a scaling policy on an application if you cannot determine what an application is?
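As a hypothetical illustration of that argument, imagine the hybrid platform publishing one neutral policy document and each provider interpreting it through its own adapter, rather than the platform hand-translating the directive for every provider:

# Sketch of a provider-neutral scaling policy interpreted by each provider.
NEUTRAL_POLICY = {
    "application": "ticket-booking",
    "policy": "scale-on-response-time",
    "target_response_ms": 300,
    "min_instances": 2,
    "max_instances": 12,
}

class ProviderAdapter:
    """Each cloud provider supplies its own interpretation of the document."""
    def __init__(self, name):
        self.name = name

    def enforce(self, policy):
        # Provider-specific enforcement lives behind a shared contract.
        print(f"{self.name}: scaling '{policy['application']}' between "
              f"{policy['min_instances']} and {policy['max_instances']} instances "
              f"to hold {policy['target_response_ms']} ms")

for provider in (ProviderAdapter("on-prem PaaS"), ProviderAdapter("public PaaS")):
    provider.enforce(NEUTRAL_POLICY)

Of course, this only works if everyone agrees on what the 'application' named in that document actually is, which is exactly the point.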

Considering this, what would you say an application really is? Is it a grouping of HTTP URIs? This seems to be the easiest way to describe an application but would be lacking in many scenarios (EJB applications to name but one). I wager that if you really sit down and think about a common way to describe an application you will end up debating with yourself! I think the issue of defining an application is critical toward the ongoing success of PaaS, and I also think it serves as a reality check for hybrid clouds in general. While there is definitely a larger shift toward the hybrid approach, we clearly have a long way to go.

Revisiting PaaS delivery models

August 23, 2011

Quite some time back, I explored the idea of different delivery models for PaaS solutions. To sum it up concisely, I believe that PaaS providers have to make a decision when they deliver their solution. They have to decide the degree to which their solution supplies inherent knowledge of a set of platforms versus the flexibility that solution delivers. With that in mind, I proposed that we can generally characterize PaaS solutions in the following way:

1) Platform and service depth with little breadth

2) Platform and service breadth with little depth

3) Platform and service depth with breadth

When examining option one or option two, many consumers will feel that they are making a compromise, and not without reason. Option one delivers a PaaS solution that provides significant inherent knowledge about a particular platform or group of platforms at the expense of being well-rounded. This means that users can set up some cloud platforms very easily and with little effort, but they cannot address all of the platforms necessary in their organization. Option two is just the opposite. It does not supply much platform-specific knowledge at all, but rather it is flexible enough to address a large swathe of runtimes. Of course, this means users end up being the supplier of the platform-specific knowledge, so adoption and implementation typically take significantly longer.

Given all of this, in my original post I was of the opinion that option three is the most appealing. I still believe that today. PaaS solutions are plenty disruptive on the cultural front, so minimizing technical hurdles can go a long way towards spurring on adoption. If users can get a PaaS solution that lets them address a significant subset of their organization’s platforms with minimal technical work, coupled with the ability to address other platforms with some customization work, I think they will naturally gravitate that way.

Unfortunately, the reality is that delivering such a solution is not an easy thing to do. From a provider standpoint, it is easy to fall into a trap whereby you unwittingly build a solution rooted in the context of just a select few platforms. This could surface itself in management interfaces, deployment models, and user interface terminology just to name a few places. Many times this happens when you build the PaaS solution to address a specific subset of platforms, and then work backwards to make it an open system. This just does not work. As a provider, you have to build the open platform, and then provide the content necessary that customizes the system for the subset of platforms you choose.

This is where I really think IBM Workload Deployer gets it right. From day one, the subsystem that deals with virtual application patterns has been based on an open architecture. Every virtual application pattern is a collection of one or more plugins that provide the knowledge of how to install, deploy, and manage application environments in a cloud. IBM provides virtual application patterns (built on provided plugins) right out of the box for a selected set of runtime environments. Users can create virtual application patterns for other platforms they want to deploy and manage using IBM Workload Deployer by contributing the foundational elements — plugins.
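To illustrate the concept only (this is not the actual IBM Workload Deployer plugin interface), a pattern built from plugins might look something like the following, where each plugin carries the knowledge needed to install and configure one piece of the environment:

# Generic illustration of a pattern composed of plugins. Not the real
# IBM Workload Deployer plugin API; just the shape of the idea.
class Plugin:
    def install(self, node):
        raise NotImplementedError

    def configure(self, node):
        raise NotImplementedError

    def manage(self, node):
        pass   # e.g., scaling or health actions after deployment

class AppServerPlugin(Plugin):
    def install(self, node):
        node["components"].append("application server")

    def configure(self, node):
        node["config"]["http_port"] = 9080

class DatabasePlugin(Plugin):
    def install(self, node):
        node["components"].append("database")

    def configure(self, node):
        node["config"]["db_port"] = 50000

class VirtualApplicationPattern:
    def __init__(self, plugins):
        self.plugins = plugins

    def deploy(self):
        node = {"components": [], "config": {}}
        for plugin in self.plugins:
            plugin.install(node)
            plugin.configure(node)
        return node

pattern = VirtualApplicationPattern([AppServerPlugin(), DatabasePlugin()])
print(pattern.deploy())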

The ability to create and load custom plugins into IBM Workload Deployer has been a feature of the solution since its announcement earlier in the year. So, why am I just bringing this up? Well, last week IBM published the IBM Workload Plugin Development Kit that provides tooling and documentation to guide users through the plugin creation process. The tools and guidance included in this kit provide the framework within which users will be able to create any kind of plugin to support any type of virtual application pattern they want to create. This kind of tooling makes it feasible to have a PaaS solution that provides both depth (via the patterns and plugins provided by IBM) and breadth (via the patterns and plugins provided by users).

My advice to you is that regardless of the solution(s) you are looking at, challenge your cloud providers in terms of the value they deliver and the flexibility they enable. Look for those that give you out-of-the-box value without handcuffing you to a particular set of platforms. Time to value does not have to come at the complete expense of flexibility!

An eye on the competition

August 9, 2011

When it comes to IBM Workload Deployer, I have no illusions regarding the tenacity of our competitors. They are out there, and they are constantly on the attack. Their dubious claims aside, I know this because I still get asked quite frequently to explain the benefits of IBM Workload Deployer versus some other general purpose cloud provisioning and management solution. So, while I have done that many times in various forums, I figured it was time to yet again address this question.

When comparing IBM Workload Deployer to the other available solutions, I honestly feel comfortable saying we have no direct competition. I know you believe me to be biased, and rightly so, but let me explain why I think the competition is much more perception than reality. To do this, I want to focus on the patterns-based approach that IBM Workload Deployer takes to cloud provisioning and management.

Let’s start with virtual system patterns in IBM Workload Deployer. Virtual system patterns allow you to build and deploy completely configured and integrated middleware environments as a single unit. These patterns build on top of our special IBM Hypervisor Edition images that bottle up the installation and quite a bit of the configuration of the underlying middleware products. Further, when using virtual system patterns, IBM Workload Deployer manages and automates the orchestration of the integration tasks that need to happen to set up a meaningful middleware environment. For instance, when deploying WebSphere Application Server you do not need to do anything on your end to deploy a clustered, highly available environment. When deploying WebSphere Process Server in this manner, you do not need to take any administrative actions to produce a golden topology. You just deploy the pattern, and the images, patterns, and appliance take care of the rest. Of course, you can add your own customizations and tweaks in the pattern, but we take care of the common administrative actions that would otherwise require your care.

I am not sure of a better way to say it, so I will be blunt: When deploying products delivered in IBM Hypervisor Edition form, no other solution compares to the virtual system pattern capability offered by IBM Workload Deployer. It is not even close. Can you provision products like WebSphere Application Server or WebSphere Portal using other cloud provisioning tools? Sure, but you should be aware that you will be writing and maintaining your own installation, configuration, and integration scripts. It is also likely that you will end up developing a custom interface through which deployers request your services (something not necessary when using the rich IBM Workload Deployer UI). All of this takes time, resources, and money. More importantly, this is not differentiating work, and it distracts from the real end goal: serving up applications. IBM Workload Deployer can deliver this operational capability right out of the box, and it can do so in a way that costs less than custom developed and maintained solutions.

When considering IBM Workload Deployer versus the competition, it is also important to consider the new virtual application pattern capability delivered in version 3.0. The virtual application pattern capability is a testament to IBM’s thought leadership in, and commitment to, cloud computing for middleware application environments. Virtual application patterns take a bold step forward in raising the level of abstraction beyond the middleware environment and up to the most important resource in enterprise environments: the application. With a virtual application pattern, you simply provide your application and specify both functional and non-functional requirements for that application. When ready, you deploy that pattern, and IBM Workload Deployer sets up the necessary middleware infrastructure and deploys the provided application. Moreover, the appliance will monitor and autonomically manage the environment (i.e. scale it up and down) based on the policies you specify. Quite simply, this is a deployment and management capability our competition cannot match.

There is more to consider than just patterns though. The appliance makes it really simple to apply maintenance and upgrades to environments running in your cloud. It can autonomically manage your deployed environments (through policies in virtual application patterns and the Intelligent Management Pack for virtual system patterns), and it effectively abstracts the underlying infrastructure of your cloud environment. This abstraction is the reason IBM Workload Deployer can deploy your environments to PowerVM, zVM, and VMware environments. It also makes it easy to deploy the same environment to multiple different underlying platforms, thus accommodating typical platform changes that happen as an application moves from development to production. The best part of all is that the deployer’s experience is the same regardless of the underlying infrastructure since the appliance hides any platform idiosyncrasies.

The bottom line is that the appliance is purpose built to deploy and manage middleware and middleware application environments in a cloud, and as such, delivers immense out-of-the-box and ongoing value in this context. I should also point out that the design of the appliance acknowledges its purposeful nature. The CLI and REST API interfaces allow you to integrate the appliance into the operations of those general purpose provisioning solutions. In this way, IBM Workload Deployer acts as a middleware accelerator for your cloud computing efforts. This means that if you do have a general purpose solution, IBM Workload Deployer can still provide considerable value and let you avoid developing a considerable subsystem dedicated to deployment and management of middleware in the cloud. We believe in this type of integration, and have in fact built it into our own IBM solutions.
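As a hedged sketch of that kind of integration, a general purpose provisioning tool could drive a deployment over REST along the lines below. The host, endpoint, and payload are invented for illustration; the product documentation defines the real API.

# Hypothetical REST call a general purpose orchestrator might make to
# request a pattern deployment. Endpoint and payload are invented.
import json
import urllib.request

APPLIANCE = "https://deployer.example.com"               # hypothetical host
DEPLOY_ENDPOINT = f"{APPLIANCE}/resources/deployments"   # hypothetical path

def request_deployment(pattern_name, environment_profile):
    payload = json.dumps({"pattern": pattern_name,
                          "profile": environment_profile}).encode()
    req = urllib.request.Request(
        DEPLOY_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A larger provisioning workflow would call this as one of its steps:
# request_deployment("WebSphere cluster", "production")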

There is certainly more to IBM Workload Deployer and its differentiating value, but I think the above is a good start. When it comes down to creating clouds focused on middleware platforms and middleware applications, nothing stacks up to IBM Workload Deployer.

The convergence of IaaS and PaaS

July 6, 2011

I would venture a guess that many cloud service providers are happy with cloud conversations going on in enterprises today. I say this because, at least in my experience, enterprises are truly starting to seek out and embrace the idea of PaaS. Many times these enterprises have adopted or are adopting an IaaS approach, and they are looking to push the cloud up the stack. They want to address their application platforms and applications. This is refreshing and exciting, but also extremely challenging. Why, you ask? It is challenging because this is leading to a convergence of IaaS and PaaS in the enterprise that will test both providers and consumers on cultural, procedural, and technical fronts.

Empirical data, and common sense, seem to suggest that many enterprises start their cloud journey by evaluating and possibly adopting IaaS solutions. The primary units of interest in this phase are servers, storage, network components, operating systems, and other parts of the base IT infrastructure. Fittingly, the target audience is usually the various infrastructure teams in the enterprise. If you get in a room with these teams and ask them what their cloud service or application is, they will likely tell you that it is a provisioned operating system.

This is a markedly different view from the one held by the target audience of PaaS discussions. Middleware and application teams look at the cloud in terms of provisioning applications and application platforms. There is an implicit assumption that the base resources will be there. After all, that’s no different than the assumption they make in the non-cloud world. If you get these teams in a room and ask them what their cloud service or application is, they will tell you it is the application platform and application that runs on that platform.

These are completely different points of view on the benefits and expectations of cloud. Infrastructure teams look at IaaS and see that it solves many of their problems. Middleware teams look at IaaS and see the benefits of getting a server really fast, but also realize they still have to do a lot of work on top of that server, so they turn to PaaS. Quickly, enterprises become aware that they need both and start to explore how to converge their IaaS and PaaS work.

There is no way to sugarcoat this: Adopting a converged/integrated approach to IaaS and PaaS will not be easy. As I said in the beginning there are numerous different types of challenges you will encounter. Having said that, it is far from impossible, and I have worked with numerous users that are taking an integrated approach. While there is no silver bullet, I would like to share some observations for those of you who may be pursuing a cohesive IaaS/PaaS strategy:

1) Be wary of the single tool myth: You may hear from different providers that they have a single tool that can deliver both IaaS and PaaS. I am not deeply knowledgeable of every tool out there, but I would caution you to be very skeptical of any such claim. While the tool may be able to do both, you should carefully judge the level of effort required to achieve this. It is likely when you hear this that you are getting a tool primarily oriented towards IaaS, and you achieve PaaS through heavy doses of custom scripting. Further, the expectations for user experience when interacting with IaaS and PaaS tools are significantly different. Avoid the temptation of having a single tool if it is not going to capably address both IaaS and PaaS.

2) A single pane of glass is more reasonable: While a single tool that delivers meaningful IaaS and PaaS is hard to find, a single pane of glass that allows you to manage both is a different proposition. Essentially, I advise users to look for IaaS and PaaS solutions which, when integrated, provide a single pane of glass view of some of the common management actions (deployments, usage and accounting, deprovisioning).  While you may need to individually interact with both the IaaS and PaaS solutions for some things, collapsing the most frequent management needs into a single pane of glass can be hugely beneficial, while still allowing each solution to focus on what it does best.

3) Consume and/or reuse: Ideally, you will be looking at a PaaS solution that can consume the output of the IaaS solution. In practice this is sometimes hard to do because there may be overlap in what the IaaS and PaaS systems do. A common example is that in most cases, both the IaaS and PaaS systems will provision an operating system (the PaaS goes further by laying down software and apps on top of the OS, but that is beside the point). If you are not in a position to easily consume the output of one system from another, then make sure you are reusing assets. Going back to the example of IaaS and PaaS solutions that both provision operating systems, I would suggest to a user in this situation that they have a centrally stored and managed workflow that configures the OS for use (see the short sketch after this list). These kinds of techniques significantly reduce management overhead.
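Here is the short sketch mentioned above of the reuse advice: both the IaaS and PaaS provisioning paths call the same centrally maintained OS preparation workflow instead of each keeping its own copy (all names are illustrative).

# Sketch: one centrally stored OS configuration workflow reused by both paths.
def prepare_operating_system(hostname):
    """Single, centrally maintained workflow for readying a provisioned OS."""
    steps = ["apply security baseline", "register with monitoring", "join patch schedule"]
    return {"host": hostname, "completed": steps}

def iaas_provision(hostname):
    # IaaS path: stop once the operating system is ready.
    return prepare_operating_system(hostname)

def paas_provision(hostname, middleware):
    # PaaS path: reuse the same OS workflow, then layer software on top.
    env = prepare_operating_system(hostname)
    env["middleware"] = middleware
    return env

print(iaas_provision("vm-101"))
print(paas_provision("vm-102", ["application server", "database"]))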

I will end my short list here in the interest of brevity, but there are certainly more things you should be on the lookout for when pursuing both IaaS and PaaS solutions. Rather than me blab on though, I am interested in what you have to say! Let me know what you think and how you are approaching a converged IaaS and PaaS story.