Archive for the ‘cloud’ Category

Application services in the PaaS ecosystem

February 5, 2012

One of the first things I learned when I started talking with clients about their plans for platform-based clouds is that there is no such thing as a ‘simple application environment.’ Even for the most basic, CRUD-style applications, you can count on there being numerous different components involved in the overall solution. While one may immediately identify application servers, web servers, and databases, it quickly becomes clear that LDAP servers, identity management components, data grids, load balancers, and more are equally important. Indeed, the enterprise application ecosystem is not one where the word ‘simple’ is often tossed about.

The inherent complexity of enterprise application environments has had, and will continue to have, a significant impact on the design and implementation of many PaaS solutions. In fact, one could argue that the preeminent design point in many PaaS solutions on the market today is rooted in eliminating this complexity. To do this, PaaS solutions attempt to abstract all the way up to the application level. In other words, users supply application artifacts and possibly define application characteristics. From there, it is the job of the PaaS system to handle the underlying application infrastructure, completely absolving the deployer from having to get into the nitty-gritty of the application serving environment.

There is little arguing that this is a notable goal, but the fact is this level of abstraction is not an easy thing for PaaS providers to implement. PaaS solutions have to make a lot of assumptions about what components should be a part of the cloud-based environment as well as how those components should be configured. The bottom line is providers can only do this for so many components, and with that limitation in mind, they often go after the most common pieces of an application environment. That means things like application servers, databases, messaging engines, and authentication systems are often covered directly by the PaaS solution. While those components are definitely a big part of the overall environment, they are far from being everything that is necessary.

As I mentioned earlier, an enterprise application depends on a number of things beyond the core infrastructure in order to function correctly. Many applications need to cache results, depend on identity management capabilities, require load balancing to scale properly, and integrate with ESBs, to name but a few things. PaaS solutions may not be able to address everything in terms of what they can deploy and configure, but that does not mean that they can blissfully ignore the large number of services on which applications typically depend. PaaS solutions must address the need to integrate the application environments they deploy with these application services in some way.

There are at least a couple of ways to do this while minimizing the burden on the PaaS provider. First, a PaaS solution can provide the concept of an application service and enable one to deploy an instance of that service. Of course, we are adding to the deployment and configuration responsibility of the PaaS provider, but this is somewhat mitigated by the fact that it is handled outside the context of any one application instance. The PaaS solution can simply deploy and minimally configure a service like an ESB, and then push further configuration duties back on the application environment that needs to access the service. It is the job of the PaaS solution to slice up the service such that multiple application environments can make use of it, but it is the job of each application to configure its slice of the service for use.
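The slicing idea above can be sketched in a few lines of Python. The class names, the slice shape, and the configuration keys are purely illustrative, not any real PaaS API; the point is only the division of labor: the platform partitions the shared service, while each application configures its own slice.

```python
# Hypothetical sketch: a PaaS carves one shared service instance into
# per-application "slices" (names and structures are illustrative).

class SharedService:
    """One deployed service (e.g., an ESB) shared across app environments."""
    def __init__(self, name):
        self.name = name
        self.slices = {}

    def allocate_slice(self, app_id):
        # The PaaS only partitions the service; it does not configure
        # the slice for any application's specific use.
        namespace = f"{self.name}/{app_id}"
        self.slices[app_id] = {"namespace": namespace, "config": {}}
        return self.slices[app_id]


class AppEnvironment:
    def __init__(self, app_id):
        self.app_id = app_id

    def configure_slice(self, service_slice, **settings):
        # Per-application configuration is the application's job.
        service_slice["config"].update(settings)


esb = SharedService("enterprise-esb")
app = AppEnvironment("order-app")
s = esb.allocate_slice(app.app_id)
app.configure_slice(s, routing_rules=["orders->billing"])
print(s["namespace"])  # enterprise-esb/order-app
```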

Alternatively, a PaaS solution can introduce the notion of an external application service. In this scenario, the PaaS system does not need to provide any capability in terms of deploying or configuring the application service. All it needs to do is provide a way for application environments to reference the external service. This service has already been installed and largely configured, and applications deployed by the PaaS solution just need to indicate a dependency on the service. It is a bonus if the PaaS solution can manage to slice up shares to the external service, but that too could be pushed off on a component or actor external to the PaaS platform. To the PaaS provider, this is clearly the easier of the two approaches to implement, but it does put more burden on the user to make sure those external services are up and accessible for applications deployed by the PaaS system.
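A minimal sketch of this external-service style, assuming a hypothetical registry that an administrator populates outside the platform. The manifest shape and registry are invented for illustration; all the PaaS does here is resolve a declared reference to an endpoint it never provisioned.

```python
# Hypothetical sketch of the "external application service" style: the
# PaaS deploys nothing for the service itself; it only resolves a declared
# reference to an endpoint installed and configured outside the platform.

EXTERNAL_SERVICES = {
    # Registered by an administrator, not provisioned by the PaaS.
    "corporate-ldap": "ldaps://ldap.example.com:636",
}

def resolve_references(app_manifest):
    """Bind each declared external-service dependency to its endpoint."""
    bindings = {}
    for ref in app_manifest.get("external_services", []):
        endpoint = EXTERNAL_SERVICES.get(ref)
        if endpoint is None:
            # The burden of keeping the service up and registered
            # falls on the user, not the platform.
            raise LookupError(f"external service '{ref}' is not registered")
        bindings[ref] = endpoint
    return bindings

manifest = {"name": "scheduler", "external_services": ["corporate-ldap"]}
print(resolve_references(manifest))
```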

So, which approach is better? While it is tempting to say the first approach is the best, that is not necessarily true. It is true that it is the simplest for the user who is deploying applications with service dependencies, but it might not always be the best option for the enterprise. In many cases, enterprises have these application services up and running and they want to leverage that investment from the environments created by their PaaS platform. In that scenario, the second option is probably the preferable approach. To put it simply, there is no way to generalize and pick one approach over the other. The way in which a PaaS solution should handle application service dependencies varies, and ideally, it should be able to handle either style. What is clear though is the fact that there must be some way for PaaS systems to provide or integrate with commonly used application services!

It is what’s on the inside that counts

January 16, 2012

“It is what is on the inside that counts.” How many times have we all been told that? Personally, I can’t even remember. However, just because it is perhaps overused and undoubtedly cliché, does not make it any less true. In fact, I rank it right up there with the golden rule as mantras that one should try to live by. But this is ostensibly a cloud computing blog, so what am I rambling on about? Well, it turns out that it’s what is on the inside that counts in cloud computing as well!

To explain this, let’s take a look at the process for deploying applications on the cloud today by looking at an example. Consider that I have a basic Java web application that provides an enterprise resource scheduling service. In the normal course of operation, the web application connects to both message queues and databases to service requests. Additionally, a web server usually sits in front of this application and directs incoming requests from clients.

If I want to deploy an application like the above on a cloud, I need a good deal of knowledge about the application. First, I have to know that in order for the application to work I need far more than just an application container. I need a way to deploy or integrate with a web server, message queue, and database. I also need to know how each of the components should be integrated in order to support the needs of the application. Moreover, I need to know quite a bit about the non-functional requirements of the application (security, availability, scalability, etc.), so that I can configure each of the supporting components as appropriate.

While this may not sound like too much to expect, it can be challenging considering that the deployer of an application is unlikely to have insight into each of these needs. In fact, there is no one person in the organization who is likely to have all of these answers. Application deployments are typically the result of significant collaboration that can stretch project timelines beyond what anyone expected. But given the push towards autonomics and automation that cloud is helping to drive, I have to ask: is such intimate knowledge of an application really required to deploy it?

My honest, yet cautious answer to this has to be no. I mean, why should I have to locate all of the required components for my application before being able to successfully deploy it? Ideally, if I am using an application-oriented cloud management system like I would expect to find in PaaS, it should be capable of application introspection to make many of these decisions for me. If you go back to my earlier example, the PaaS solution should be able to look into my web application and determine via application metadata, or even source code, that there is a dependency on both a database and a message queue. Additionally, if it notices that there is a service addressable via HTTP, I would expect to at least be prompted to see if I wanted to include a web server in the application deployment. Ultimately, the introspection of the application should result in a proposed deployment architecture for my application that includes both the necessary components and the required integration between those components.

On top of all of this, I would expect that a number of non-functional configuration items could be determined via application introspection as well. If metadata in my web application indicates that I am using basic authentication to protect my enterprise scheduling service, the PaaS solution should be able to configure the application container environment appropriately. Even more interesting would be the ability to specify metadata in my web application that indicated scaling requirements and have the cloud management system automatically act on that!
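The kind of introspection described above can be illustrated with a small Python sketch that reads a servlet deployment descriptor (web.xml). The descriptor elements (`resource-ref`, `res-type`, `login-config`) are standard Java web application metadata, but the proposal logic itself is invented for illustration, not any particular PaaS product's behavior.

```python
# Hypothetical sketch: inspecting a Java web app's deployment descriptor
# to propose a deployment topology and non-functional configuration.

import xml.etree.ElementTree as ET

WEB_XML = """<web-app>
  <resource-ref><res-type>javax.sql.DataSource</res-type></resource-ref>
  <resource-ref><res-type>javax.jms.Queue</res-type></resource-ref>
  <login-config><auth-method>BASIC</auth-method></login-config>
</web-app>"""

def propose_topology(descriptor):
    root = ET.fromstring(descriptor)
    components = ["application server"]  # every web app needs a container
    # Functional dependencies: map resource references to components.
    for ref in root.iter("res-type"):
        if "DataSource" in ref.text:
            components.append("database")
        elif "jms" in ref.text:
            components.append("message queue")
    # Non-functional configuration: security metadata drives container setup.
    auth = root.find("login-config/auth-method")
    config = {"basic_auth": auth is not None and auth.text == "BASIC"}
    return components, config

components, config = propose_topology(WEB_XML)
print(components)  # ['application server', 'database', 'message queue']
print(config)      # {'basic_auth': True}
```

The same pattern could extend to scaling hints: a custom metadata entry declaring scaling requirements would feed the proposed architecture rather than a person's tribal knowledge.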

Is this really too much to expect? I do not believe so. Many talk about the very nature of PaaS being one rooted in application orientation. Well, I think you can only get so far down that road without putting forth solutions that possess an inherent awareness of the applications that are being deployed. In other words, it is what is on the inside of the application that counts and PaaS solutions need to know that!

It’s a bottom up world

December 18, 2011

I was talking to some colleagues the other day about the happenings in the world of cloud. Specifically, we were talking about a few different enterprises we worked with that had recently adopted cloud computing. As we talked, we seemed to home in on the adoption path. Specifically, what group within the companies we worked with had been most responsible for the cloud adoption decision? Was it a group typically responsible for enterprise solution adoption, or were we seeing a fundamental change in how these types of companies made purchases? As we began to unravel the thread that led to the choosing of the particular cloud solution in each case, everything pointed back to a common group: the developers.

I am sure many of you read that, and think, ‘No kidding. Developers are the decision makers now.’ While I agree that it has been clear for some time now that developers play a significant role in the usage and evolution of cloud computing, that is a completely separate notion than saying developers are leading adoption decisions regarding cloud computing. That said, I believe the evidence is incontrovertible and can be found far beyond the handful of use cases that my colleagues and I were discussing. I would wager that if you asked ten companies that were working with cloud computing what group was the strongest influencer in the adoption of their cloud solution of choice, eight would say developers. It is simply the way of the cloud computing world. With that in mind, let’s examine some cloud solution characteristics that are important if service providers want to engage enterprise developers.

The first and perhaps most important aspect in delivering a developer-friendly cloud solution is quite simple. It’s the APIs and interfaces! I don’t care what kind of cloud solution you are delivering or where it is classified in the delivery spectrum (IaaS, PaaS, or SaaS), you had better make your solution ‘callable.’ This is extremely important for developers because it enables so many things. It allows them to automate the use of the cloud solution. It allows their tools to communicate with the cloud solution. It allows them to mix in the cloud solution with their existing development processes, such as continuous integration testing of their application. Without APIs and interfaces, your cloud solution won’t attract many developers and ultimately it won’t attract many users.
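To make the continuous-integration point concrete, here is a short Python sketch of a CI step driving a deployment through a cloud solution's REST API. The endpoint path, payload fields, and token are all invented for illustration; the takeaway is simply that a ‘callable’ solution lets deployment become one more scripted step after tests pass.

```python
# Hypothetical sketch: a CI pipeline step that deploys a build artifact
# through a cloud solution's REST API (endpoint and fields are invented).

import json
import urllib.request

def build_deploy_request(base_url, app_name, artifact_url, token):
    """Construct the HTTP request a CI job would send after tests pass."""
    payload = json.dumps({"application": app_name, "artifact": artifact_url})
    return urllib.request.Request(
        url=f"{base_url}/api/deployments",
        data=payload.encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# A continuous-integration job could call this as its final stage:
req = build_deploy_request("https://cloud.example.com", "scheduler",
                           "https://ci.example.com/builds/42.war", "TOKEN")
print(req.full_url)      # https://cloud.example.com/api/deployments
print(req.get_method())  # POST
```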

In addition to APIs and interfaces, consumability is another important aspect of developer-friendly cloud solutions. Now, you may say that consumability is important regardless of the group in question, and I would agree. However, I believe it is even more of an imperative with developers. I mean, have you tried being an enterprise developer lately? You have to keep up with existing and emerging programming models, evolving application runtimes, new development tooling and more. Further, you have to understand all of these capabilities in the context of the business. After all, it does no good to know all of this if you cannot put it to work for your company. Therefore, I would encourage cloud service providers to keep it as simple as possible. Complexity should be the exception and not the rule!

Finally, for a cloud solution to attract a development following, it must be largely open and interoperable. If you follow the buzz around cloud, you probably hear many analysts and other independent observers call for openness to assuage any enterprise concerns about vendor lock-in. Well, developers are part of the collective enterprise, and they are not immune to the concerns of lock-in. An impressively sharp group of people, developers know that what’s ‘now’ is not necessarily what is ‘forever.’ They would rather know that the time and effort they are putting into learning how to leverage a particular cloud solution will not be wasted if their company decides to steer in a different direction. Open cloud solutions, and especially open cloud application platforms, are incredibly important in assuring developers that the work they are doing now will translate quite well to other cloud platforms. Make sure your approach is one that embodies the notion of openness if you hope to pull in developers.

For many of you, I am sure all of the above is common knowledge at this point. For those of you that have been involved in the cloud wave for a while, you know that developers have been the driving force for some time. I also believe that this signals a need for a significant shift in direction for some companies that wish to be notable cloud service providers. We are living in an environment that is increasingly driven from the bottom up. You either adapt to this new order, or you risk seeing your long-term viability threatened!

The pitfalls of cloud computing

December 6, 2011

There is no shortage of information concerning the ‘right’ way to do cloud. Don’t believe me? Just google ‘adopting cloud computing’ or ‘implementing cloud computing’ and prepare to be bombarded with advice. Don’t get me wrong. I am not attempting to impugn this kind of material or otherwise demean it. I am simply saying that if you want to read about how to successfully adopt, implement, or otherwise begin to use cloud computing, you have a wealth of information at your disposal.

On the other hand, I doubt you will find as much information on the opposite of cloud success, which is of course, cloud failure. This is to be expected. I mean, we all want to focus on the positives right? Yet, at the same time, it is a little odd since I happen to believe that we learn quite a bit from failure. To me, there is no better lesson than one learned the hard way. With that in mind, allow me to share some first-hand hard lessons from adopting the cloud.

If you can’t standardize, you will fall short: A key tenet of cloud computing is automation. More to the point, cloud gives users the opportunity to completely automate the installation, deployment, and configuration of their application environments. That said, it is impossible to achieve high levels of automation without a fairly high degree of standardization. For automation to work, there have to be well-known bounds within which environments will reside. If you cannot drive a culture of standardization (one where not every request is a ‘custom’ request), then you will find automation to be fairly difficult. Without the clear benefits of automation, many in your organization may be left scratching their heads about the real value of cloud.

If change is taboo, look out: Einstein is often credited with saying that ‘Insanity is doing the same thing, over and over again, but expecting different results.’ In this context, I am consistently amazed by users that want to adopt cloud computing, but are EXTREMELY hesitant to change anything about what they do now. Guess what? This rarely works. If you want to start leveraging cloud concepts, you must be ready for change. This may be change to processes, organization, culture, or any number of things. If you can’t or won’t change, then you will struggle with cloud computing.

No integration means no joy: I am of the opinion that the best cloud approach is one that does not subscribe to the ‘one tool to rule them all’ mantra. Instead, I encourage users to consider the tools best-suited to tackle their most valuable use cases, while keeping in mind the integration capabilities of the various tools. If you find yourself looking at a tool that provides no means for integration into various other points within your enterprise, do yourself a favor and just stop! Nothing good can come of an impenetrable black box.

Piece-meal automation will fall short: Not to belabor the point, but automation is a pretty big deal when it comes to cloud computing. After all, it is a big reason why cloud promises both speed and consistency of service delivery. However, if you find yourself looking into a solution or approach that means adding just a little automation here and there to an otherwise heavily user-driven approach, turn and run! Sprinkling automation around like pixie dust rarely results in magic. In fact, it usually turns out to be counter-productive. To be clear, I am not saying that everything has to be 100% automated for cloud to work. That is not realistic in the least, but wedging a little automation in between mostly manual processes is not helpful either.

I realize that the above may make you think that I am not an overly positive person, especially when it comes to cloud adoption. That could not be further from the truth. I would say that I am cautiously optimistic and eager to understand both what does and does not work. Luckily enough, I have been part of more than a few cloud adoption projects, and I have seen both the good and the bad. I hope that passing along some of my viewpoints is helpful, and I am always eager to learn more from my readers. Feel free to pass along your own experiences and feedback!

PaaS and the application server

November 10, 2011

For the last 5+ years, I have been deeply focused on the area of middleware application servers and the container services that they provide. If you go a bit further back and look at the arc of progression in this space, you will see multiple pivots of innovation for these servers. In the beginning, it was all about transactions, and then JEE was added, followed closely (and probably most recently) by a push around SOA. While none of these topics have gotten less important over time, they have become an assumed part of application server runtimes and architectures. That is, customers will not consider an application platform that does not at least support those technologies and paradigms.

Does that mean that the application server space is dying down in terms of innovation and change? Not at all in my opinion. Just like JEE was there to push transactions and SOA was there to extend the innovation around JEE, a host of other hotspots are appearing that are demanding inventive changes in the application server space. One of those is the notion that an application server in and of itself is not the unit of purchase and use any longer. Instead, users want entire platforms that not only provide a robust container for their applications, but also surround those applications with other capabilities and qualities of service. This is definitely an interesting movement, and one that is leading to advances in the state of the art concerning elastic caching, policy-based application management, plugin driven architectures, and more. As intriguing as this notion is, I actually think there is something that will soon push traditional application server runtimes even further: PaaS.

A cursory look at the PaaS industry will quickly reveal that we are at the forefront — I mean the very forefront — of a potentially thunderous shift in the enterprise application space. Initially there has been a large focus on making it fast and simple to provision and manage applications on top of traditional middleware stacks. Of course this cannot and will not be the focus forever. At some point, the gaze of the industry will shift, and we will start to fundamentally rethink what that middleware stack is, what services it delivers, and how it functions. What does this mean? Well, I am sure that many of you have your own ideas, but allow me to offer up a few of mine as food for thought:

Application servers and the kernel blend: There has been a lot of talk about the commoditization of the operating system, but I can tell you from first-hand experience that much of that has been premature. That said, as we move to more PaaS-like models, the OS and basic application server will blend together much more. Application servers will be extremely lightweight, and provide basic container support for your applications. Beyond that, I believe we will see a pull model based on need… which leads me to my next thought.

Application servers become composable: In the cloud we want speed. Well, with traditional application servers, you can only push the limits so far. Sure, you can pre-install, pre-configure, etc., but you still have to deal with the unique configuration for each instance. In some cases this takes a while simply because there are tons of features to configure. What’s more is that some of these features are not actively used. In this light, I think PaaS will force application servers to take on more of a fit-for-purpose model. Features and components will be pulled in when they are needed and not a moment before. Further, those same features will be turned off when they are not actively consumed. This has benefits beyond speed of course, namely a consumption footprint more appropriately aligned with use.
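A rough Python sketch of that pull model, with invented feature names and an invented registry; no real application server works exactly this way, but it captures the fit-for-purpose idea: features start on first use and stop when no longer consumed.

```python
# Hypothetical sketch of a composable, fit-for-purpose server: container
# features activate on first use and can be released when idle.

class FeatureRegistry:
    def __init__(self, available):
        self.available = set(available)   # features the server *could* run
        self.active = {}                  # features actually started

    def require(self, name):
        # Pull the feature in when it is needed, not a moment before.
        if name not in self.available:
            raise LookupError(f"unknown feature: {name}")
        if name not in self.active:
            self.active[name] = {"started": True}
        return self.active[name]

    def release(self, name):
        # Turn the feature off when it is not actively consumed,
        # shrinking the footprint back toward actual use.
        self.active.pop(name, None)

server = FeatureRegistry(["servlet", "jms", "ejb", "webservices"])
server.require("servlet")
server.require("jms")
server.release("jms")      # idle feature stops; footprint tracks use
print(sorted(server.active))  # ['servlet']
```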

Application virtualization becomes a core competency: We can have esoteric arguments about whether cloud computing requires virtualization, but what is the point? Virtualization is elemental to nearly all cloud-based solutions (especially at the IaaS/PaaS layer). That said, most of the virtualization innovation has been focused on server virtualization and management. Going forward, those same concepts will be pushed up to the application. The capability to create virtualized pools of application instances, supporting extremely high densities and reducing management burdens of large environments will become the norm. In fact, those capabilities will become a core part of the application container.

These are but a few of the innovation areas I anticipate that PaaS will drive into the middleware application server space. You may think these are wildly off base, and you may have your own ideas about what PaaS is going to drive into this area. Whether you agree or vehemently disagree, I would like to hear what you have to say!

Unwrapping the self-service cloud

October 4, 2011

If you get a chance to talk with directors or C-level executives about the benefits they expect to derive from adopting cloud computing techniques, expect to hear the terms ‘flexibility’, ‘agility’, and ‘cost reduction’ come up quite frequently. While those are valid expectations, those of us a little closer to the trenches know that it takes a number of different technical capabilities to actually deliver those benefits. Oft-talked about capabilities such as elasticity, rapid provisioning, and configuration automation all come to mind. However, there’s one more capability that we tend to talk about a little less, but that is nevertheless equally important. That capability is self-service access.

It seems like lately, more and more of the clients I have been visiting with are coming to terms with what self-service means to them and what it will take to enable it. Here is the unvarnished truth about effective self-service in the enterprise: It is far from easy! Numerous obstacles stand between the way traditional IT works and a truly self-service organization and they are not the kind of obstacles one can blithely ignore.

A single post could never explore every single challenge, but I would be remiss not to mention the big ones, starting with the collective fear of losing control. For years and years, IT organizations have arranged themselves in teams that are more or less domain specific. To put it a little more bluntly, if you look into any typical shop, you will probably find an infrastructure team, systems software team, middleware team, application team, and more. In most cases, well-defined processes (e.g. ticket requests) specify how these teams interact. The interaction is typically limited to the boundary of these interfaces, and each team more or less independently handles the domain over which it presides. Can you see how self-service may be an affront to such a structure?!?

Remember, when we talk about the type of self-service often bandied about in cloud computing, we are not talking about your garden variety self-service access. Self-service in the cloud normally means that a single user is provisioning (in a loose sense at least) everything they need to run a particular workload – from the hardware to the application. To say this causes some amount of organizational strife is a bit of an understatement.

Of course, traditional inter-team relationships are not the only barriers standing in the way of self-service models. On a recent trip, I listened to a client explain their intentions to move their development and test operations to a cloud-based environment. In this environment, developers and testers who needed application environments would directly provision them from a standard offering catalog. In the discussion, it soon emerged that there were many different sub-teams within the development and test teams, and that made for special considerations in the sharing of resources. For instance, some teams required more resources than others. Some teams required access to systems outside of the cloud. Still other teams needed to dynamically expand their consumption of resources, even at the expense of the other teams being able to consume those resources. In short, there was a complex web of resource consumption needs among the teams.

Now, you may ask, ‘What does this have to do with self-service access?’ Well, there is absolutely no way you can expose these complex dependencies of resource relationship to end users (the developers and testers). How far do you think this company would get with self-service deployments if the deployer had to figure out what resources (hardware, storage, networking, and software) they could safely use before doing anything? Right, not very far! So the trick is, by the time a developer or tester logs in, the decision about the resources to which they are entitled must have already been made. This implies a complex system of rules that considers the entitlements of the current user in relation to all other teams in the organization. And don’t forget, these entitlements could very well change over time.
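A toy Python sketch of that idea, with invented team names and quotas. The point is simply that the entitlement is resolved before the user ever sees a catalog; by login time, the cross-team web of rules has already collapsed into one answer for that user.

```python
# Hypothetical sketch: entitlement decisions are precomputed per team, so
# a self-service user never has to reason about other teams' resources
# (team names, quotas, and rule shapes are invented for illustration).

TEAM_QUOTAS = {
    "web-dev":   {"cpus": 16, "external_access": False},
    "perf-test": {"cpus": 64, "external_access": True},
}
USER_TEAMS = {"alice": "web-dev", "bob": "perf-test"}

def entitlements_for(user):
    # By login time the answer is already computed; the user simply
    # consumes whatever their team's entitlement allows.
    team = USER_TEAMS[user]
    return TEAM_QUOTAS[team]

print(entitlements_for("alice"))  # {'cpus': 16, 'external_access': False}
```

In a real system these entitlements would be derived from rules relating all teams to one another, and would change over time; the lookup above only shows where that complexity must land: behind the login, not in front of the deployer.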

This may sound like I am constructing a false barrier to self-service, but I can assure you these sorts of resource sharing requirements are not at all unique. The basic problem is not one that is easy to solve, and it is even harder to do so in a way that is somewhat consumable to an administrative user in charge of the whole thing. Yet, without reasonably evolved resource sharing capabilities, there is really no way to enable self-service access to multiple different classes of users across a shared resource pool.

Given these two significant (but by no means only) self-service adoption obstacles of cultural churn and effective resource sharing, what are cloud providers to do? First, I believe it is important for cloud providers to acknowledge the typical division of responsibilities in an organization. When designing a solution that will harness resources that fall across traditionally isolated domains, it is important that the design accommodates different types of users. Accommodation means different users act on different resources (and those resources only), and it means that users are presented with a familiar context.

On the resource sharing side, every different resource that makes up the cloud must have associated access rights tied directly to users or groups.  This is certainly not a novel concept, but you may be surprised at how often a particular solution overlooks or under-delivers on this point. It is not enough to simply say that a user has access to a particular resource. One must be able to partition a resource and assign those logical ‘slices’ out to different users or groups. To evolve the concept further, those slices should be able to dynamically grow and shrink based on defined conditions or rules.
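As a sketch (the class and rule shapes are illustrative, not any particular product): a pool partitioned into named slices, where a slice may grow dynamically but only into unallocated capacity, so one group's expansion never silently overruns another's assignment.

```python
# Hypothetical sketch: partitioning a resource into logical "slices"
# assigned to groups, with rule-bounded dynamic growth.

class ResourcePool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.slices = {}

    def allocated(self):
        return sum(self.slices.values())

    def assign(self, group, size):
        # Assign a logical slice of the resource to a group.
        if self.allocated() + size > self.capacity:
            raise ValueError("pool exhausted")
        self.slices[group] = size

    def grow(self, group, amount):
        # A slice may expand dynamically, but only into free capacity;
        # shrink rules would work the same way in reverse.
        free = self.capacity - self.allocated()
        granted = min(amount, free)
        self.slices[group] += granted
        return granted

pool = ResourcePool(capacity=100)
pool.assign("dev-team", 40)
pool.assign("test-team", 40)
print(pool.grow("dev-team", 30))  # 20 -- only the free capacity is granted
```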

There is little doubt that self-service is a critical aspect of the cloud, and it is in fact a key capability in delivering on the promise of cloud. Having said that, I believe there is a lot of room for maturity in this specific area, and providers will have to address the challenges I mentioned above and a host of others. Taking all of this into consideration, I can confidently say that we will see quite a bit of focus on this as we move forward in cloud. What do you think?

The value of workload-aware management

September 23, 2011

A couple of weeks ago, I dropped by the Intel Developer Forum to present a session and listen in on a few others. As always in these types of shows, I learned quite a bit. Most strikingly though, I was reminded of something that is probably quite obvious to many of you: Consumer interest in cloud computing will not be letting up any time soon.

Based on this, and some of the other things I heard at the show, I decided to catch up with fellow IBMer Marc Haberkorn. Marc is an IBM Product Manager and is responsible for IBM Workload Deployer amongst other things. I asked him about IBM Workload Deployer, the competition, and cloud in general. Check out what Marc had to say below:

Me: IBM Workload Deployer is one among many of a growing wave of cloud management solutions. How do you differentiate the focus and business value of it versus the myriad of other solutions out there?

Marc: To sum it up, we offer a combination of depth and breadth.  IWD delivers both workload aware management and general purpose management.  Workload aware management differentiates IWD from its competition, as it can deliver more value for the set of products for which it has context.  There is a set of actions that workload aware management tools can do that is normally left to the user by general purpose management tools.  This list includes configuring a middleware server to know its hostname/IP address, configuring multiple middleware servers to know of one another, arranging clusters, applying maintenance, and handling elasticity.  By handling more of these activities in the automated flow, there are fewer chances for manual errors and inconsistencies to enter a managed environment.

That said, without infinite resource or time, it’s impossible to deliver this context-aware management for everything under the sun.  As such, in order to allow IWD to deliver differentiated value AND allow it to handle a customer’s entire environment, we offer a mix of workload-aware management and general purpose management.

Me: VMware is a good example of a company active in the cloud space, and they seem to keep a consistent pace of new product delivery. What do you think of their product development focus?

Marc: I think VMware has built a very compelling set of capability in the virtualization space.  I think the main difference between VMware’s suite and IBM Workload Deployer is the perspective from which the environments are managed.  VMware puts the administrator in the position of thinking about infrastructure from the ground up.  The administrator is thinking about virtual images, hypervisors, and scripts.  In IBM Workload Deployer, we think about things from the perspective of the app, because that’s ultimately what the business cares about.  By providing a declarative model through which an application can be instantiated and managed, we feel we deliver a deeper value proposition to clients, through workload-aware management.

Me: The ‘one tool to do it all’ approach is a popular, if not hard to achieve goal. What is your advice to users when it comes to choosing between breadth and depth for cloud management solutions?

Marc: The advantages of a “one tool to do it all” approach are many: less integration, more uniformity, less complexity.  As such, customers will always prefer a single tool when possible.  This is why IBM Workload Deployer has focused on not only providing differentiated, deeper value for common use cases but also providing a way to handle the “everything else.”  So my advice to users is not to choose between breadth and depth – use IBM Workload Deployer, which offers both.

Me: To close, I’m curious to know where you think we are heading in the cloud market. What do you think users will be most readily adopting over the next one to two years? Where does the cloud industry need the most innovation?

Marc: I think most users are currently looking at the broad picture of cloud computing, and have been adopting primarily in the private cloud realm.  There are several reasons for this.  One reason is that many customers have a large set of hardware resources which amount to sunk cost that needs to be leveraged.  Another reason is around data security concerns in off-premises clouds, and still another reason is around the human factor of comfort, which has taken time to develop around off-premise cloud models.  However, businesses have become increasingly comfortable with various sources of outsourcing in recent years, especially in mission critical areas involving very sensitive data.  Just look at IBM’s Strategic Outsourcing business, which handles entire IT operations for many large businesses.  I think that trend will (and really, has already begun to) continue in the area of cloud computing, and will lead to more public and ultimately hybrid cloud computing adoption.  In order to get to hybrid cloud computing, I see much of the focus and innovation being associated with data security, workload portability (across private and public, in a seamless fashion), and license transferability between private and public.  When this space reaches fruition, clients will be able to enjoy true elastic economics in a computing model that allows a mixture of owning and renting compute resources and software licenses.

Me: Thanks Marc!

What is an application?

September 8, 2011

What is an application? Well, like many things the answer varies widely based on who you ask. Consider the case of an online ticket booking application. If you ask the users of the application what the application is, then it’s a web page or a series of web pages. If you ask the application development team, they will say it’s a collection of source code, configuration files, static content, etc. The infrastructure team will tell you the application is the application server, web server, and operating system. If you go further and ask the hardware folks what they think, they will likely say the application is the virtualization platform running on their server. As you can see, a question as seemingly innocuous as ‘What is an application?’ is actually anything but.

Now, you may say, ‘Who cares?!? Is it important to agree on what an application is?’ In general, I would say no. In my mind, it is not all that important that different facets of an organization agree on what they consider an application to be. As long as they can communicate their viewpoint to others, everything should proceed reasonably smoothly. However, in the world of PaaS, I believe that an agreement on what the term application means is of paramount importance.

I have discussed the many different approaches PaaS providers can take concerning the delivery of their product. I think the particular approach one chooses is a critical choice, but a look at the bigger picture reveals a larger set of challenges. The move to a hybrid cloud approach is a shift that seems inevitable. This shift to hybrid clouds will encounter many issues, and the shift to hybrid PaaS environments will not be exempt. From a technical standpoint, one of the key obstacles that hybrid cloud computing needs to confront is the need for meaningful commonality of management. We all know that the key unit of management depends on the type of cloud (IaaS vs. PaaS vs. SaaS), and in the context of PaaS the application is the focal point.

This brings us back to my original point of defining an application. How can a hybrid cloud platform for PaaS provide meaningful commonality of management if there is no general consensus on what constitutes an application? The answer is simple: it can’t.

Let’s consider a simple example to put some concreteness around this. I have a hybrid PaaS platform, and I log into the management console to manage an application that is potentially running in multiple cloud environments. I decide that I want to attach a policy to that application that describes scaling behavior – perhaps based on response time. There are two things that can happen at that point. Either the hybrid cloud platform takes this directive from me and translates it for each and every cloud provider, or the hybrid cloud platform sends this information in a manner every provider can interpret and enforce.

I think, or hope, that we would all agree the latter option is more tenable. If a hybrid PaaS tool can only give meaningful management if it knows lots of details about each underlying cloud platform, well, we won’t have many hybrid PaaS tools. That said, there is no way for the latter option to occur in the absence of some sort of standard. That includes a standard (I don’t mean standard in the literal IT sense necessarily) of what an application entails. After all, how could you enforce a scaling policy on an application if you cannot determine what an application is??
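To make that second option concrete, here is a minimal sketch of what a provider-neutral scaling policy could look like. Every field name and the serialization helper below are hypothetical, invented purely for illustration; no such standard exists today.

```python
import json

# A hypothetical, provider-neutral scaling policy attached to an
# application. Every field name here is invented for illustration.
scaling_policy = {
    "application": "ticket-booking",  # the agreed-upon unit of management
    "policy_type": "scaling",
    "trigger": {"metric": "response_time_ms", "threshold": 2000},
    "action": {"scale": "out", "max_instances": 10},
}

def serialize_policy(policy):
    """Serialize the policy into a common wire format that any
    participating cloud provider could parse and enforce itself."""
    return json.dumps(policy, sort_keys=True)

wire_format = serialize_policy(scaling_policy)
```

The point is that each provider parses and enforces this common format on its own, rather than the hybrid platform translating the directive into provider-specific calls.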

Considering this, what would you say an application really is? Is it a grouping of HTTP URIs? This seems to be the easiest way to describe an application but would be lacking in many scenarios (EJB applications to name but one). I wager that if you really sit down and think about a common way to describe an application you will end up debating with yourself! I think the issue of defining an application is critical toward the ongoing success of PaaS, and I also think it serves as a reality check for hybrid clouds in general. While there is definitely a larger shift toward the hybrid approach, we clearly have a long way to go.

An eye on the competition

August 9, 2011

When it comes to IBM Workload Deployer, I have no illusions regarding the tenacity of our competitors. They are out there, and they are constantly on the attack. Their dubious claims aside, I know this because I still get asked quite frequently to explain the benefits of IBM Workload Deployer versus some other general purpose cloud provisioning and management solution. So, while I have done that many times in various forums, I figured it was time to yet again address this question.


When comparing IBM Workload Deployer to the other available solutions, I honestly feel comfortable saying we have no direct competition. I know you believe me to be biased, and rightly so, but let me explain why I think the competition is much more perception than reality. To do this, I want to focus on the patterns-based approach that IBM Workload Deployer takes to cloud provisioning and management.


Let’s start with virtual system patterns in IBM Workload Deployer. Virtual system patterns allow you to build and deploy completely configured and integrated middleware environments as a single unit. These patterns build on top of our special IBM Hypervisor Edition images that bottle up the installation and quite a bit of the configuration of the underlying middleware products. Further, when using virtual system patterns, IBM Workload Deployer manages and automates the orchestration of the integration tasks that need to happen to set up a meaningful middleware environment. For instance, when deploying WebSphere Application Server you do not need to do anything on your end to deploy a clustered, highly available environment. When deploying WebSphere Process Server in this manner, you do not need to take any administrative actions to produce a golden topology. You just deploy the pattern, and the images, patterns, and appliance take care of the rest. Of course, you can add your own customizations and tweaks in the pattern, but we take care of the common administrative actions that would otherwise require your care.


I am not sure of a better way to say it, so I will be blunt: When deploying products delivered in IBM Hypervisor Edition form, no other solution compares to the virtual system pattern capability offered by IBM Workload Deployer. It is not even close. Can you provision products like WebSphere Application Server or WebSphere Portal using other cloud provisioning tools? Sure, but you should be aware that you will be writing and maintaining your own installation, configuration, and integration scripts. It is also likely that you will end up developing a custom interface through which deployers request your services (something not necessary when using the rich IBM Workload Deployer UI). All of this takes time, resources, and money. More importantly, this is not differentiating work and distracts from the real end goal: serving up applications. IBM Workload Deployer can deliver this operational capability right out of the box, and it can do so in a way that costs less than custom developed and maintained solutions.


When considering IBM Workload Deployer versus the competition, it is also important to consider the new virtual application pattern capability delivered in version 3.0. The virtual application pattern capability is a testament to IBM’s thought leadership in, and commitment to cloud computing for middleware application environments. Virtual application patterns take a bold step forward in raising the level of abstraction beyond the middleware environment and up to the most important resource in enterprise environments: the application. With a virtual application pattern, you simply provide your application and specify both functional and non-functional requirements for that application. When ready, you deploy that pattern, and IBM Workload Deployer sets up the necessary middleware infrastructure and deploys the provided application. Moreover, the appliance will monitor and autonomically manage the environment (i.e. scale it up and down) based on the policies you specify. Quite simply, this is a deployment and management capability our competition cannot match.
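As a rough illustration of that declarative model (the structure, key names, and `derive_topology` helper below are my own invention, not the actual virtual application pattern format), the deployer’s input reduces to an application artifact plus declared requirements, from which the platform works out the middleware topology:

```python
# Hypothetical sketch of an application-centric deployment model: the
# deployer supplies an artifact plus functional and non-functional
# requirements, and the platform derives the middleware infrastructure.
virtual_application = {
    "artifact": "ticketapp.ear",
    "functional": {"database": "required", "messaging": "none"},
    "non_functional": {
        "scaling": {"metric": "response_time_ms", "threshold": 1500},
        "availability": "high",
    },
}

def derive_topology(pattern):
    """Toy stand-in for the platform's job: map declared requirements
    onto concrete middleware components."""
    components = ["application_server"]
    if pattern["functional"].get("database") == "required":
        components.append("database")
    if pattern["non_functional"].get("availability") == "high":
        components.append("load_balancer")
    return components

topology = derive_topology(virtual_application)
```

Notice that nothing in the input names a server, cluster, or image; the environment is implied by the requirements, which is what lets the platform scale it up and down autonomically.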


There is more to consider than just patterns though. The appliance makes it really simple to apply maintenance and upgrades to environments running in your cloud. It can autonomically manage your deployed environments (through policies in virtual application patterns and the Intelligent Management Pack for virtual system patterns), and it effectively abstracts the underlying infrastructure of your cloud environment. This abstraction is the reason IBM Workload Deployer can deploy your environments to PowerVM, z/VM, and VMware environments. It also makes it easy to deploy the same environment to multiple different underlying platforms, thus accommodating typical platform changes that happen as an application moves from development to production. The best part of all is that the deployer’s experience is the same regardless of the underlying infrastructure since the appliance hides any platform idiosyncrasies.


The bottom line is that the appliance is purpose built to deploy and manage middleware and middleware application environments in a cloud, and as such, delivers immense out-of-the-box and ongoing value in this context. I should also point out that the design of the appliance acknowledges its purposeful nature. The CLI and REST API interfaces allow you to integrate the appliance into the operations of those general purpose provisioning solutions. In this way, IBM Workload Deployer acts as a middleware accelerator for your cloud computing efforts. This means that if you do have a general purpose solution, IBM Workload Deployer can still provide considerable value and let you avoid developing a considerable subsystem dedicated to deployment and management of middleware in the cloud. We believe in this type of integration, and have in fact built it into our own IBM solutions.
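To give a feel for that integration style, here is a hedged sketch of how a general purpose orchestrator might hand middleware deployment off over REST. The host, resource path, and payload fields are placeholders I made up for illustration, not the documented IBM Workload Deployer API:

```python
import json
from urllib.request import Request

# Hypothetical handoff from a general purpose provisioning tool to the
# appliance's REST API. The URL and payload fields are made up for
# illustration only.
payload = {"pattern": "was_cluster_golden_topology", "cloud_group": "production"}
request = Request(
    url="https://iwd.example.com/resources/deployments",  # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# A real orchestrator would now send the request and poll the returned
# deployment resource for status; here we only construct the call.
```

The general purpose tool keeps ownership of the overall workflow; the appliance owns everything middleware-specific behind that single call.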


There is certainly more to IBM Workload Deployer and its differentiating value, but I think the above is a good start. When it comes down to creating clouds focused on middleware platforms and middleware applications, nothing stacks up to IBM Workload Deployer.

How are you building your cloud?

July 30, 2011

“What are the use cases you want to pursue with your enterprise cloud project?” The question seems innocuous enough, but I can tell you from first-hand experience that it is anything but. If you asked five people from different teams within the same company to each give you three use cases, you would end up with 15 different scenarios. Such is the case in the current enterprise landscape where points of view on cloud vary about as much as you can imagine. Each organizational fiefdom has its own set of pain points, challenges, and initiatives to tackle, and each one of those teams has an idea of how the cloud can help.

Being on the service provider side, listening to users roll off a burgeoning list of usage scenarios is equal parts exciting and disconcerting. I find myself caught somewhere between, ‘Wow, that sounds cool and fun!’, and ‘How am I going to show that in an easy, yet valuable manner?’ It is not an easy task. Lest you think I am out looking for pity, I sympathize even more so with the company. I especially feel for the person on the consumer side who gets the task of leading the ‘enterprise cloud initiative.’ Talk about a thankless, exhausting, and challenging role. On a given day, the only team that is rallying behind you is the one who thinks you are about to implement the product they are championing!

Being a part of many of these endeavors, I like to think that I have picked up a lesson or two applicable to most cloud implementation projects. From the service consumer side, I feel compelled to warn about what seems to be transforming into an anti-pattern in cloud adoption. While it is fine, and in fact advisable, to have an enterprise cloud strategy, it is not usually so wise to attempt to fulfill that strategy with a single tool. This comes back to a question I have posed before: Do users need a single tool or a single pane of glass? Well, judging from the projects I have seen so far, users need a single pane of glass far more than they need a single tool.

The problem with adopting a single tool that purports to deliver on every one of the use cases for an organization’s cloud strategy is that it implies quite a bit of work on the user’s part. It should be pretty obvious that as a tool’s flexibility increases, its complexity usually does as well. From a cloud provisioning and management standpoint, this usually means that you will be doing a lot of scripting for installation, configuration, integration, orchestration, and management. It’s not that these tools do not have their place, but my main caution to potential users is to carefully consider just how many use cases they can realistically address with this approach given their timeline. I have seen a few projects go way over their target dates due to severe underestimation of the effort required for this kind of work. Oh, and don’t forget all those scripting, orchestration, and management assets require care and feeding over time, so it’s not just the initial effort to consider.

Given the potential pitfalls of a single tool approach, I usually advise users I work with to consider a careful balance of general purpose and purposeful solutions as a way to fulfill their enterprise cloud strategy. While purposeful solutions do not deliver the flexibility of their general purpose brethren, they typically provide much more out-of-the-box value. The design and implementation of these solutions focuses on handling a subset of use cases really well, thus severely reducing the time and cost of addressing those particular needs. Many of the assets users would otherwise build and maintain themselves come embedded in the solution.

There is one big caveat to a blended approach. The integration between all of the different solutions must be easy enough so that the wiring work does not require as much effort as the single tool approach would. This is mainly a challenge to the service provider side of the house. I believe providers, especially those like my employer and others that address multiple different layers of the cloud, are starting to recognize this need. It is one thing to be able to answer an enterprise’s cloud use cases with a set of products. It is quite another to enable an enterprise cloud with a single, well-integrated, consumable solution!