Archive for the ‘Uncategorized’ Category

Introducing IBM Worklight v5

June 26, 2012

Not too long ago, I wrote a post about IBM’s then very recent acquisition of a mobile application platform company called Worklight. Given that it has been a while (at least in mobile market time) and IBM recently released (6/15) the new IBM Worklight 5.0 product, I thought it was a good time to revisit the topic.

First things first, the acquisition of Worklight by IBM and the subsequent release of IBM Worklight does not change the nature of the solution. It is still a mobile application platform, and it still provides the end-to-end capabilities that you probably (or at least should) expect from such a solution. At a high level, those capabilities address the following needs:

Application development and build: IBM Worklight provides a robust development environment called IBM Worklight Studio that allows you to quickly construct mobile applications for multiple mobile operating system platforms. The Worklight Studio does not constrain you in terms of mobile application architecture (native, hybrid, mobile web), nor in terms of the libraries (JavaScript and otherwise) that you choose to use in your mobile application. Furthermore, the development environment includes an integrated build engine that makes it easy for you to initiate builds to produce device-installable binaries from a central location.

Application runtime: IBM Worklight provides mobile application runtime capabilities that span from the mobile device to the enterprise backend through the IBM Worklight Device Runtime and the IBM Worklight Server respectively. The device runtime provides a rich set of APIs that are cross-platform in nature and offer easy access to the services provided by the IBM Worklight Server. Additionally, the device runtime provides a key-value, encrypted cache that your mobile applications can use to store data locally (on the device) in a secure manner. This cache can be instrumental in enabling applications to function offline and in improving overall application performance.
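To make the offline idea concrete, here is a tiny Python sketch of the pattern the device-side cache enables: writes always land locally first, and queued changes are replayed to the server when connectivity returns. This is a language-agnostic concept sketch only, not the actual Worklight device-runtime API, and it performs no real encryption.

```python
import json

class OfflineCache:
    """Illustrative key-value cache with an offline sync queue."""

    def __init__(self):
        self._store = {}       # local key-value data
        self._pending = []     # writes not yet synced to the server
        self.online = False

    def put(self, key, value):
        # Always write locally first, so the app keeps working offline.
        self._store[key] = value
        self._pending.append((key, value))

    def get(self, key):
        # Reads are served from the device, never from the network.
        return self._store.get(key)

    def flush(self):
        # Replay queued writes to the server once connectivity returns.
        synced, self._pending = self._pending, []
        return [json.dumps({"key": k, "value": v}) for k, v in synced]

cache = OfflineCache()                  # device starts offline
cache.put("profile", {"name": "Ada"})
assert cache.get("profile") == {"name": "Ada"}   # read served locally
cache.online = True
sent = cache.flush()                    # queued writes sync on reconnect
assert len(sent) == 1
```

The same shape is why a local cache improves perceived performance: the read path never waits on the network.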

On the backend, the IBM Worklight Server provides services such as data integration, data transformation, application security, device authentication, direct application update, auditing, and more. It is worth pointing out that the IBM Worklight Device Runtime and the IBM Worklight Server are engineered to work together. This ensures your mobile applications have easy access to the services provided on the server, and it means there is a mechanism to deliver value right out of the box. There is ample evidence of the value of this integration, from the ease with which you can integrate existing enterprise information systems with your mobile applications to the ability to directly and proactively update application assets that are running on a user’s device.

Application management: Of course, just as important as providing for the ability to develop, build, and run mobile applications is enabling you to effectively manage and administer said applications. IBM Worklight provides a web-based console that delivers an easy-to-use UI for leveraging some of the management features of the IBM Worklight Server. From the console you can manage multiple versions of an application across multiple platforms, send notification messages to application users, remotely disable specific application versions to prevent user access, and manage push notification sources and targets. Additionally, the console provides reports based on the audit data captured by the IBM Worklight Server. These reports provide important information about application usage such as application visits per day, total visits per application, newly detected devices per application, and more. Furthermore, the source data for these reports can be exported into your BI or analytics tool of choice.

I started off by mentioning the recent release of IBM Worklight v5, so I think it is appropriate to do a quick rundown of a few of the enhancements made in that recent release:

WYSIWYG editor for quickly building mobile application UIs in a drag and drop fashion. The new Rich Page Editor can be used to quickly build mobile UIs using HTML5 and Dojo Mobile constructs. In no way does it limit the control you have over the underlying application source code, but it does make the construction of the views in your mobile application much simpler.

Integration with IBM Cast Iron via an out of the box adapter. This adapter allows for bi-directional communication with IBM Cast Iron thereby delivering a rich set of application connectivity options right out of the box.

Simulator for quickly testing mobile applications in a browser. The new Mobile Browser Simulator allows you to test your mobile application in a browser while simulating many different device form factors across a number of different platforms.

Channel for easily sharing mobile applications during the development and test cycle. The new IBM Application Center makes it easy to share mobile applications amongst important stakeholders of a project while enabling those stakeholders to immediately provide feedback about the application.

Much more. Don’t believe me? Check it out for yourself by visiting the developerWorks site for mobile development. On that page, you will find links to download the new IBM Worklight Developer Edition which provides the end-to-end capability of the platform installed as a set of Eclipse plugins. That’s right, the only thing required to install and get your hands on the product is an Eclipse environment! Furthermore, it is completely free to install and use on your local machine!

Okay, so this has gone on a little longer than I intended, but I wanted to wrap up by addressing one more thing. The entire mobile space is hot, and it is crowded with technology providers. A question that I often get asked from my clients is ‘Why IBM Worklight?’ While the answer always varies based on the unique scenario, it often comes back to a simple but profound statement: ‘IBM Worklight delivers a truly open approach for developing and running mobile applications.’ This is first encountered in the development environment where you can choose both the mobile application architecture and libraries that you want to use to do the job. It extends to the runtime by allowing you to quickly and easily integrate with a heterogeneous ecosystem of enterprise information systems. To be sure, there are many valuable and differentiating features and functions of the platform, and if you are so inclined I strongly encourage you to dive deeper into it (after all, it is free to try). That said, I do not think I can overstate the value of a truly open platform!


The Converged Application Container

March 30, 2012

Just about everything I do in my job points back to a construct that is a foundational part of the modern enterprise IT organization: the application server. Now, when you say ‘application server’, thoughts immediately wander to Java. More specifically, folks tend to think about servers that support traditional Java EE applications and rightfully so. Over the past ten years, the predominant type of application server has been of the Java EE variety, and the industry has experienced tons of innovation in this area with more undoubtedly yet to come. But is the term ‘application server’ only applicable when we are talking Java?

The one-word answer to that question is ‘no’. While the strong association exists today thanks to a widely adopted programming model (widely adopted for many good reasons I should say), I cannot help but wonder what associations we will make with the term ‘application server’ three, five, or ten years from now. I say this because I believe we are going to witness interesting changes in the application server space. The rise of cloud and mobile along with the persistence of scripting and functional programming languages will, in my opinion, force radical evolution and innovation in the application server. Most importantly, the application server will transform into a converged container for hosting all manner of application types.

That’s right. The traditional, Java-centric application server is going to morph into a container that is much more multi-purpose than it is today. Of course, this is not going to happen without the advent of some key functional enablers. While it is impossible to list or even predict all of them, here are a few that I believe will be important in this movement:

1) Programming language/model normalization services: Before the idea of a converged application container can proceed, there will have to be some sort of translation approach built into the system. This translation system will be responsible for normalizing the different programming languages and models used by applications within the server. The normalization will have to be to some sort of common language (perhaps Java) supported by the server, but the idea is that this action will be mostly transparent to the application.

2) Functional composability: Due to the need to support a diverse set of application types, a converged application container will actually possess a wide-ranging set of capabilities. In pulling in these capabilities, it is not a stretch to say that bloat could become a problem. This bloat could surface itself via conditions such as excessive memory usage or large disk footprints to name but a few. In order to avoid this, the application container must be composable. That is to say, the container must be able to dynamically enable features it needs based on the application types that it is supporting. It is interesting to think about the role that OSGi may play in supporting this kind of approach.

3) Context-driven management services: Just as the functionality of the container must be built via modular composition, the management services it offers up must be right-fit and contextually aware. The application container must be able to enable and deliver the right set of management capabilities based on the applications it is hosting. Managing a mobile application can be markedly different than managing a traditional Java EE enterprise application. Since the same container may be hosting either type of application, it really needs to be able to offer up management interfaces that are appropriate to either. Just as with the functional set of capabilities, it must do this without suffering from bloat. In short, management capabilities must also be composable, and they must be contextually aware.
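The second and third enablers above can be sketched in a few lines of Python: a container that lazily composes both its functional features and its management views from the types of applications it hosts, so nothing is loaded that the current workload does not need. Every name here is illustrative, not a real product API.

```python
# Concept sketch of a composable, context-aware application container.
# Feature and management sets are enabled per hosted application type.

FEATURES = {
    "java-ee": {"servlet-engine", "jdbc-pool"},
    "mobile":  {"push-gateway", "device-auth"},
}

MANAGEMENT = {
    "java-ee": {"thread-dump", "datasource-stats"},
    "mobile":  {"app-version-control", "device-report"},
}

class ConvergedContainer:
    def __init__(self):
        self.enabled_features = set()
        self.management_views = set()

    def deploy(self, app_name, app_type):
        # Compose only what this application type actually needs,
        # avoiding bloat from capabilities no hosted app uses.
        self.enabled_features |= FEATURES[app_type]
        self.management_views |= MANAGEMENT[app_type]
        return app_name

c = ConvergedContainer()
c.deploy("storefront", "java-ee")
assert "servlet-engine" in c.enabled_features
assert "push-gateway" not in c.enabled_features  # no mobile features yet
c.deploy("field-app", "mobile")
assert "device-report" in c.management_views     # mobile management appears
```

An OSGi-based container would do this with bundles and a dynamic service registry rather than dictionaries, but the composition principle is the same.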

The idea of a converged application container may seem pretty far-fetched to some, but I truly believe it will be an area ripe with innovation for the next three to five years. Albeit just a select few, the above capabilities will play an important role in making the promise of converged containers real. Enough from me though! I’m curious to know what you think about this whole idea. Reach out to me on Twitter @damrhein and let me know!

Constructing clouds within IBM

October 13, 2010

While it is nice to consider the potential benefits of cloud computing for a particular organization, it is even better to see concrete results from a cloud implementation. I recently got the chance to do just that when I worked with one of our IBM test organizations to document their adoption, implementation, and return on investment from a private cloud project.

More specifically, I studied private cloud use by our WebSphere Application Server Test Organization. To give a little background on this team, they are responsible for continuous integration and regression testing for our WebSphere Application Server project. These tests are crucial to the development organization as they are the lynchpin to enabling agile development. As if that were not already a big enough challenge, they have to provide these services for eight different concurrent releases of the product on a daily basis. This results in the use of 2000+ machines, 400+ engineers (including development), and the running of over 2,000,000 tests each and every day.

Under increasing pressure to go faster and run more tests without the benefit of additional resources (people, money, etc.), the team introspected on their current processes and identified key areas where improvement would equate to increased efficiency. In particular, the team identified the following pain points in their lab:

– Under-utilized hardware: On average, the team found that their 2000+ machine lab experienced utilization rates between 6% and 12%.

– Inconsistent provisioning: The test organization already had a significant automation framework that enabled them to provision fully functional and customized test environments in only three hours. However, anywhere from 20-50% of provisioning actions resulted in environments not fit for use.

– Manageability of infrastructure: The team provided testing environments and services for many small, agile teams. As the number of their customers grew, it became harder to effectively manage hardware and software resources for these teams.

The leadership in the test organization felt if they addressed these inefficiencies, they could continue to quickly deliver quality testing services in an impressive volume. Not long after identifying these problems, they decided to pursue a cloud computing approach to help them deliver testing environments for their various constituents.

In particular, the team turned to the WebSphere CloudBurst Appliance as a means to rapidly set up WebSphere Application Server test environments within their existing lab. For them, this solution delivered many key advantages and benefits:

Reuse of existing hardware assets: The organization did not purchase new hardware. Rather, they used the existing, under-utilized hardware and drove utilization rates from 6-12% up to an average of 60%.

Reuse of existing software/testing assets: The team did not write new test applications, test frameworks, configuration scripts, etc. They were able to reuse these assets as they were since WebSphere CloudBurst does not change the fundamental nature of the WebSphere Application Server.

Increase in provisioning consistency: WebSphere CloudBurst patterns gave the team a means to build and persist units that represented their fully configured WebSphere Application Server test environments. Any time they needed an environment, they simply deployed a pattern and they could be extremely confident that the resulting setup was fit for use. They decreased provisioning failures by 45%, with most remaining failures due to resource capacity issues.

Significant ROI with incremental adoption: In the initial pilot, the test organization dedicated around 6% of their lab for use with WebSphere CloudBurst. With just this minimal upfront resource investment, the team realized $500,000 in direct savings and another $2.1 million in efficiency gains bolstered by a significant increase in administrator efficiency.

There is more to the study of the WebSphere Application Server Test Organization and their private cloud for testing environments. There are also important lessons one can learn from their process of adoption and implementation of WebSphere CloudBurst. While some lessons are specific to WebSphere CloudBurst, many are generally applicable for anyone constructing a private cloud within their organization. If you happen to be at Cloud Expo West in November, I will be talking in detail about this test organization and their private cloud. I hope you have a chance to stop by!

The intersection of legacy process and cloud computing

October 4, 2010

One of the really cool things about getting out and talking with users about different cloud computing approaches is that, in many cases, you get to pose radically different ways of doing things within an organization. In some discussions, you can practically see the wheels turning behind the eyes of the audience as they think about what this new approach means to them, their teams, and their organization. This usually results in some fun (and challenging) conversations about what cloud adoption means to the existing operational process of users.

I spend much of my time talking with users about lifecycle management for middleware application environments. Anyone familiar with these environments knows there is nothing trivial about them. They are made up of many different parts (operating system, middleware software, applications, etc.), and each of those parts usually involves its own setup and configuration. Of course, not only does each part require its own set of installation and configuration steps, there is also the need to configure integration between the parts, and there are ongoing operational needs as well. To accomplish the various installation, configuration, integration, and operational activities, companies define processes that they typically assign to different teams, thus leading to the all too familiar organizational silos.

When talking to an organization about the nuances of their approach for managing application environments, I always find it interesting to see how a cloud computing approach affects their organizational silos, and thus their operational processes. While these effects are sometimes viewed in a negative light (mostly because they challenge the status quo), it is the job of the cloud provider to show value and benefits in light of any adoption hurdles.

To consider the operational effects cloud can have on application middleware environments, let’s consider a common scenario: updating a deployed application in a middleware environment. Whether you realize it or not, over the years, the middleware industry has implied a process for carrying out application updates. This process means that users update applications in-place without addressing the underlying infrastructure. This methodology is really born from two historical challenges associated with middleware environments:

– It took too long to install and configure the environments

– It was too hard to consistently configure the same environment

Now, there is nothing inherently wrong with this process of updating deployed applications, and in some cases, it is the right approach. However, in other cases I have heard users express frustration with this approach. In some cases, users want to deploy application updates into a green field. In other words, they want to deploy their newly updated application onto a newly installed and configured set of application infrastructure.  This is especially true for development and testing environments where users want to eliminate the likelihood of infrastructure configuration drift, thereby ensuring they are testing only the new behavior of the updated application.

Based on previous techniques for managing middleware application environments, the green field application update scenario was hard to support in a viable manner. On the other hand, I show users quite often how cloud computing solutions make this scenario easy to support from a technical perspective. A cloud-based approach makes it possible for users to define and persist templates or patterns that represent their fully configured application environments. Effectively, when combining these patterns or templates with technologies like virtualization, one can provision updated applications on top of a clean infrastructure install in a matter of minutes. Further, they can rest assured knowing that they can faithfully reproduce these environments repeatedly.
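A tiny Python sketch makes the green-field update flow concrete: the pattern is a persisted environment definition, and every application update provisions a fresh copy of it rather than patching infrastructure in place. The component names and fields below are invented for illustration.

```python
import copy

# Concept sketch: a "pattern" captures the full environment definition,
# so an application update deploys onto a freshly provisioned stack
# instead of being applied in place.

PATTERN = {
    "web_server": {"version": "8.5", "routes_to": "app_server"},
    "app_server": {"version": "8.5", "jdbc": "db"},
    "db":         {"version": "10.1"},
}

def provision(pattern, app_version):
    # Every deployment starts from the same persisted definition,
    # which eliminates configuration drift between environments.
    env = copy.deepcopy(pattern)
    env["app_server"]["application"] = app_version
    return env

env_v1 = provision(PATTERN, "myapp-1.0")
env_v2 = provision(PATTERN, "myapp-1.1")   # the green-field update
# Only the application differs; the infrastructure is identical.
assert env_v1["db"] == env_v2["db"]
assert env_v2["app_server"]["application"] == "myapp-1.1"
```

Because both environments derive from one template, any behavioral difference you observe in testing can be attributed to the application change, not to drifted infrastructure.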

Now, the real fun starts when we consider these technical benefits and the capacity of a given enterprise to absorb such approaches. As I said earlier, oftentimes when I discuss and show these new cloud-based approaches to old challenges, I can almost see the internal contemplation among the audience members. This contemplation is not so much an attempt to grasp any sort of complex technical detail of the cloud approach, but instead, it is an attempt to reconcile what they see proposed versus what they do now.

Consider the application update scenario again. If I can outline an approach whereby a single user can provision an entire application infrastructure environment, including the updated application, in a matter of minutes, what does that mean to the organization? While it certainly provides a real benefit for some subset of the application update scenarios, it is almost certainly at odds with the current operational processes. Can the enterprise adopt an approach that condenses a set of actions typically requiring multiple teams to carry out multiple steps independent of each other? Ultimately, the success of cloud adoption in many of these cases hinges on cloud providers understanding a potential consumer’s existing operational processes and making a best fit proposal for their new approach. It’s not easy work for sure, but no one will call it boring!

A patterns-based approach to middleware

September 24, 2010

Yesterday, I joined some colleagues on a webcast where we had a chance to talk to some of our users about the technical concepts behind WebSphere solutions for building, consuming, and integrating clouds. We discussed ways to enable autonomic, policy-driven runtimes, construct elastic, in-memory data grids, and integrate on-premise and off-premise applications. It was all fun stuff to talk about, and hopefully the webcast participants got a good look at how cloud computing is taking hold in WebSphere. Having said that, for me, one topic stood out above the rest and that topic was pattern-based middleware for the cloud.

Ready for an understatement? Configuring and subsequently deploying enterprise middleware environments is not trivial. When I talk about enterprise middleware environments, I am referring to a collection of application infrastructure deployed in support of a particular business service or application. If you were not familiar with the makeup of these environments, the number of different components involved would likely surprise you. Consider a standard web application, one that, for all intents and purposes, is a simple CRUD application. It would not be at all surprising to find proxy servers, application servers, and database servers deployed in support of this seemingly simple web application. In all truth, these are just the basic components, and it would not be beyond the norm to see more complex integration solutions such as enterprise service buses in play.

Simply put, there is a lot of work and knowledge that goes into setting up middleware application infrastructure in support of business applications and services. Think about what we are asking of users. We are asking them to understand what components are necessary, and more importantly, we are asking them to understand how to make all of those components work together to achieve the desired result. It is a lot to ask, especially considering these users really just want to support the thing they are most interested in, their application.

Now, one of the big focuses of the cloud movement is making it easier to deploy these kinds of complex IT environments. In fact, if we look at advanced virtualization techniques commonly used in the cloud, we see that they make deploying complex units like application servers and databases a lot easier. Virtualized units encapsulate the installation, and in most cases, quite a bit of configuration for these components. Users simply step through some activation process for the virtualized unit, and voilà, they have their service.

Clearly, this approach makes things easier on the person whose job it is to deploy these services. They get their services deployed fast, and consistently, without having to do mundane install and basic configuration. However, there is still a missing link. In many cases, users are still responsible for the more complex inter-product integration work. They have to configure the servers to talk to the database, and they have to set up the web servers to route to their worker nodes, among other things. Now, you may say, ‘Why not put the complete application environment in a single virtualized unit?’ After all, if that were the case you could not only install everything into the unit, but you could also capture the integration of the different components. While it sounds good, it is often impossible or undesirable to do for a number of different reasons. So, how can we make this integration work easier on the end user then? I say the answer is patterns-based middleware.

A patterns-based approach to middleware allows users to build or consume deployable units that represent more complete middleware application environments. Instead of having one unit for an application server, another for a web server, and yet another for a database, users have a single unit that represents all of these together.

The pattern, often by utilizing the individual virtualized units, encapsulates not only the basic installation and configuration of each part in the environment, but it also represents the more complex integration steps required to make the environment meaningful. Ideally, these patterns also provide extension mechanisms to allow users to include their own resources as part of the deployable unit (think applications & their configuration). Put simply, patterns represent completely configured and integrated application environments, and they are an atomic, deployable unit for a virtualized or cloud environment.
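The three ingredients of a pattern described above (the parts, the wiring between them, and user extension points) can be sketched in a few lines of Python. All of the names here are invented for illustration; a real product would express this in its own pattern definition format.

```python
# Concept sketch of a middleware pattern as a single deployable unit:
# it bundles the parts, the integration between them, and an extension
# point for user-supplied assets.

def make_pattern():
    return {
        "parts": ["web_server", "app_server", "database"],
        # Integration captured once, instead of hand-configured per deploy:
        "links": [("web_server", "routes_to", "app_server"),
                  ("app_server", "jdbc", "database")],
        "extensions": [],   # user assets (applications, config scripts)
    }

def extend(pattern, asset):
    # The extension mechanism: users add their own resources to the unit.
    pattern["extensions"].append(asset)
    return pattern

def deploy(pattern):
    # Atomic deployment: parts, wiring, and extensions come up together.
    return {"running": pattern["parts"],
            "wired": pattern["links"],
            "assets": pattern["extensions"]}

p = extend(make_pattern(), "inventory-app.war")
env = deploy(p)
assert ("app_server", "jdbc", "database") in env["wired"]
assert env["assets"] == ["inventory-app.war"]
```

The key design point is that the links are part of the unit itself, which is exactly the integration work that per-component virtualized images leave to the user.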

I really believe the patterns-based approach to middleware tackles many of the consumability issues around application middleware environments. It removes the days of tracking down problems in your application environment that are nothing more than configuration/integration issues. In addition, one could envision a marketplace from a vendor or community that offered several different solutions-oriented patterns. This would be a huge advantage for those users who knew what they wanted to achieve (e.g., a highly available web application environment), but not necessarily what combination of components was necessary to achieve it.

There is a lot of promise in patterns-based middleware approaches. We have been talking about it for a while now with our users, and I suspect other providers are talking, or soon will be talking, about the same basic concept. What do you think about this approach to constructing and delivering application middleware environments?

Positioning for PaaS

August 6, 2010

One objective look at the current PaaS market provides all the evidence you need to conclude that we are in but the infancy of development for this technology. I want to be clear here, there are some really cool (and more importantly value-providing) offerings out there, but there is a long way to go. Specifically, I am not convinced anyone can make a case that we have solutions today that comprehensively address two concepts that will become mere table-stakes in the PaaS play space.

First, there is effective abstraction and commoditization of infrastructure. Users should not have to go through considerable gymnastics to even see, much less configure, the underlying workhorses like the servers, storage, switches, routers, etc. Ideally, all users need to do is provide an application with service level policies. Providers can either meet them or not, but don’t force users to provide configuration information for these physical resources.

To be fair, in many cases I believe we have not seen more complete abstraction for cultural reasons. Many users simply are not yet comfortable without some level of control over the configuration of the physical resources at some level. At the very least, many are still concerned with defining the compute capability of a given server (i.e. the small/medium/large instance methodology), the speed of switches, etc. In this respect, users hold PaaS solutions back, but at some point, these systems have to forge ahead and provide assurance that they can determine the necessary infrastructure better than the user.
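To illustrate what "an application plus service level policies" might look like in practice, here is a sketch of a PaaS deployment request. Every field name is invented; the point is what the user states, and just as importantly what is absent.

```python
# Concept sketch of a policy-driven PaaS deployment request.
# The user declares goals; the platform chooses the infrastructure.

deployment_request = {
    "application": "orders-service-2.3.war",
    "policies": {
        "avg_response_time_ms": 200,   # provider must meet this or reject
        "availability": "99.9%",
        "importance": "high",          # relative business priority
    },
    # Note what is absent: no instance sizes, no server counts,
    # no switch speeds -- the platform decides all of that.
}

assert "policies" in deployment_request
assert "instance_size" not in deployment_request   # no infra config exposed
```

Contrast this with the small/medium/large instance methodology, where the user is still making (and is accountable for) capacity decisions.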

The second of the two basic concepts for PaaS is application management capabilities. One of the most talked about features in this category is application-aware routing and distribution. Everyone wants this, many systems claim to provide it, but I am not convinced that they do so at a meaningful depth. For instance, take the oft-provided option of scaling a system up and down based on average application response time. It seems simple, but have you ever really thought about how a system provides this? Is the system measuring average application response time at only the web server tier, or is it measuring throughput at both the web server tier and the application container tier?

Certainly the latter is more difficult to achieve (typically it requires more intimate knowledge of the application container), but when the application is not meeting service level policies and it is time to scale up, having measurements at both tiers removes any ambiguity about which component should scale. I’ll leave it at this, but I am ignoring the fact that ‘measuring at the application container tier’ is vague. Depending on the application, you may want to measure throughput at an even finer-grained level in the application container (e.g., web container, EJB container, web services engine, etc.).
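The disambiguation argument can be shown in a few lines: given end-to-end latency plus the application container's share of it, the scaling decision falls out directly. The thresholds and tier names are illustrative, and real systems would of course use richer signals than a single average.

```python
# Concept sketch: measuring response time at both the web tier and the
# application-container tier removes ambiguity about which tier to scale.

def scale_decision(total_ms, app_tier_ms, sla_ms):
    """Return which tier to scale when the end-to-end SLA is missed."""
    if total_ms <= sla_ms:
        return None                      # SLA met, do nothing
    web_tier_ms = total_ms - app_tier_ms
    # Scale whichever tier contributes the larger share of the latency.
    return "app_tier" if app_tier_ms >= web_tier_ms else "web_tier"

assert scale_decision(150, 100, 200) is None        # within SLA
assert scale_decision(400, 320, 200) == "app_tier"  # container is the bottleneck
assert scale_decision(400, 120, 200) == "web_tier"  # web tier is the bottleneck
```

With only a web-tier measurement, the second and third cases above are indistinguishable, and the system can only guess where to add capacity.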

Of course, PaaS systems will have to go beyond intelligent routing and distribution when it comes to application management capabilities. Since the PaaS system becomes the control plane for managing and delivering applications, users will also expect things like application edition management. In other words, provide a means to roll out new versions of an application, gradually route incoming requests to the new version, and eventually take the old version out of service. Edition management is just one of many things users will need to effectively manage their applications, other things that come to mind are policies that proactively take action (other than scaling up or down) based on the relative health of an application, application-level usage data to enable charge back, a whole host of application related security services, and much more.
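The edition management flow (roll out a new version, gradually shift traffic, retire the old version) can be sketched as a weighted router. This is a concept sketch with invented names; a real platform would also handle session affinity, rollback, and draining in-flight requests.

```python
import random

# Concept sketch of application edition management: gradually shift
# incoming requests from the old edition to the new one, then retire
# the old edition.

class EditionRouter:
    def __init__(self, old, new):
        self.old, self.new = old, new
        self.new_weight = 0.0   # fraction of traffic sent to the new edition

    def ramp(self, step=0.25):
        # Advance the rollout one increment at a time.
        self.new_weight = min(1.0, self.new_weight + step)

    def route(self):
        return self.new if random.random() < self.new_weight else self.old

router = EditionRouter("orders-v1", "orders-v2")
assert router.route() == "orders-v1"     # rollout not started yet
router.ramp(); router.ramp(); router.ramp(); router.ramp()
assert router.new_weight == 1.0
assert router.route() == "orders-v2"     # old edition fully drained
```

At the endpoints the behavior is deterministic; in between, each request lands on the new edition with probability `new_weight`, which is the gradual routing the text describes.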

As I said earlier, there are certainly some compelling and valuable PaaS solutions on the market today. However, there do not seem to be many that provide comprehensive coverage of the two basic concepts mentioned above: effective infrastructure abstraction and robust application management capabilities. For the record, I think existing offerings cover the former capability far better than the latter. As with all things though, this is a point in time statement that providers will soon (over the next few years) address. The question is which providers are best suited to conquer the PaaS landscape?

There is no easy answer to that question. Can IaaS providers move up the stack and effectively provide PaaS? Well, if the industry pioneer in IaaS, Amazon, is the measuring stick, it is unclear. There is not much to indicate Amazon has a short-term interest beyond IaaS capabilities. While there has been a plethora of new development activity coming from Amazon, most of it seems to center around adding to an already robust IaaS offering. That is not to say we can count Amazon, or any other IaaS provider out of the PaaS race though. The most intriguing possibility is that an IaaS provider acquires or partners with a PaaS company already building on top of their infrastructure. Presumably, this would spur a tighter integration and possibly push along the evolution of PaaS capabilities.

What about application integration/middleware companies (such as my employer)? In general, I think these companies are in the opposite position of current IaaS providers. I believe they are in a good position to deliver on the various application management capabilities mentioned above. The challenge to these companies will come in effectively abstracting the underlying compute infrastructure. This means building a layer that can communicate with various underlying IaaS infrastructure, or alternatively, building their own IaaS infrastructure and layering on top of it. Neither of these is an easy proposition, and for many of these companies, either choice will mean treading on new ground.

To this point, I have not mentioned the obvious contender for the PaaS market: existing PaaS providers. I might concede these companies, owing to their early entry point, are in a good position to dominate the PaaS landscape in the future. However, many of the existing players still have a significant way to go toward robust application management capabilities. It is unclear whether they can catch up here, or whether the path forward for many of these providers will be partnership with, or acquisition by, vendors that have years and years of experience delivering those types of capabilities.

The bottom line is I think the PaaS market is in store for a flurry of acquisitions and partnerships. In this respect, I believe the VMware/ deal was the spur the market needed. The question is who makes the next move and what that move is. I’ll be watching; what about you?

PaaS elasticity is harder to achieve than you think

July 22, 2010

As the ideas and concepts behind PaaS platforms continue to evolve, it is becoming quite clear that these platforms will consist of many facets: application runtime services (both shared and dedicated), application modeling capabilities, and runtime monitoring, to name but a few. While all of these will be necessary to deliver a truly effective PaaS platform, if you ask the casual cloud follower to identify the key characteristic of PaaS, I would be willing to wager that 9 out of 10 would say runtime elasticity.

At this point, consumers form an implicit link between cloud computing and elasticity. In other words, you cannot label something as a cloud if it does not deliver some layer of elasticity. PaaS platforms are no different in that users expect elasticity in the services they deliver, namely the application environment. While the end-user expectation is no different, achieving this result is much more involved once we move up the stack away from IaaS towards PaaS.

Elastic runtime application environments are not a new notion. There are existing solutions that enable dynamic request routing based on application performance goals and application importance relative to business goals. However, for the most part these systems work with a defined set of application infrastructure. In other words, users deploy and configure the application infrastructure, and then they hand it over to the management system. Over time, the management system makes use of the resources to which it has access in order to meet application and business goals as defined by users. This may mean starting up new application instances, moving those instances around, or even terminating some application instances.

The capability described above is definitely valuable. I mean, who wouldn’t want to align application performance with business goals, right? However, PaaS systems promise to go a step further and eliminate the need for a static pool of application infrastructure resources. PaaS platforms should not only manage application instances and placement, but also the placement and instances of the supporting application infrastructure. In this way, users achieve an even greater level of elasticity as they are not bound by the software supporting their software!

From the consumer point of view, this moves them toward a low-touch, autonomic, highly performing application runtime. For providers out there looking to deliver a PaaS solution, this implies quite a bit of complexity. As just a small sampling of what PaaS platforms will need to do in order to enable truly elastic application runtimes, consider the following:

– PaaS platforms must provide a means to configure application performance goals and relative importance. This means users should be able to configure application goals and metrics (e.g., response time, throughput, heap usage), in addition to establishing how important it is for a given application to meet its goals relative to other applications. After all, customer-facing applications are likely to be more important than an internal employee directory application.

– PaaS platforms must be able to effectively monitor all aspects of the application runtime in relation to the policies established by the user. The system should do this without degrading the runtime or consuming resources otherwise needed by the applications.

– PaaS platforms must be able to scale the application environment up and down based on the metrics gathered from monitoring activities. As I said above, this means the system should be able to control both application infrastructure and application instances and placement. This is made more complicated by the fact that application infrastructure comprises several distinct units (web servers, controllers, container nodes, databases, etc.), and the PaaS system must understand how scaling each piece up or down relates to overall application performance. In addition, it is likely to need this knowledge for a heterogeneous set (from a vendor makeup standpoint) of application infrastructure.

– Ideally, PaaS platforms will also be able to scale up and down the underlying compute infrastructure. If not, we end up with a greater degree of elasticity than what solutions offer today (because we can dynamically control both applications and application infrastructure), but we have really just pushed the static pool of resources down the stack. Rather than a static set of application infrastructure resources being the limiting factor, the underlying compute resources become the constraint.
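The requirements above can be pictured as a simple policy-driven scaling loop. The sketch below is purely illustrative: the policy fields, thresholds, and instance bounds are my own assumptions, not the API of any particular PaaS product.

```python
from dataclasses import dataclass


@dataclass
class AppPolicy:
    """A user-configured goal and relative importance for one application."""
    name: str
    target_response_ms: float  # performance goal the user configured
    importance: int            # higher = more important to the business


def scaling_decision(policy: AppPolicy, observed_ms: float, instances: int,
                     min_instances: int = 1, max_instances: int = 10) -> int:
    """Return the desired instance count given an observed response time.

    Scale out when the goal is missed, scale in when there is ample
    headroom, and never leave the configured bounds.
    """
    if observed_ms > policy.target_response_ms and instances < max_instances:
        return instances + 1   # goal missed: add capacity
    if observed_ms < 0.5 * policy.target_response_ms and instances > min_instances:
        return instances - 1   # ample headroom: reclaim capacity
    return instances           # within tolerance: no change


def arbitrate(requests, capacity):
    """When total desired instances exceed available capacity, satisfy the
    most important applications first."""
    granted = {}
    for policy, desired in sorted(requests, key=lambda r: -r[0].importance):
        granted[policy.name] = max(0, min(desired, capacity))
        capacity -= granted[policy.name]
    return granted
```

The point of `arbitrate` is the relative-importance requirement from the first bullet: when the pool is constrained, a customer-facing application keeps its instances while the internal directory application waits.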

Is this sort of unified, far-reaching resource management a requirement to achieve the promises of PaaS elasticity? Ultimately, I believe the answer is yes. In the short term though, we will see emerging PaaS platforms that meet some of these aims, but certainly not all. As the industry matures, we will no doubt get more robust and complete PaaS platforms. As a user, this means look for tactical advantages you can gain from PaaS platforms now (or in the short term), and keep an eye on the direction of the market to determine how it may align with your longer-term strategic aims.

There’s more than one entry point for cloud and virtualization

June 30, 2010

If you are someone who works in the cloud computing and advanced virtualization sector day in and day out, you may have a perspective that makes it easy to forget that this movement is still very much in its infancy. Personally, from time to time, I find it easy to forget how early these technological shifts are. For me, that lapse is usually momentary, owing to a particular question I hear repeatedly from enterprises: “Where do we start with cloud and virtualization?”

A significant number of enterprises are in the early stages of formulating a strategy and execution plan to leverage the benefits of cloud computing and virtualization. An obvious part of that execution plan is establishing an entry point that is both consumable and valuable for the enterprise. For that reason, when enterprises ask me where to start, I turn it back around to them and ask, “What is your biggest pain point for service delivery and management?”

I make it a point to ask this question because it seems as if many players in the industry imply that the only starting point is at the infrastructure level. In other words, you should look for solutions that allow you to virtualize, manage, and govern compute infrastructure like servers, storage, and networking (whether this infrastructure is on-premise or off-premise is irrelevant to this discussion). This is a fine starting point… for some situations.

I suspect one can attribute the industry’s inclination to push users towards an infrastructure entry point to a few different factors:

1) Both on-premise and off-premise solutions are visible and well known (think VMware, Amazon EC2, IBM PowerVM, etc.)

2) This level typically requires little change. Essentially, it is a faster, more consistent, and more efficient way to provision the same services an enterprise already uses. The difference is the services live within some sort of virtualized container.

3) Under-utilization of servers is a widespread problem. Reports consistently put average server utilization between 7% and 12%. Virtualization equips users to easily drive up utilization.
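The utilization argument is really just arithmetic. Here is a rough back-of-the-envelope estimate of the consolidation virtualization makes possible; the function and the figures in the comment are illustrative assumptions, not vendor sizing guidance.

```python
import math


def hosts_after_consolidation(num_servers: int, avg_utilization: float,
                              target_utilization: float) -> int:
    """Estimate how many hosts can carry the same aggregate load at a
    higher per-host utilization (ignores virtualization overhead and
    failover headroom)."""
    total_load = num_servers * avg_utilization  # aggregate demand, in "host units"
    return math.ceil(total_load / target_utilization)


# 100 servers averaging 10% utilization, consolidated onto hosts run at 60%:
# the same aggregate load fits on 17 hosts.
```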

While these may be good reasons for some to start at the infrastructure level, it certainly does not mean that an enterprise should only consider starting at this level. Consider the following scenario (one that is not at all far-fetched based on enterprises I talk to):

The biggest pain point for Enterprise A is maintaining a highly performant production environment. The environments are fairly static, meaning that once deployed, the services tend to stay up and running for a long period. Enterprise A does not deploy new services into these environments often, and they average a server utilization rate north of 50% (comparatively, that is more than reasonable).

If Enterprise A were interested in cloud computing and looking for recommendations on where to start the journey, I certainly would not suggest the infrastructure level as a means to solve their most pressing pain point as stated above. Server utilization is respectable, and while faster deployment times may be nice, the fact that not many deployments are taking place means the return on the necessary investment would likely be modest.

If I were to suggest a starting point for Enterprise A on their cloud and virtualization journey, I would be inclined to point them toward solutions that provide a means to virtualize, manage, and govern applications (or ones that enable this for both infrastructure and applications from a single control plane) instead of the underlying infrastructure. Enterprise A stands to gain much more from a system that provides autonomic, policy-based management of application instances and application resources to ensure that their applications perform in accordance with the needs of the business.

I hope that focusing on such a narrowly-scoped scenario does not dilute my point in this post. I simply mean to say that if you and your enterprise are considering embarking on a cloud computing and/or virtualization journey, start by examining the pain points in your IT organization (especially around service delivery and management). Prioritize those pain points to understand which ones are the biggest inhibitors, and then look to solutions that provide you the most bang for your buck. Oh, and remember, buck means acquisition and implementation costs. Do not get fooled by the price tag alone!

The fundamental value of cloud computing

June 10, 2010

In my mind, the cloud computing ‘sell’ is pretty straightforward. Fundamentally, cloud computing is a proposition of improving the time-to-value ratio for end users. Cloud solutions enable users to address their needs faster (and in some cases cheaper) than traditional approaches to the same issues. Typically, improving this ratio comes down to two things:

1) Embedded expertise

2) Operational prowess

Embedded expertise is the included awareness or intelligence delivered by a solution. In other words, it is the capability provided right out of the box. This could be the ability for a given cloud solution to seamlessly integrate virtual machines with storage and networking, thereby relieving this arduous administration task. Alternatively, it could be delivering integrated and optimized application platform stacks such as those delivered by WebSphere CloudBurst. In any case, the user gets something for free. Okay, not exactly free; after all, they are paying for the solution. However, the user gets capabilities without having to invest significant time and effort in building everything from the ground up.

Operational prowess is all about making existing tasks more efficient. This could be any task, but in the case of cloud computing it is perhaps easiest to see in the context of provisioning. Many cloud computing solutions focus on delivering IT services (servers, storage, application platforms, applications, etc.) with unprecedented speed. Obviously, the time-to-value benefit here is crystal clear. Users get access to services that used to take days or weeks to provision, in what is typically a matter of minutes.

For many, this basic cloud computing value proposition and its two main components may be obvious. Personally, I came to the realization after talking with tons of end users about cloud-enabling application infrastructure stacks. Their expectations of a cloud solution point to the time-to-value proposition and the two main facets. As an example, here are some of the common expectations of a cloud solution in this space:

1) Automate common administration tasks such as the creation of the application platform, the integration of the components, and the maintenance of the system. This goes back to embedded expertise.

2) Get the environments up and running as fast as possible (read: minutes). This is obviously operational prowess.

3) Squeeze everything you can from the application environment and underlying infrastructure. This is a blend of both embedded expertise and operational prowess. The system needs to understand the best configuration for the application platform based on the user’s needs, and it should provide capabilities that allow it to adjust on the fly to make sure it is getting all of the resources it needs and no more.

Personally, it is nice to hear these kinds of things because it validates the thought behind and implementation of the WebSphere CloudBurst Appliance. The appliance offering provides both out-of-the-box intelligence and know-how (via shipped, integrated, and optimized virtual image offerings of WebSphere products and an inherent knowledge of the software inside those images), as well as operational prowess (via accelerated service delivery times and intelligent provisioning). On top of this, we can improve the time to value ratio even more when using both WebSphere CloudBurst and the Rational Automation Framework for WebSphere in tandem to deliver cloud-based WebSphere platforms. Simply put, we provide real and tangible value to our end users.

In general, those looking to either produce or consume cloud solutions need to really home in on how to improve time to value. It sounds basic, but it is easy for both providers and consumers to get distracted by the ‘new and cool’ and take the focus off providing/consuming solutions that promise to accelerate the time-to-value curve.

Virtualization in the enterprise

May 10, 2010

I spent last week at the annual IBM IMPACT event, and had a lot of great conversations with enterprise users about some of the transformations taking place in the IT industry. Most of my conversations focused on virtualization and the broader cloud computing architectural shift, and many of them reminded me of how early in this journey we still are. Many enterprise users are still very much in the “toes in the water” phase when it comes to cloud computing. Specifically, that phase is somewhere between assessing the business value and working out a prototype for the organization.

While it is clear that cloud computing is still steadily emerging, it is equally clear that enterprises have embraced virtualization whole-heartedly. In this sense, I am not talking about mainframe virtualization that enterprises have relied on for years. I am talking about higher value virtualization techniques that encompass application infrastructure and applications as the unit of work.

In some cases, these enterprise users invested quite a bit of time and resources to build up a stockpile of virtual images for use in the organization. In other cases, the users consume vendor-supplied virtualization artifacts. Either way, some common themes and questions pop up in just about all of these conversations:

–  How do I manage this new virtual environment?

  • One of the chief benefits of virtualization is that it enables the construction of meaningful environments in a very simple and rapid manner. Unfortunately, this benefit has a downside. Since it is so easy and fast to spin up environments, virtualization usually results in more deployed environments than ever. This can result in overburdened administrators who must attempt to discern which environments are currently in use, determine the licensing impacts of all in-use deployments, and much more. Users are keen for solutions that not only enable virtualization, but also provide management capabilities that make the enterprise-level use of such virtualization tenable.
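The kind of inventory bookkeeping those overburdened administrators are asking for can be sketched in a few lines. The field names (`last_activity`, `licensed_cores`) and the idle threshold below are hypothetical illustrations of the idea, not any product’s data model.

```python
from datetime import datetime, timedelta


def review_environments(environments, now, idle_after=timedelta(days=14)):
    """Split deployed environments into active and idle, and total up the
    licensed cores consumed by the active ones.

    Each environment is a dict with 'name', 'last_activity' (datetime),
    and 'licensed_cores' (int) -- assumed fields for illustration.
    """
    active, idle = [], []
    for env in environments:
        # Anything quiet for longer than the threshold is flagged for review.
        (idle if now - env["last_activity"] > idle_after else active).append(env)
    cores_in_use = sum(env["licensed_cores"] for env in active)
    return active, idle, cores_in_use
```

Even a trivial report like this answers the two questions in the bullet above: which environments are actually in use, and what the in-use deployments mean for licensing.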

– I like the idea of virtualization, but not at the expense of customization control.

  • You will not find many who dispute the technical merits and benefits of a virtualized approach. However, that does not mean they are willing to adopt the approach if it means sacrificing customization controls they have in their natively installed environments. Enterprise users must retain control of the software components within the virtual package. Solutions that enable customization to each piece of said package will prove more valuable than those that treat the image as a locked black box.

– I need choice.

  • This applies mostly to the scenario where vendors supply virtual packages to enterprise users. In this case, the vendor is shipping a pre-configured, pre-installed set of software that can sometimes encompass an entire software stack (i.e. OS through application middleware). Users demand choice with respect to the kinds and versions of software components that ship in the package. From a vendor’s standpoint, there is no way they could ever ship enough permutations to meet all users’ needs, so the package needs to be sufficiently componentized to allow the exchange of one component version/type in favor of another. To accompany such componentization, users need tools that make the task of reconstructing the virtual package simple and streamlined.
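One way to picture that componentization is a package described as a manifest of swappable layers rather than a sealed image. The component names and versions below are invented purely for illustration.

```python
# A virtual package described as independently swappable layers.
base_manifest = {
    "os":         {"name": "ExampleLinux", "version": "5.4"},
    "middleware": {"name": "AppServer",    "version": "7.0"},
    "app":        {"name": "orders-app",   "version": "1.2"},
}


def swap_component(manifest, layer, name, version):
    """Return a new manifest with one layer exchanged; the other layers are
    reused, so the vendor ships components rather than every permutation."""
    updated = dict(manifest)
    updated[layer] = {"name": name, "version": version}
    return updated


# A user who needs a newer OS rebuilds only that layer of the package.
custom = swap_component(base_manifest, "os", "ExampleLinux", "6.0")
```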

– Remember that this approach cuts across multiple teams.

  • As nice as it is to have complete, functional software stacks in a virtual image, users constantly reminded me of the challenges they face in employing such a solution. Take for instance an image that encapsulates an application middleware stack, from the OS right up to the middleware. Typically, this means at least three distinct teams (operating system, middleware, and application) will be involved in or interested in the package. This does not even take into account the infrastructure team that sets up the resources on which the virtual machines will eventually run. While most users do not see the challenge as insurmountable, it is something vendors need to embrace and take into account when designing and delivering solutions.

Virtualization is a fun space to watch evolve in the enterprise because it is going to be a key technical enabler of the overall cloud computing movement. The questions and discussion points I consistently came across last week only serve to reinforce the fact that enterprise activity and interest is real. The problems and challenges are also real, and vendors must work with enterprises to overcome these and make advanced virtualization part of the mainstream in enterprise IT.