Introducing IBM Worklight v5

June 26, 2012

Not too long ago, I wrote a post about IBM’s then very recent acquisition of a mobile application platform company called Worklight. Given that it has been a while (at least in mobile market time) and that IBM recently released (6/15) the new IBM Worklight 5.0 product, I thought it was a good time to revisit the topic.

First things first, the acquisition of Worklight by IBM and the subsequent release of IBM Worklight does not change the nature of the solution. It is still a mobile application platform, and it still provides the end-to-end capabilities that you probably (or at least should) expect from such a solution. At a high level, those capabilities address the following needs:

- Application development and build: IBM Worklight provides a robust development environment called IBM Worklight Studio that allows you to quickly construct mobile applications for multiple mobile operating system platforms. The Worklight Studio does not constrain you in terms of mobile application architecture (native, hybrid, mobile web), nor in terms of the libraries (JavaScript and otherwise) that you choose to use in your mobile application. Furthermore, the development environment includes an integrated build engine that makes it easy for you to initiate builds to produce device-installable binaries from a central location.

- Application runtime: IBM Worklight provides mobile application runtime capabilities that span from the mobile device to the enterprise backend through the IBM Worklight Device Runtime and the IBM Worklight Server respectively. The device runtime provides a rich set of APIs that are cross-platform in nature and offer easy access to the services provided by the IBM Worklight Server. Additionally, the device runtime provides a key-value, encrypted cache that your mobile applications can use to store data locally (on the device) in a secure manner. This cache can be instrumental in enabling applications to function offline and in improving overall application performance.

On the backend, the IBM Worklight Server provides services such as data integration, data transformation, application security, device authentication, direct application update, auditing, and more. It is worth pointing out that the IBM Worklight Device Runtime and the IBM Worklight Server are engineered to work together. This ensures your mobile applications have easy access to the services provided on the server, and it means there is a mechanism to deliver value right out of the box. There is ample evidence of the value of this integration, from the ease with which you can integrate existing enterprise information systems with your mobile applications to the ability to directly and proactively update application assets that are running on a user’s device.

- Application management: Of course, just as important as providing for the ability to develop, build, and run mobile applications is enabling you to effectively manage and administer said applications. IBM Worklight provides a web-based console that delivers an easy-to-use UI for leveraging some of the management features of the IBM Worklight Server. From the console you can manage multiple versions of an application across multiple platforms, send notification messages to application users, remotely disable specific application versions to prevent user access, and manage push notification sources and targets. Additionally, the console provides reports based on the audit data captured by the IBM Worklight Server. These reports provide important information about application usage such as application visits per day, total visits per application, newly detected devices per application, and more. Furthermore, the source data for these reports can be exported into your BI or analytics tool of choice.

I started off by mentioning the recent release of IBM Worklight v5, so I think it is appropriate to do a quick rundown of a few of the enhancements made in that release:

- WYSIWYG editor for quickly building mobile application UIs in a drag and drop fashion. The new Rich Page Editor can be used to quickly build mobile UIs using HTML5 and Dojo Mobile constructs. In no way does it limit the control you have over the underlying application source code, but it does make the construction of the views in your mobile application much simpler.

- Integration with IBM Cast Iron via an out of the box adapter. This adapter allows for bi-directional communication with IBM Cast Iron thereby delivering a rich set of application connectivity options right out of the box.

- Simulator for quickly testing mobile applications in a browser. The new Mobile Browser Simulator allows you to test your mobile application in a browser while simulating many different device form factors across a number of different platforms.

- Channel for easily sharing mobile applications during the development and test cycle. The new IBM Application Center makes it easy to share mobile applications amongst important stakeholders of a project while enabling those stakeholders to immediately provide feedback about the application.

- Much more. Don’t believe me? Check it out for yourself by visiting the developerWorks site for mobile development. On that page, you will find links to download the new IBM Worklight Developer Edition which provides the end-to-end capability of the platform installed as a set of Eclipse plugins. That’s right, the only thing required to install and get your hands on the product is an Eclipse environment! Furthermore, it is completely free to install and use on your local machine!

Okay, so this has gone on a little longer than I intended, but I wanted to wrap up by addressing one more thing. The entire mobile space is hot, and it is crowded with technology providers. A question that I often get asked by my clients is ‘Why IBM Worklight?’ While the answer always varies based on the unique scenario, it often comes back to a simple but profound statement: ‘IBM Worklight delivers a truly open approach for developing and running mobile applications.’ This is first encountered in the development environment where you can choose both the mobile application architecture and libraries that you want to use to do the job. It extends to the runtime by allowing you to quickly and easily integrate with a heterogeneous ecosystem of enterprise information systems. To be sure, there are many valuable and differentiating features and functions of the platform, and if you are so inclined I strongly encourage you to dive deeper into it (after all, it is free to try). That said, I do not think I can overstate the value of a truly open platform!

PaaS deployment models

May 16, 2012

Rapid deployment capability is table stakes when we are talking about a PaaS solution. Every vendor touts it, and to be frank, every user simply expects it to be there. While I think it is interesting to talk about rapid deployment and perhaps compare speed of one solution to that of another, I think it is infinitely more interesting to talk about the mechanics of deployment for a particular solution. That is, I think the more interesting and important question is ‘What deployment style does a particular solution take?’

At a very high, black and white level, I think two primary deployment styles permeate the landscape of PaaS today: contained and referential. I want to compare each approach, but before that, let me use a few words to describe each style.

- Contained: In the contained deployment model, PaaS solutions deploy environments based on packages that contain most, if not all, of the desired configuration as well as the logic to apply that configuration. For instance, if a solution were to deploy a virtual image in the contained model, the virtual machine would have the necessary information and logic embedded to configure itself upon start up. It would not necessarily need to contact external systems or wait for instructions from other actors.

- Referential: In the referential deployment model, PaaS solutions deploy environments using a minimal base package. At some point during the deployment process, the deployed environment communicates with a third party in some fashion to procure the necessary configuration information. Going back to the example above, if a virtual image were deployed in the referential model, the virtual machine would start up and then communicate with a third party service (either by initiating a request or waiting for instructions). This third party service would send down the configuration information and instructions for the environment hosted within the virtual machine.
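To make the contrast concrete, here is a minimal sketch of the two styles in JavaScript. The package shapes and the config service are invented for illustration; in practice the “third party” would be a tool like Chef or Puppet or a platform-provided configuration service.

```javascript
// Contained: the package ships with everything needed to configure itself.
function deployContained(pkg) {
  return { name: pkg.name, config: pkg.config, configuredBy: 'self' };
}

// Referential: the package is minimal; at deploy time the environment asks a
// third-party service for its configuration, keyed by the role it plays.
function deployReferential(pkg, configService) {
  const config = configService.lookup(pkg.role);
  return { name: pkg.name, config, configuredBy: 'service' };
}

// A stand-in for the external configuration service (think Chef/Puppet).
const configService = {
  lookup: role => ({ heap: '512m', role }),
};
```

Note how the referential package carries only a role; everything else lives in one central place, which is exactly where the management-burden trade-off discussed below comes from.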

When comparing the two approaches, it is helpful to understand the advantages and drawbacks of each. A closer look at the contained model reveals an obvious benefit: speed. In this model, the deployment package contains most of what it will require in order to fully configure itself. It does not rely on contacting an external service and having to pull down the necessary binaries and configuration information.

This advantage comes with an obvious drawback: management burden. By building more and more into the deployment package, you are increasing the amount of content that must be maintained and updated in said package. While it is not a huge concern if you only have a handful of discrete packages, you may not be able to rely on that luxury. You may find that, after some amount of time, the number of permutations to support necessitates spending an inordinate amount of time updating deployment packages. If this is the case, you can easily end up in a situation where the benefits of rapid deployment are negated by increased administrative costs.

The referential approach avoids the above pitfall. In this model, the deployment package remains fairly skeletal. Instead of packing in all of the content like in the contained model, the deployment packages in the referential model know just enough to integrate with an external system to get the right configuration information (think Chef and Puppet). This means that you only need to update and maintain configuration data and configuration actions in a single location instead of in each and every deployment package. As the number of different required environments increase, this approach can mean a significant reduction in management burden.

There is a flip side to this coin of course. The referential approach typically results in longer deployment times – dependent on the time required to install and configure content for your environments of course. Since the deployment packages contain very little content at deploy-time, they must pull or otherwise receive that data at some point during the deployment. This may or may not be a big issue for your particular use case, but it is a potential drawback worth considering.

So which approach is better? It is my opinion, one derived from numerous user experiences, that there is no way to generalize the answer to that question. In cases where content is infrequently updated and the number of environmental permutations is fairly well constrained, the contained deployment model can be extremely effective and efficient. On the other hand, in cases where content is dynamic and ever-changing, the referential deployment model is a virtual requirement. From a user’s standpoint, I strongly suggest pursuing solutions that support both kinds of deployment models. Tools should not dictate your approach. Your requirements for a particular usage scenario should!

Beyond the walls of the enterprise

April 12, 2012

Most in IT understand that mobile applications are unique. In terms of development, delivery, and management, things are just not quite the same compared to your typical enterprise application. There are many reasons for these differences, including new and different programming models and languages, different application packaging and delivery, different development and testing methodologies, and more. While all of these are interesting and have profound effects, in talking to enterprise IT shops there is one major difference that sticks out a little more than most: mobile applications live beyond the walls of the enterprise.

Enterprise IT is all about the application, or perhaps better put, it is all about the services offered up via applications. If you look at just about any instance of enterprise infrastructure technology, you can usually talk about how it helps one to develop or deploy or manage applications (sometimes all three). To be fair, the types of applications and their purposes may vary widely, but they typically share an important characteristic. That common thread is that these applications are usually deployed to and managed from a datacenter that is under the control of the enterprise or a third-party provider. This is usually not the case, or at least not the whole story, with mobile applications.

If you consider anything beyond a simple mobile application that is nothing more than launching the device browser and pointing to a ‘mobilized’ web page, the difference in mobile applications probably sticks out to you. The application is in the hands of the end-user. More specifically, all or some portion of the application is installed on the user’s device. Does this really matter? I contend that it does, and I believe it warrants a number of considerations for the enterprise when evaluating technology that helps to manage mobile applications. In my mind, this includes at least the following:

- Managing application versions: With mobile applications, the process of updating an application is not the same as with traditional applications. It is not a matter of only updating content deployed on some server within the management domain of the organization. In all likelihood, portions of the application are installed on the user’s device, so the enterprise must have some control over that aspect of the application. This entails many capabilities such as the ability to directly push application updates to user devices as well as the ability to disable applications and force application upgrades.

- Managing application access and distribution: This is especially pressing in the scenario where organizations are creating mobile applications for use within the enterprise. In this case, organizations need to be able to control user access to applications. For instance, an organization may want to restrict access to certain applications until a user protects their device with a suitably strong password. Additionally, enterprises need the flexibility to distribute applications to their internal users. As an example, a company may want to distribute a particular anti-virus application to all users that connect to the company intranet with their mobile devices. The management of application access and distribution will only increase in importance as the Bring Your Own Device (BYOD) trend continues.

- Managing application security: This actually seems to be one of the biggest concerns enterprises voice today in the mobile application realm. Any sort of mobile application technology must provide a means to secure all aspects of the environment. This means the enterprise needs mechanisms to secure application artifacts on the device, secure application data on the device, and secure mobile application access to existing enterprise information systems. Absent these measures, mobile applications present a huge vulnerability and expose an organization to a level of risk that is simply not acceptable for their brand.

These are just a few of the management considerations which I believe take on a slightly different context in the mobile application realm. But that is enough from me! What do you think? How is your organization addressing some of these unique mobile application needs? Connect with me @damrhein.

The Converged Application Container

March 30, 2012

Just about everything I do in my job points back to a construct that is a foundational part of the modern enterprise IT organization: the application server. Now, when you say ‘application server’, thoughts immediately wander to Java. More specifically, folks tend to think about servers that support traditional Java EE applications and rightfully so. Over the past ten years, the predominant type of application server has been of the Java EE variety, and the industry has experienced tons of innovation in this area with more undoubtedly yet to come. But is the term ‘application server’ only applicable when we are talking Java?

The one-word answer to that question is ‘no’. While the strong association exists today thanks to a widely adopted programming model (widely adopted for many good reasons I should say), I cannot help but wonder what associations we will make with the term ‘application server’ three, five, or ten years from now. I say this because I believe we are going to witness interesting changes in the application server space. The rise of cloud and mobile along with the persistence of scripting and functional programming languages will, in my opinion, force radical evolution and innovation in the application server. Most importantly, the application server will transform into a converged container for hosting all manner of application types.

That’s right. The traditional, Java-centric application server is going to morph into a container that is much more multi-purpose than it is today. Of course, this is not going to happen without the advent of some key functional enablers. While it is impossible to list or even predict all of them, here are a few that I believe will be important in this movement:

1) Programming language/model normalization services: Before the idea of a converged application container can proceed, there will have to be some sort of translation approach built into the system. This translation system will be responsible for normalizing the different programming languages and models used by applications within the server. The normalization will have to be to some sort of common language (perhaps Java) supported by the server, but the idea is that this action will be mostly transparent to the application.

2) Functional composability: Due to the need to support a diverse set of application types, a converged application container will actually possess a wide-ranging set of capabilities. In pulling in these capabilities, it is not a stretch to say that bloat could become a problem. This bloat could surface via conditions such as excessive memory usage or large disk footprints, to name but a few. In order to avoid this, the application container must be composable. That is to say, the container must be able to dynamically enable features it needs based on the application types that it is supporting. It is interesting to think about the role that OSGi may play in supporting this kind of approach.

3) Context-driven management services: Just as the functionality of the container must be built via modular composition, the management services it offers up must be right-fit and contextually aware. The application container must be able to enable and deliver the right set of management capabilities based on the applications it is hosting. Managing a mobile application can be markedly different than managing a traditional Java EE enterprise application. Since the same container may be hosting either type of application, it really needs to be able to offer up management interfaces that are appropriate to either. Just as with the functional set of capabilities, it must do this without suffering from bloat. In short, management capabilities must also be composable, and they must be contextually aware.
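A toy sketch of the composability idea: the container starts empty and composes in features per application type. The feature catalog and application types here are made up for illustration; a real implementation would likely lean on a module system such as OSGi rather than a plain map.

```javascript
// Hypothetical catalog mapping application types to the features they need.
const featureCatalog = {
  'java-ee': ['servlet', 'jdbc', 'transactions'],
  'mobile':  ['push', 'device-auth', 'json-api'],
};

class ConvergedContainer {
  constructor() {
    this.enabled = new Set(); // nothing loaded until an app needs it
  }

  deploy(app) {
    // Compose in just the features this application type requires,
    // leaving everything else out to avoid bloat.
    for (const f of featureCatalog[app.type] || []) this.enabled.add(f);
    return [...this.enabled];
  }
}
```

Deploying a Java EE app enables only its features; a later mobile app adds its own on top, so the container's footprint tracks what it is actually hosting.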

The idea of a converged application container may seem pretty far-fetched to some, but I truly believe it will be an area ripe with innovation for the next three to five years. Albeit just a select few, the above capabilities will play an important role in making the promise of converged containers real. Enough from me though! I’m curious to know what you think about this whole idea. Reach out to me on Twitter @damrhein and let me know!

IBM acquires Worklight

February 29, 2012

How about we start this post off with some facts?

- Mobile data traffic exceeded voice traffic in 2010 (Wireless Industry News, August 26, 2010)

- Shipments of smartphones exceeded the shipment of PCs for the first time in 2011 (2011 Economist)

- Ten billion mobile connected devices are expected to be in use by 2020 (2011 Economist)

- 74% of surveyed CIOs indicated mobile capabilities were a top investment priority over the next three to five years (2011 IBM Global CIO Study)

As you may surmise from the above, the mobile computing space is hot. Companies are already doing mobile, and many have already defined, or are looking to define, their three-to-five-year strategy. In that respect, this month’s acquisition of Worklight by my employer, IBM, is not at all surprising. Let’s take a little closer look at exactly what Worklight is and what it delivers.

A quick perusal of existing Worklight material provides us with the simplest explanation of the solution:

Worklight is an open, complete and advanced mobile application platform for HTML5, hybrid and native apps.

While I grant you that the above statement sounds like something right out of a product brochure, that does not mean it is inaccurate. We need to go a bit deeper than that though, and the best place to start is with an architectural diagram of the Worklight solution:

As you can see, the Worklight solution is made up of four primary components. These include the Worklight Studio, Worklight Server, Device Runtime, and Worklight Console. Let’s tackle each one of these in turn, starting with the Worklight Studio.

The Worklight Studio is first and foremost an Eclipse-based IDE. When installed, it augments your Eclipse runtime with a powerful set of tools focused on helping you to develop every aspect of your mobile enterprise applications. Worklight Studio embraces open web technology such as HTML5, CSS3, and JavaScript by proffering a model where developers start with a common, shared code base for their application. From that common code base, Worklight Studio makes it easy to create subcomponents of the main application that are optimized for specific platforms. This makes it quite simple to start with a common code base and build a deployable application for Android, Blackberry, iPhone, WinPhone, and more.

All of this is not to say that you cannot create rich mobile applications that access native device functionality. Worklight Studio includes the PhoneGap library that provides a device-agnostic JavaScript bridge to native device functionality such as the camera, accelerometer, and geolocation facilities. Furthermore, Worklight Studio provides native device SDK integration as well as the ability to develop applications that freely move between native and non-native screens.
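As a small illustration of the bridge idea, here is a sketch of a camera call in the PhoneGap style. The getPicture shape (success callback, error callback, options) follows PhoneGap's camera API, but the wrapper function and the injected camera object are my own scaffolding so the sketch can run outside a device.

```javascript
// Illustrative wrapper around a PhoneGap-style camera bridge. The camera
// object is injected; on a real device you would pass navigator.camera.
function capturePhoto(camera, onDone) {
  camera.getPicture(
    imageData => onDone(null, imageData),    // success: base64 image data
    message   => onDone(new Error(message)), // failure: error message
    { quality: 50, destinationType: 0 }      // 0 = DATA_URL in PhoneGap
  );
}
```

Because the native capability sits behind a plain JavaScript object, the same application code runs unchanged across platforms, which is the whole point of the bridge.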

Next up is the Worklight Server. First and foremost, the Worklight Server provides a central distribution point for your mobile applications. You deploy your mobile application assets to the Worklight Server, and you have a central point where you can manage application versions, push direct application updates, and handle application versioning, up to and including disabling old versions. Beyond these capabilities, the Worklight Server facilitates many mobile application security aspects. It provides the means to enforce secure connectivity from client devices, and it is capable of checking the authenticity of applications with which it is communicating.

There are a few other capabilities worth pointing out in regards to the Worklight Server. First, the Worklight Server plays host to what Worklight refers to as ‘adapters’. Adapters are JavaScript code that provides connectivity from mobile applications to backend enterprise information systems such as REST-based services, web services, databases, or just about anything else to which you need to connect. Seeing as these adapters run in the Worklight Server (within a Rhino container), you have the means to secure these as is necessary for services that reach into your Enterprise Information Systems. Secondly, the Worklight Server delivers a unified push architecture that makes it simpler to send push notifications to applications running on a number of different client device types. Effectively, this unified push architecture serves to hide the complexities associated with pushing messages across the different mobile platforms and lets administrators focus on simply reaching their application users.
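To give a feel for the adapter shape, here is an illustrative procedure. In a real adapter the HTTP call would go through Worklight's server-side API inside the Rhino container; here the invoker is injected so the logic can run anywhere, and the endpoint path and field names are made up.

```javascript
// Sketch of an adapter-style procedure: call a backend service, then reshape
// the response for the mobile client. invokeHttp is a stand-in for the
// server-provided HTTP facility.
function getAccountBalance(invokeHttp, accountId) {
  const response = invokeHttp({
    method: 'get',
    path: '/accounts/' + encodeURIComponent(accountId) + '/balance',
  });
  // Adapters typically trim backend payloads down to what the app needs.
  return { accountId, balance: response.body.balance };
}
```

Because the adapter runs server-side, credentials and backend topology stay behind the enterprise boundary; the device only ever sees the reshaped result.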

Next up in the breakdown of the big four is the Device Runtime component. Worklight provides a client-side shell within which your mobile applications run. This shell provides several features and qualities of service, starting with cross-platform compatibility. The shell ensures that your applications have ready access to JavaScript bridges that enable accessing native device capabilities (PhoneGap) and that integrate with native display capabilities like tabs, badges, etc. (part of the Worklight client API). Another important part of the Device Runtime is the ability to create an encrypted, client-side cache. The Worklight shell extends the concept of local storage in HTML5 to provide a secure manner in which to store application data that can later be retrieved to avoid unnecessary service calls or support offline access.

In addition to all of this, the Worklight client API that is part of the Device Runtime component provides integration capabilities with the Worklight Server to enforce user authentication, check network connectivity, log user actions for reporting and analytics, integrate with the push notification capability of the Worklight Server, and much more. Finally, the Device Runtime enables the unique notion of skins. Skins allow you to apply different views for mobile applications that run on different device types within the same device family (e.g. iPhone and iPad). This means that you can reuse nearly all application artifacts across a wide array of devices in the same family, thereby significantly reducing development costs.

Our brief, and I mean very brief, overview of the Worklight solution concludes with a look at the Worklight Console. The console provides a web user interface through which you can manage many aspects of your mobile applications. First, you can manage application versions and easily take advantage of the capability provided by the Worklight Server to disable application versions. Through the console you can also manage push notifications to various applications across various devices, thereby taking advantage of the unified push architecture provided by the server. Finally, the console provides a central view of reports on mobile application usage and activity per the out-of-the-box statistics provided by Worklight as well as what your mobile applications report via the Worklight client API.

I hope this gives you a good, if high-level, overview of the Worklight solution that is new to the IBM family. I want to reiterate that this is by no means an exhaustive explanation, but more of a primer. I am sure I will be writing more on this topic in the coming weeks and months, but until then I hope to hear from you. If you have questions, let me know right here or on Twitter @damrhein. Until next time!

Application services in the PaaS ecosystem

February 5, 2012

One of the first things I learned when I started talking with clients about their plans for platform-based clouds is that there is no such thing as a ‘simple application environment.’ Even for the most basic, CRUD-style applications, you can count on there being numerous different components involved in the overall solution. While one may immediately identify application servers, web servers, and databases, it quickly becomes clear that LDAP servers, identity management components, data grids, load balancers, and more are equally important. Indeed, the enterprise application ecosystem is not one where the word ‘simple’ is often tossed about.

The inherent complexity of enterprise application environments has had, and will continue to have, a significant impact on the design and implementation of many PaaS solutions. In fact, one could argue that the preeminent design point in many PaaS solutions on the market today is rooted in eliminating this complexity. To do this, PaaS solutions attempt to abstract all the way up to the application level. In other words, users supply application artifacts and possibly define application characteristics. From there, it is the job of the PaaS system to handle the underlying application infrastructure, completely absolving the deployer from having to get into the nitty-gritty of the application serving environment.

There is little arguing that this is a notable goal, but the fact is this level of abstraction is not an easy thing for PaaS providers to implement. PaaS solutions have to make a lot of assumptions about what components should be a part of the cloud-based environment as well as how those components should be configured. The bottom line is providers can only do this for so many components, and with that limitation in mind, they often go after the most common pieces of an application environment. That means things like application servers, databases, messaging engines, and authentication systems are often covered directly by the PaaS solution. While those components are definitely a big part of the overall environment, they are far from being everything that is necessary.

As I mentioned earlier, an enterprise application depends on a number of things beyond the core infrastructure in order to function correctly. Many applications need to cache results, depend on identity management capabilities, require load balancing to scale properly, and integrate with ESBs, to name but a few things. PaaS solutions may not be able to address everything in terms of what they can deploy and configure, but that does not mean that they can blissfully ignore the large number of services on which applications typically depend. PaaS solutions must address the need to integrate the application environments they deploy with these application services in some fashion.

There are at least a couple of ways to do this while minimizing the burden on the PaaS provider. First, a PaaS solution can provide the concept of an application service and enable one to deploy an instance of that service. Of course, we are adding to the deployment and configuration responsibility of the PaaS provider, but this is somewhat mitigated by the fact that it is handled outside the context of any one application instance. The PaaS solution can simply deploy and minimally configure a service like an ESB, and then push back further configuration duties on the application environment that needs to access the service. It is the job of the PaaS solution to slice up the service such that multiple application environments can make use of it, but it is the job of each application to configure its slice of the service for use.

Alternatively, a PaaS solution can introduce the notion of an external application service. In this scenario, the PaaS system does not need to provide any capability in terms of deploying or configuring the application service. All it needs to do is provide a way for application environments to reference the external service. This service has already been installed and largely configured, and applications deployed by the PaaS solution just need to indicate a dependency on the service. It is a bonus if the PaaS solution can manage to slice up shares to the external service, but that too could be pushed off on a component or actor external to the PaaS platform. To the PaaS provider, this is clearly the easier of the two approaches to implement, but it does put more burden on the user to make sure those external services are up and accessible for applications deployed by the PaaS system.
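A hypothetical deployment descriptor makes the contrast between the two styles easy to see. Every field name and endpoint here is invented for illustration; no particular product is implied.

```javascript
// Illustrative descriptor: one service the PaaS provisions and slices itself,
// one pre-existing service it merely references.
const deployment = {
  application: 'order-entry',
  services: [
    // Managed: the PaaS deploys, minimally configures, and slices this.
    { name: 'appdb', type: 'database', provision: 'managed' },
    // External: already installed and configured; the app only declares a
    // dependency and the platform wires up the reference.
    { name: 'esb', type: 'esb', provision: 'external',
      endpoint: 'https://esb.example.com:8443' },
  ],
};
```

The descriptor captures the key difference in one place: for the managed service the platform owns the lifecycle, while for the external one the enterprise does, and the platform only holds a pointer.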

So, which approach is better? While it is tempting to say the first, that is not necessarily true. It is the simplest for the user deploying applications with service dependencies, but it might not always be the best option for the enterprise. In many cases, enterprises already have these application services up and running, and they want to leverage that investment from the environments created by their PaaS platform. In that scenario, the second option is probably preferable. To put it simply, there is no way to generalize and pick one approach over the other. The way a PaaS solution should handle application service dependencies varies, and ideally it should be able to handle either style. What is clear, though, is that there must be some way for PaaS systems to provide or integrate with commonly used application services!

It is what’s on the inside that counts

January 16, 2012

“It is what is on the inside that counts.” How many times have we all been told that? Personally, I can’t even remember. However, just because it is perhaps overused and undoubtedly cliché does not make it any less true. In fact, I rank it right up there with the golden rule among mantras that one should try to live by. But this is ostensibly a cloud computing blog, so what am I rambling on about? Well, it turns out that it’s what is on the inside that counts in cloud computing as well!

To explain this, let’s take a look at the process for deploying applications on the cloud today by looking at an example. Consider that I have a basic Java web application that provides an enterprise resource scheduling service. In the normal course of operation, the web application connects to both message queues and databases to service requests. Additionally, a web server usually sits in front of this application and directs incoming requests from clients.

If I want to deploy an application like the above on a cloud, I need a good deal of knowledge about the application. First, I have to know that in order for the application to work I need far more than just an application container. I need a way to deploy or integrate with a web server, message queue, and database. I also need to know how each of the components should be integrated in order to support the needs of the application. Moreover, I need to know quite a bit about the non-functional requirements of the application (security, availability, scalability, etc.), so that I can configure each of the supporting components as appropriate.

While this may not sound like too much to expect, it can be challenging considering that the deployer of an application is unlikely to have insight into each of these needs. In fact, no one person in the organization is likely to have all of these answers. Application deployments are typically the result of significant collaboration that can stretch project timelines beyond what anyone expected. But given the push towards autonomics and automation that cloud is largely driving, I have to ask whether such intimate knowledge of an application is really required to deploy it.

My honest, yet cautious, answer has to be no. I mean, why should I have to locate all of the required components for my application before being able to successfully deploy it? Ideally, if I am using an application-oriented cloud management system like I would expect to find in PaaS, it should be capable of application introspection to make many of these decisions for me. Going back to my earlier example, the PaaS solution should be able to look into my web application and determine, via application metadata or even source code, that there is a dependency on both a database and a message queue. Additionally, if it notices that there is a service addressable via HTTP, I would expect to at least be prompted about whether I wanted to include a web server in the application deployment. Ultimately, the introspection of the application should result in a proposed deployment architecture that includes both the necessary components and the required integration between those components.
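A toy sketch of that kind of introspection follows. The marker-to-component mapping is a deliberate oversimplification of what a real platform would do (which would involve parsing actual deployment descriptors and bytecode), and all names here are invented for illustration.

```python
# Illustrative sketch of application introspection: scan deployment
# descriptors for well-known markers and propose supporting components.
# The marker-to-component mapping is a simplification for illustration.

DEPENDENCY_MARKERS = {
    "jdbc:": "database",
    "jms/": "message queue",
    "http://": "web server",
}

def propose_topology(descriptor_text):
    """Return the set of supporting components implied by an
    application's descriptors or configuration files."""
    components = {"application container"}  # always needed
    for marker, component in DEPENDENCY_MARKERS.items():
        if marker in descriptor_text:
            components.add(component)
    return components

# A fragment resembling a Java web application's resource references.
descriptor = """
<resource-ref>jdbc:postgresql://hr-db/schedules</resource-ref>
<resource-ref>jms/SchedulingQueue</resource-ref>
"""
print(sorted(propose_topology(descriptor)))
# -> ['application container', 'database', 'message queue']
```

In the scheduling example from above, a scan like this would surface the database and message queue dependencies, so the platform could propose a topology instead of forcing the deployer to enumerate it.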

On top of all of this, I would expect that a number of non-functional configuration items could be determined via application introspection as well. If metadata in my web application indicates that I am using basic authentication to protect my enterprise scheduling service, the PaaS solution should be able to configure the application container environment appropriately. Even more interesting would be the ability to specify metadata in my web application that indicated scaling requirements and have the cloud management system automatically act on that!

Is this really too much to expect? I do not believe so. Many talk about the very nature of PaaS being one rooted in application orientation. Well, I think you can only get so far down that road without putting forth solutions that possess an inherent awareness of the applications being deployed. In other words, it is what is on the inside of the application that counts, and PaaS solutions need to know that!

It’s a bottom up world

December 18, 2011

I was talking to some colleagues the other day about the happenings in the world of cloud. Specifically, we were talking about a few different enterprises we worked with that had recently adopted cloud computing. As we talked, we seemed to home in on the adoption path. In particular, what group within these companies had been most responsible for the cloud adoption decision? Was it a group typically responsible for enterprise solution adoption, or were we seeing a fundamental change in how these types of companies made purchases? As we began to unravel the thread that led to the choice of the particular cloud solution in each case, everything pointed back to a common group: the developers.

I am sure many of you read that and think, ‘No kidding. Developers are the decision makers now.’ While I agree that it has been clear for some time that developers play a significant role in the usage and evolution of cloud computing, that is a completely separate notion from saying developers are leading adoption decisions regarding cloud computing. That said, I believe the evidence is incontrovertible and can be found far beyond the handful of use cases that my colleagues and I were discussing. I would wager that if you asked ten companies working with cloud computing what group was the strongest influencer in the adoption of their cloud solution of choice, eight would say developers. It is simply the way of the cloud computing world. With that in mind, let’s examine some cloud solution characteristics that are important if service providers want to engage enterprise developers.

The first and perhaps most important aspect of delivering a developer-friendly cloud solution is quite simple. It’s the APIs and interfaces! I don’t care what kind of cloud solution you are delivering or where it is classified in the delivery spectrum (IaaS, PaaS, or SaaS), you had better make your solution ‘callable.’ This is extremely important for developers because it enables so many things. It allows them to automate the use of the cloud solution. It allows their tools to communicate with the cloud solution. It allows them to mix the cloud solution into their existing development processes, such as continuous integration testing of their applications. Without APIs and interfaces, your cloud solution won’t attract many developers, and ultimately it won’t attract many users.
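To make the ‘callable’ point concrete, here is a minimal sketch of what a CI pipeline step against a cloud HTTP API might look like. The endpoint URL and payload shape are entirely hypothetical; the sketch only composes the request rather than sending it, since the point is that a deployment becomes one scriptable call.

```python
# A hedged sketch of why a 'callable' cloud matters: with an HTTP API,
# a deployment step becomes one scriptable call that CI tooling can
# invoke. The endpoint path and payload shape here are hypothetical.
import json

API_BASE = "https://cloud.example.com/api/v1"  # hypothetical endpoint

def build_deploy_request(app_name, artifact, instances=2):
    """Compose the URL and JSON body for a deployment call; a CI job
    would POST this after a successful build and test run."""
    url = f"{API_BASE}/applications/{app_name}/deployments"
    body = json.dumps({"artifact": artifact, "instances": instances})
    return url, body

url, body = build_deploy_request("scheduler", "scheduler-1.4.2.war")
print(url)
print(body)
```

A tool-driven workflow only needs this kind of composable call surface; anything that can make an HTTP request, from a build server to an IDE plugin, can then drive the platform.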

In addition to APIs and interfaces, consumability is another important aspect of developer-friendly cloud solutions. Now, you may say that consumability is important regardless of the group in question, and I would agree. However, I believe it is even more of an imperative with developers. I mean, have you tried being an enterprise developer lately? You have to keep up with existing and emerging programming models, evolving application runtimes, new development tooling and more. Further, you have to understand all of these capabilities in the context of the business. After all, it does no good to know all of this if you cannot put it to work for your company. Therefore, I would encourage cloud service providers to keep it as simple as possible. Complexity should be the exception and not the rule!

Finally, for a cloud solution to attract a development following, it must be largely open and interoperable. If you follow the buzz around cloud, you probably hear many analysts and other independent observers call for openness to assuage enterprise concerns about vendor lock-in. Well, developers are part of the collective enterprise, and they are not immune to those concerns. An impressively sharp group of people, developers know that what’s ‘now’ is not necessarily ‘forever.’ They would rather know that the time and effort they are putting into learning a particular cloud solution will not be wasted if their company decides to steer in a different direction. Open cloud solutions, and especially open cloud application platforms, are incredibly important in assuring developers that the work they are doing now will translate to other cloud platforms. Make sure your approach embodies the notion of openness if you hope to pull in developers.

For many of you, I am sure all of the above is common knowledge at this point. For those of you that have been involved in the cloud wave for a while, you know that developers have been the driving force for some time. I also believe that this signals a need for a significant shift in direction for some companies that wish to be notable cloud service providers. We are living in an environment that is increasingly driven from the bottom up. You either adapt to this new order, or you risk seeing your long-term viability threatened!

The pitfalls of cloud computing

December 6, 2011

There is no shortage of information concerning the ‘right’ way to do cloud. Don’t believe me? Just google ‘adopting cloud computing’ or ‘implementing cloud computing’ and prepare to be bombarded with advice. Don’t get me wrong. I am not attempting to impugn this kind of material or otherwise demean it. I am simply saying that if you want to read about how to successfully adopt, implement, or otherwise begin to use cloud computing, you have a wealth of information at your disposal.

On the other hand, I doubt you will find as much information on the opposite of cloud success, which is of course, cloud failure. This is to be expected. I mean, we all want to focus on the positives right? Yet, at the same time, it is a little odd since I happen to believe that we learn quite a bit from failure. To me, there is no better lesson than one learned the hard way. With that in mind, allow me to share some first-hand hard lessons from adopting the cloud.

- If you can’t standardize, you will fall short: A key tenet of cloud computing is automation. More to the point, cloud gives users the opportunity to completely automate the installation, deployment, and configuration of their application environments. That said, it is impossible to achieve high levels of automation without a fairly high degree of standardization. For automation to work, there have to be well-known bounds within which environments will reside. If you cannot drive a culture of standardization (one where not every request is a ‘custom’ request), then you will find automation to be fairly difficult. Without the clear benefits of automation, many in your organization may be left scratching their heads about the real value of cloud.

- If change is taboo, look out: Einstein is often credited with saying that ‘insanity is doing the same thing over and over again, but expecting different results.’ In this context, I am consistently amazed by users who want to adopt cloud computing but are EXTREMELY hesitant to change anything about what they do now. Guess what? This rarely works. If you want to start leveraging cloud concepts, you must be ready for change. This may be change to processes, organization, culture, or any number of things. If you can’t or won’t change, then you will struggle with cloud computing.

- No integration means no joy: I am of the opinion that the best cloud approach is one that does not subscribe to the ‘one tool to rule them all’ mantra. Instead, I encourage users to consider the tools best-suited to tackle their most valuable use cases, while keeping in mind the integration capabilities of the various tools. If you find yourself looking at a tool that provides no means for integration into various other points within your enterprise, do yourself a favor and just stop! Nothing good can come of an impenetrable black box.

- Piecemeal automation will fall short: Not to belabor the point, but automation is a pretty big deal when it comes to cloud computing. After all, it is a big reason why cloud promises both speed and consistency of service delivery. However, if you find yourself looking into a solution or approach that amounts to adding just a little automation here and there to an otherwise heavily user-driven approach, turn and run! Sprinkling automation around like pixie dust rarely results in magic. In fact, it usually turns out to be counter-productive. To be clear, I am not saying that everything has to be 100% automated for cloud to work. That is not realistic in the least, but wedging a little automation in between mostly manual processes is not helpful either.

I realize that the above may make you think that I am not an overly positive person, especially when it comes to cloud adoption. That could not be further from the truth. I would say that I am cautiously optimistic and eager to understand both what does and does not work. Luckily enough, I have been part of more than a few cloud adoption projects, and I have seen both the good and the bad. I hope that passing along some of my view points is helpful, and I am always eager to learn more from my readers. Feel free to pass along your own experiences and feedback!

PaaS and the application server

November 10, 2011

For the last 5+ years, I have been deeply focused on middleware application servers and the container services they provide. If you go a bit further back and look at the arc of progression in this space, you will see multiple pivots of innovation for these servers. In the beginning, it was all about transactions; then JEE was added, followed closely (and probably most recently) by a push around SOA. While none of these topics have become less important over time, they have become an assumed part of application server runtimes and architectures. That is, customers will not consider an application platform that does not at least support those technologies and paradigms.

Does that mean that the application server space is dying down in terms of innovation and change? Not at all in my opinion. Just like JEE was there to push transactions and SOA was there to extend the innovation around JEE, a host of other hotspots are appearing that are demanding inventive changes in the application server space. One of those is the notion that an application server in and of itself is not the unit of purchase and use any longer. Instead, users want entire platforms that not only provide a robust container for their applications, but also surround those applications with other capabilities and qualities of service. This is definitely an interesting movement, and one that is leading to advances in the state of the art concerning elastic caching, policy-based application management, plugin driven architectures, and more. As intriguing as this notion is, I actually think there is something that will soon push traditional application server runtimes even further: PaaS.

A cursory look at the PaaS industry will quickly reveal that we are at the forefront — I mean the very forefront — of a potentially thunderous shift in the enterprise application space. Initially there has been a large focus on making it fast and simple to provision and manage applications on top of traditional middleware stacks. Of course this cannot and will not be the focus forever. At some point, the gaze of the industry will shift, and we will start to fundamentally rethink what that middleware stack is, what services it delivers, and how it functions. What does this mean? Well, I am sure that many of you have your own ideas, but allow me to offer up a few of mine as food for thought:

- Application servers and the kernel blend: There has been a lot of talk about the commoditization of the operating system, but I can tell you from first-hand experience that much of that has been premature. That said, as we move to more PaaS-like models, the OS and basic application server will blend together much more. Application servers will be extremely lightweight, and provide basic container support for your applications. Beyond that, I believe we will see a pull model based on need… which leads me to my next thought.

- Application servers become composable: In the cloud we want speed. Well, with traditional application servers, you can only push the limits so far. Sure, you can pre-install, pre-configure, etc., but you still have to deal with the unique configuration of each instance. In some cases this takes a while simply because there are tons of features to configure. What’s more, some of these features are not actively used. In this light, I think PaaS will force application servers to take on more of a fit-for-purpose model. Features and components will be pulled in when they are needed and not a moment before. Further, those same features will be turned off when they are not actively consumed. This has benefits beyond speed, of course, namely a consumption footprint more appropriately aligned with actual use.
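The pull model described in the composability point can be sketched in a few lines. This is a conceptual illustration, not how any real application server is built; the class and feature names are invented for the example.

```python
# Illustrative sketch of a fit-for-purpose, composable container:
# features are registered lazily and initialized only on first use,
# so an instance's footprint tracks what the application actually calls.
class ComposableContainer:
    def __init__(self):
        self._factories = {}   # feature name -> constructor
        self._active = {}      # features started on demand

    def register(self, name, factory):
        """Make a feature available without starting it."""
        self._factories[name] = factory

    def feature(self, name):
        """Start a feature the first time it is requested."""
        if name not in self._active:
            self._active[name] = self._factories[name]()
        return self._active[name]

    def active_features(self):
        return sorted(self._active)

container = ComposableContainer()
container.register("transactions", lambda: "tx-manager")
container.register("web-services", lambda: "ws-engine")

container.feature("transactions")      # pulled in on first use
print(container.active_features())     # web-services is never started
# -> ['transactions']
```

The same structure points toward the footprint benefit mentioned above: features that are never requested consume nothing beyond a registry entry.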

- Application virtualization becomes a core competency: We can have esoteric arguments about whether cloud computing requires virtualization, but what is the point? Virtualization is elemental to nearly all cloud-based solutions (especially at the IaaS/PaaS layer). That said, most of the virtualization innovation has been focused on server virtualization and management. Going forward, those same concepts will be pushed up to the application. The capability to create virtualized pools of application instances, supporting extremely high densities and reducing management burdens of large environments will become the norm. In fact, those capabilities will become a core part of the application container.

These are but a few of the innovation areas I anticipate that PaaS will drive into the middleware application server space. You may think these are wildly off base, and you may have your own ideas about what PaaS is going to drive into this area. Whether you agree or vehemently disagree, I would like to hear what you have to say!

