As 2010 came to a close, many people were asking me how they could move third-party applications into the cloud… I’m guessing everyone was exhausting their 2010 budgets and purchasing software for the coming year. Now, at the beginning of 2011, I am hearing more from in-house developers who want to know how to move their own first-party applications into the cloud.
Hosting your application with an Infrastructure as a Service provider grants you the operational agility to scale your infrastructure very quickly, but your application needs to scale with your infrastructure. One of the benefits of cloud computing is that you can rapidly grow your infrastructure from 1 to 100 servers at a moment’s notice, but applications must have some way of sharing work across these newly provisioned servers in order to scale in any meaningful way.
From my experience, here are some common ways applications can be designed so that they scale horizontally:
Design Element #1 – Shard, Segment and Spread Out
Making sure that you have a clearly defined separation of concerns between components can be crucial. A common example is the RDBMS and the recurring question: “where should I place this database?” Traditional thinking might be that a database server should be an immense chunk of metal – a 64-CPU box with 16 TB of RAM. A single, mammoth database server that hosts everything may be less effective than several small database servers, each hosting a single schema. Six 2-CPU, 2 GB PostgreSQL servers may give you better performance than a single 16-CPU, 16 GB server. You also gain a greater ability to scale each instance vertically: by putting each schema on its own server, you can add CPUs or RAM, or even migrate to higher-performance storage, for just that schema.
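As a minimal sketch of this idea, routing each schema to its own small server can be as simple as a lookup table. The schema names and hostnames below are hypothetical placeholders, not a prescription:

```java
import java.util.Map;

// Minimal sketch: each schema lives on its own small PostgreSQL server,
// so "where should I place this database?" becomes a simple lookup.
// Schema names and hostnames here are hypothetical.
public class SchemaRouter {
    private static final Map<String, String> SCHEMA_HOSTS = Map.of(
        "billing",   "billing-db.internal:5432",
        "inventory", "inventory-db.internal:5432",
        "users",     "users-db.internal:5432"
    );

    // Return the JDBC URL for the server that owns this schema.
    public static String jdbcUrlFor(String schema) {
        String host = SCHEMA_HOSTS.get(schema);
        if (host == null) {
            throw new IllegalArgumentException("Unknown schema: " + schema);
        }
        return "jdbc:postgresql://" + host + "/" + schema;
    }
}
```

Because each schema resolves to its own server, any one of them can be moved to bigger hardware just by changing a single entry.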
This doesn’t just apply to in-house software development, either. If every department in your organization wants access to a Drupal portal, but each department maintains documents separately, why not have a stand-alone Drupal server for each department? Each instance can be managed identically, tied back to a central authentication server, and scaled independently.
Design Element #2 – Know Your Enterprise Integration Patterns
This design consideration should be of paramount concern when first architecting your applications to grow at a cloud scale. The Enterprise Integration Patterns first envisioned by Gregor Hohpe and Bobby Woolf have become increasingly relevant as enterprises build applications that need to scale on demand and integrate a myriad of third-party systems.
Enterprise Integration Patterns are to service-oriented architectures what the GoF Design Patterns are to software engineering. Pattern-driven development allows architects and developers to design applications that are easier to describe, communicate and maintain by building upon common, recurring elements.
Patterns become very evident in cloud applications, especially when it comes to message routing. Service-oriented applications have to implement the “Message Translator” pattern frequently – and it makes sense to develop them using a set of shared best practices.
Even better, projects like Apache Camel have been established that give you an Enterprise Integration framework out of the box. No need to write “glue code” – instead you can re-use Camel’s integration patterns and stick to writing business logic!
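The Message Translator pattern mentioned above can be sketched in a few lines of plain Java. This is just the pattern itself, not Camel’s API; the internal order type and the partner’s flat key=value wire format are hypothetical:

```java
// Minimal Message Translator sketch: translate an internal order message
// into the flat key=value format a (hypothetical) partner system expects.
// Apache Camel ships this pattern out of the box; this only illustrates
// the idea behind it.
public class OrderTranslator {
    public static class InternalOrder {
        public final String orderId;
        public final int quantity;

        public InternalOrder(String orderId, int quantity) {
            this.orderId = orderId;
            this.quantity = quantity;
        }
    }

    // Translate the internal representation into the partner's wire format.
    public static String toPartnerFormat(InternalOrder order) {
        return "ORDER_ID=" + order.orderId + ";QTY=" + order.quantity;
    }
}
```

With a framework like Camel, this translation step becomes a reusable processor in a route instead of glue code scattered across services.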
Design Element #3 – Quality of Service Counts
It happens… a flood of traffic comes out of nowhere and your system simply can’t handle the load. Perhaps a service has gone down completely and a ton of messages are now backed up, waiting to be processed. There are moments when the system has more inbound traffic than it can handle, and you simply can’t respond to every request.
There is an alternative to just shutting everything down. By enforcing a quality of service on your message production and consumption, you can offer diminished availability without becoming completely unavailable. One way to keep services from being overwhelmed is to reduce the number of messages being handled, which can be done by defining a time-to-live on your messages: older messages, ones that may no longer even be relevant, are allowed to expire and are simply discarded. You can also limit the number of messages being processed at the same time by dialing down concurrent message consumption, perhaps limiting a service to only 10 messages at a time.
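Both of those knobs fit in a few lines of plain Java. This is a minimal sketch, not a real broker’s API; the 30-second TTL and the 10-permit cap are illustrative values:

```java
import java.util.concurrent.Semaphore;

// Minimal quality-of-service sketch: discard messages past their
// time-to-live and cap concurrent consumption with a semaphore.
// The TTL and permit count are illustrative, not recommendations.
public class QosConsumer {
    private static final long TTL_MILLIS = 30_000;              // expire after 30s
    private static final Semaphore permits = new Semaphore(10); // at most 10 in flight

    // Is this message still fresh enough to be worth processing?
    public static boolean isAlive(long createdAtMillis, long nowMillis) {
        return (nowMillis - createdAtMillis) <= TTL_MILLIS;
    }

    // Process a message only if its TTL has not expired and a permit is free.
    public static boolean tryConsume(long createdAtMillis, long nowMillis, Runnable handler) {
        if (!isAlive(createdAtMillis, nowMillis)) {
            return false;           // expired: silently discard
        }
        if (!permits.tryAcquire()) {
            return false;           // saturated: defer or reject for now
        }
        try {
            handler.run();
            return true;
        } finally {
            permits.release();
        }
    }
}
```

Under load, expired messages fall away on their own and the semaphore keeps the service at a sustainable pace rather than letting it drown.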
Design Element #4 – Go Stateless
Maintaining state is the bane of a scalable application. Maintaining state often means persistence, persistence means storing your data in some central location, and a central data store is difficult to scale. Instead of having a large number of stateful or transactional endpoints, a scalable application should have a RESTful nature (without being limited to HTTP).
If you can’t avoid state… and you can seldom avoid state entirely in some shape or form… manage it using the power of Enterprise Integration Patterns. Consider using the Claim Check pattern along with a tuple space such as GigaSpaces, JavaSpaces, Blitz or SemiSpace. The check-in/check-out style of tuple spaces, paired with the Claim Check pattern, allows for the high degree of concurrency needed by transaction-oriented systems.
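A bare-bones sketch of the Claim Check pattern looks like this. A `ConcurrentHashMap` stands in for a real tuple space such as JavaSpaces; in production the store would be the shared, distributed piece:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Minimal Claim Check sketch: park a large payload in a shared store and
// pass only a small claim ticket through the message flow. Here a
// ConcurrentHashMap stands in for a tuple space (JavaSpaces, GigaSpaces,
// etc.); the pattern is the same either way.
public class ClaimCheck {
    private static final Map<String, byte[]> store = new ConcurrentHashMap<>();

    // Check in the payload; the returned ticket travels with the message.
    public static String checkIn(byte[] payload) {
        String ticket = UUID.randomUUID().toString();
        store.put(ticket, payload);
        return ticket;
    }

    // Check out (and remove) the payload when it is finally needed.
    public static byte[] checkOut(String ticket) {
        return store.remove(ticket);
    }
}
```

Because only the lightweight ticket moves through the pipeline, intermediate services stay effectively stateless while the heavy state sits in one well-defined place.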
If your enterprise application is engineered to scale to massive volumes, a managed cloud hosting strategy can let your software (and your organization’s costs) scale as well. With scalable design patterns, a cloud-ready app can bring in an increasing number of customers with a minimum of re-engineering.