
Guest post by Steve Pelletier, Cloud Solution Architect

One of the great promises of cloud computing is the ability to auto-magically consume extra capacity from the public cloud when your applications need it. This capability would allow organizations to build their data centers with enough capacity to handle day-to-day computing needs and pay a cloud provider for extra capacity only when they need it. This is much more cost-effective than building the data center for peak workload, as most traditional data centers are built today.

There are several challenges with bursting to the cloud. The two most prevalent are keeping data close to the computing resources working on it and coordinating network resources. Let’s use the example of a typical three-tiered application consisting of a web server, an application server, and a database server. If the web servers are getting overwhelmed, it would be nice to simply have some extra web servers created in the public cloud to augment the existing servers. The challenges with this are maintaining good response time when the new cloud-based servers need to talk to the application tier back at the main data center over a WAN link, and dynamically adding and removing the new web servers from the load-balancing pool as they are created and taken down.
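To make that second challenge concrete, here is a minimal sketch of how burst web servers might be added to and removed from a load-balancing pool programmatically. It assumes an AWS Application Load Balancer target group managed through the boto3 SDK; the target group ARN and instance IDs are placeholders rather than real resources, and other clouds or on-premises load balancers would expose similar register/deregister calls.

```python
# Sketch: add/remove burst web servers in a load-balancing pool (AWS ALB via boto3).
# The target group ARN and instance IDs below are placeholders, not real resources.
import boto3

TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/web/abc123"  # placeholder
)

elb = boto3.client("elbv2")

def add_burst_server(instance_id: str) -> None:
    """Register a newly created cloud web server with the load-balancing pool."""
    elb.register_targets(
        TargetGroupArn=TARGET_GROUP_ARN,
        Targets=[{"Id": instance_id, "Port": 80}],
    )

def remove_burst_server(instance_id: str) -> None:
    """Remove a burst web server from the pool once demand subsides."""
    elb.deregister_targets(
        TargetGroupArn=TARGET_GROUP_ARN,
        Targets=[{"Id": instance_id, "Port": 80}],
    )

if __name__ == "__main__":
    add_burst_server("i-0abc123def4567890")   # placeholder instance ID
    # ... later, when the traffic spike has passed ...
    remove_burst_server("i-0abc123def4567890")
```

In practice this logic would hang off an auto-scaling trigger rather than run by hand, but it shows the coordination piece: the load balancer has to learn about each burst server the moment it comes online and forget it the moment it is torn down.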

There are several ways to address these issues. One is to locate the application in the same data center as the cloud provider; another is to redesign the application to be more “cloud friendly.”

Many companies are making these changes – is bursting to the cloud Ripe and the efficient thing to do, or is it just Hype?
