
By Doug Garaux, Architect, Cloud & Data Center Consulting, Logicalis US

The Problem That Won’t Go Away

Here’s a question that sends shivers down the spines of many CTOs: “How much money do you have wrapped up in unused capacity?”


Because, well, you know the drill. You buy a server with 16 CPUs and use eight of them. But then your next initiative needs 10 more, so you buy another server with another 16 CPUs. Suddenly you have 14 CPUs rendered practically useless—flotsam capacity that’s valuable if you could use it cumulatively, but too small to be of much use when locked away on individual servers.
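If it helps to see that arithmetic spelled out, here’s a minimal sketch in plain Python that tallies the stranded CPUs in the hypothetical two-server scenario above (the figures are illustrative only, not from any real environment):

    # Hypothetical two-server example from the paragraph above.
    # Figures are illustrative only.
    servers = [
        {"name": "server-1", "total_cpus": 16, "used_cpus": 8},   # first purchase
        {"name": "server-2", "total_cpus": 16, "used_cpus": 10},  # bought for the next initiative
    ]

    for s in servers:
        idle = s["total_cpus"] - s["used_cpus"]
        print(f'{s["name"]}: {idle} CPUs sitting idle on this host')

    # 8 idle on server-1 + 6 idle on server-2 = 14 stranded CPUs in total
    total_idle = sum(s["total_cpus"] - s["used_cpus"] for s in servers)
    print(f"Total stranded capacity: {total_idle} CPUs")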

You’ll recall that virtualization was supposed to fix this problem. It certainly helped. But virtualization also turned out to be a double-edged sword: the freedom to provision instantly leads to an ever-growing number of environments that can quickly spiral out of control. The result is large amounts of capacity doing nothing but collecting cobwebs, or performance issues going unseen, lost in the shuffle or buried beneath layers of complexity. Both issues can carry significant costs.

After virtualization, cloud came into the picture. People thought, “If I just turn to a cloud service, my utilization issues won’t hurt as much.”  It’s easy to understand why: hyperscale economics mean that cloud resources are much, much cheaper than building and operating your own infrastructure.

Unfortunately, cloud didn’t deliver the freedom to consume public cloud resources cost-efficiently without ongoing maintenance. Left unchecked, the monthly costs can balloon into grossly inflated IT expenses, often because purchased capacity is simply sitting idle. Plus, the same configuration issues that plague on-premise VMs apply to cloud VMs, too.

(Keep in mind: public cloud offerings such as Microsoft Azure and Amazon Web Services make it very easy to see how much capacity you’re paying for, but very difficult to see how much you’re actually using.)


The bottom line is this: innovations like cloud and virtualization purported to fix resource utilization, or at least make it an irrelevant concern. They didn’t. And for those chasing the software-defined data center (SDDC) model—the ones for whom efficiency is paramount—that’s a problem.

After all, even if you unite on-premise and cloud environments into one, what good is it if you can neither see what you’re truly using across the board nor right-size capacity as needs change? What good is unparalleled agility if the sheer sprawl of your capacity bleeds you dry with unmanageable, unknowable month-to-month costs? How do you get savings from scalability if you’re flying blind?


IT Introspection


Our advice for efficiency-seeking CTOs: take a long, hard look at your IT resources before you make the move toward the software-defined data center and the service-defined enterprise. Because SDDC alone doesn’t fix the resource utilization problem, either.

In fact, only your IT department can do that.

Luckily, tools such as the VMware vSphere Optimization Assessment make it easy, and barely require you to lift a finger. These tools capture data in the background for a predetermined length of time, making note of capacity that’s going unused, identifying the root cause of performance issues, and pinpointing any misconfigurations.

After the assessment period is over, these tools provide you with detailed analytic reports on where you have room for improvement—like VMs lost deep in the development abyss, or a misconfigured server in a far-off data center that’s the root cause of costly performance drops.
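To make the idea concrete, here’s a deliberately simplified sketch of the kind of analysis such a tool automates: flagging VMs whose average utilization over the assessment window falls below a threshold. This is not the vSphere Optimization Assessment itself; the sample data and the 20% threshold are made up, and a real assessment also covers memory, storage, and configuration checks over a much longer window.

    from statistics import mean

    # Hypothetical CPU-utilization samples (percent) collected during the
    # assessment window. In practice these would come from your monitoring data.
    cpu_samples = {
        "vm-web-01":  [55, 60, 58, 62],
        "vm-dev-old": [2, 3, 1, 2],      # a forgotten development VM
        "vm-db-02":   [35, 40, 38, 41],
    }

    UNDERUTILIZED_THRESHOLD = 20  # percent average CPU; arbitrary for this sketch

    for vm, samples in cpu_samples.items():
        avg = mean(samples)
        if avg < UNDERUTILIZED_THRESHOLD:
            print(f"{vm}: avg CPU {avg:.1f}% -- candidate for right-sizing or reclamation")
        else:
            print(f"{vm}: avg CPU {avg:.1f}% -- utilization looks healthy")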

These assessments routinely help IT organizations unearth unused capacity and hidden configuration issues. Many CTOs find that bringing in an additional layer of IT expertise helps them make the most of the tool’s outputs and find the smartest way to alleviate any root-cause problems that are revealed.

Armed with deep insight into their resources, IT departments can use the results of these tools to realize significant gains. To give you an idea, here are some benefits generated with the VMware vSphere Optimization Assessment:

  • 4x return on investment
  • 30% more visibility into IT resources and usage
  • 53% lower IT costs, thanks to fewer issues and better utilization
  • 54% less downtime of business-critical Tier 1 applications
  • 67% more IT productivity, via less time spent dealing with configuration-related issues

Getting a firm grip on your resource utilization is a critical step towards software-defined data center. If you make the move with a lot of question marks around your actual usage, you run the risk of making poorly-informed decisions or creating an even bigger mess as virtualization spreads across your entire data center.

Since it’s unlikely you’ll be entering SDDC without any existing virtualization or cloud assets, it’s important to know the full status of your resource utilization from the get-go. That way you can ensure cost efficiency from day one and avoid, say, a massive grab of unnecessary storage capacity right out of the gate.

Chances are, there are well-hidden resource utilization and configuration issues lurking within your infrastructure. We’d be happy to help you find the right tool to uncover them, build an IT strategy that capitalizes on the opportunities it reveals, and implement solutions as an ongoing part of your SDDC strategy to achieve peak utilization and cost savings in your environment. Get in touch today.