Navigating Shifting Demand: The Next New Thing and its Impact on Data Center Real Estate

By Bill Dougherty, Executive Vice President, CBRE Data Center Solutions


It seems we are constantly at a “very interesting” point in the data center real estate industry. The problem with always being at an interesting point is that we become desensitized to fundamental changes. Here is why I think we should be paying attention.

Demand is strong and will continue to gain momentum for the foreseeable future. When we say demand, we aren’t talking about demand for wholesale data center space. We’re talking about the need to store, manipulate and derive intelligence from the overwhelming amount of data we are creating. According to industry estimates, enterprises are going to store an additional 25 zettabytes of data over the next five years. I have a hard time comprehending what that means. That is 25 trillion gigabytes. If we stored all of it with one of the top cloud services at the cheapest rate, our monthly bill would be $175 billion, roughly equal to the monthly GDP of India. It would take almost 8,000 years over a 100 GB/sec fiber optic cable to move all that data to a different cloud provider, and storing it would require about 1 billion square feet of data center space and 250,000 megawatts (MW).
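The arithmetic behind these figures can be sanity-checked with a short script. The per-gigabyte storage price is an assumption chosen to match the article's round numbers, not a quote from any particular provider:

```python
# Sanity-check the 25-zettabyte figures above.
# Assumption (illustrative, not a provider quote): cheapest cloud
# storage at roughly $0.007 per GB per month.

ZETTABYTE_GB = 1_000_000_000_000     # 1 ZB = 1 trillion GB
data_gb = 25 * ZETTABYTE_GB          # 25 ZB of new enterprise data

# Monthly storage bill at ~$0.007 per GB-month
monthly_bill = data_gb * 0.007
print(f"Monthly bill: ${monthly_bill / 1e9:,.0f} billion")  # ≈ $175 billion

# Time to move it all over a single 100 GB/sec fiber link
seconds = data_gb / 100              # GB divided by GB-per-second
years = seconds / (365 * 24 * 3600)
print(f"Transfer time: {years:,.0f} years")                 # ≈ 7,900 years
```

At that assumed rate the monthly bill lands at $175 billion, and the transfer time works out to just under 8,000 years, consistent with the figures in the text.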

And that’s just for the storage. We need the computing power to manipulate, analyze and create intelligence. We can’t begin to estimate what that demand equation looks like, but we know it will be substantial.

The question remains: how are we going to address the supply side of the equation? Just 10 years ago, every enterprise built its own data center. A committee would form around a table, dream of building the next Tier V data center, and create spreadsheets with 20 percent compounded annual growth rates. We ended up with 10-MW data centers on 200-acre sites that cost $300 million but carried only one or two MW of actual IT load. We have proven that building a data center once a decade, without supply-chain optimization, modularity or scale, and with uber-resiliency, is tremendously wasteful.
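A hypothetical illustration of how those committee spreadsheets go wrong: start from a realistic 1 MW of IT load, compound it at 20 percent a year, and by year 10 the forecast justifies a facility several times larger than the load that ever materializes. The starting load and growth rate here are illustrative assumptions, not figures from any specific project:

```python
# Hypothetical committee forecast: 20% compounded annual growth
# applied to a 1 MW starting IT load (illustrative numbers only).
load_mw = 1.0
for year in range(1, 11):
    load_mw *= 1.20
    print(f"Year {year:2d}: forecast {load_mw:4.1f} MW")

# After 10 years the spreadsheet says ~6.2 MW, so the committee
# builds a 10 MW facility -- while actual load stays at 1-2 MW.
```

The compounding alone turns 1 MW into roughly 6.2 MW on paper, which is how 10-MW buildings end up serving one or two MW of real demand.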

"Demand is strong and will continue to gain momentum for the foreseeable future"

The entrance of the wholesale data center in 2006 changed the paradigm. Companies could now rent space by the kilowatt (kW), and those 30-year capital decisions turned into commitments of 10 years or fewer. Equally important, break/fix responsibility and facility maintenance could be shifted to a third party. Everybody won, and returns were in the double digits. An abundance of capital flowed into a new real estate asset class. In the early years demand was volatile, but new social media companies balanced the equation. Gradually, even the most conservative companies bought into outsourcing the real estate portion of their data centers.

Fast forward 10 years. We have now come off the best two-year period in wholesale data center industry history. All markets report record positive absorption. More capital and capacity have been delivered than ever before. So what is the concern? The concern is who is taking the space, and why.

When you look inside the wholesale data center suites, you see that most customers use less than 50 percent of their contracted power capacity, and many of the servers drawing that energy perform little work. Since more capital is still spent on the servers inside the data center than on the data center infrastructure itself, and since those servers need to be “refreshed” often, the fiscal waste is a problem. For many, the solution will be outsourcing those storage and compute needs.

At about the same time the first wholesale data centers emerged, some really smart folks began offering infrastructure as a service (IaaS). Four of the five most valuable companies in the Fortune 500 offer IaaS. These companies are the cornerstones of the public cloud.

Five years ago, most enterprise customers said they would never use public cloud. Now, all of those companies have a hybrid cloud strategy. Even the largest cloud application-as-a-service companies have a hybrid strategy using public cloud services as part of their storage and compute delivery. Many startups have never known anything other than the public cloud. Demand that would have gone to wholesale data centers is going to data centers leased or owned by the public cloud companies faster than anyone anticipated.

These public cloud companies are responsible for most of the wholesale data center absorption, and their pace has only accelerated. So why worry? Because these companies would rather not lease wholesale data center space, and they will eventually build their own. The rationale for leasing wholesale data center space includes economies of scale, access to capital, return on capital, cheaper delivery costs, better uptime and speed-to-market. However, only the last factor appeals to these companies. They have better access to cheaper capital, they can create lease structures at 30 percent of the cost of any wholesale operator, and their application architecture requires much lower resiliency. They don’t really care who the operator is, only that the operator meets their schedule at the lowest possible cost. Once they solve their delivery conundrum, they should largely exit the wholesale data center market.

This shift will predominantly affect wholesale data center operators. Over the next 12 months, wholesale operators will need to decide how aggressively to build for future demand. Under-build and you miss huge transaction opportunities; over-build and you risk exposure when demand from cloud companies declines. Offsetting this, there will be substantial new demand from enterprise customers who need to replace their obsolete data centers. Even if all of those data centers migrate to a hybrid cloud solution, the non-public-cloud portion will require considerable wholesale data center space. We are also certain to see demand from new technologies that require huge computing footprints to extract value from all the data we are going to store. The firmest conclusion we can draw is that as soon as we think we have it figured out, the next new thing will come along. The next new thing will need more storage and compute than anyone anticipated, and we will need to adapt once again.