Data Center Efficiency – We’re Done Here!

27 Jun

In 2002, VMware server virtualization began making inroads in the data center, helping customers reduce the number of physical servers they deployed. During the same period, there were scattered one-off efforts to improve the efficiency of data centers, but no organized group or common metric behind them. Then in 2006, The Green Grid (TGG) was formed, and shortly thereafter it released a new data center metric called PUE (Power Usage Effectiveness). TGG subsequently released other useful metrics like WUE and CUE, along with the Data Center Maturity Model. The EU created the Data Center Code of Conduct, and ASHRAE began loosening its standards for humidity and temperature ranges in the data center. There has also been a boom in the use of outside air, which directly reduced the energy use of one of the biggest consumers (HVAC). All of the above and many more data center facility innovations occurred between 2002 and 2012. So surely now that it's 2014, 12 years after VMware server virtualization was introduced, it must be time to claim "We're Done Here!"?

We’re NOT Done, Not Even Close

In an article published by DRT on July 15, 2013, they discussed the results of a PUE survey of over 300 data centers. The survey results indicated an average PUE of 2.9. Yet in 2011 the average PUE reported by the Uptime Institute was 1.89. Another Uptime-related survey indicates that our efforts at reducing PUE are hitting diminishing returns and may even be headed back up. Either way, the average PUE should be down to 1.5 or lower by now and continuing to drop, but it isn't.
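For context, the metric itself is simple: PUE is total facility energy divided by the energy delivered to the IT equipment. A quick sketch (with hypothetical meter readings, not survey data) shows what the gap between 2.9 and 1.5 means in overhead energy:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (The Green Grid)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings (kWh) for the same 100,000 kWh IT load
it_load = 100_000
print(pue(290_000, it_load))  # survey average of 2.9 -> 190,000 kWh of overhead
print(pue(150_000, it_load))  # a 1.5 target -> only 50,000 kWh of overhead
```

At the same IT load, the difference between those two facilities is 140,000 kWh of pure overhead every month, which is why a stalled industry average matters.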

 

[Chart: average PUE survey results. Image from Enterprise Tech Data Center Edition]

 

Regardless of which set of data you believe, the sad fact is that as an industry, we've made very little real progress over the last 12 years. Sure, there is a lot of positive noise from a few players like Google, Yahoo, Microsoft, HP, the SUPERNAP (at 1.18), and several other Fortune 100 companies. What's missing is real progress in the other 90% of our data centers and engineering labs (AKA data centers).

Without Improvement we will get Regulated or Embarrassed into Change

We can't hide forever. Greenpeace has already targeted the big guys, so it likely won't be long before they realize the potential that's still locked in the other 95% of data center capacity around the world. If it's not pressure from Greenpeace, then it will likely be pressure from your government. The UK is already facing carbon emission taxation on data centers; how long before other countries follow suit?

The point is simple; running a data center effectively and efficiently takes dedication, persistence, a cross functional organization, and a specific set of skills and vision.

To the above point, most companies (90%+) don't run their data centers effectively or efficiently. It's not like there aren't great tools, training, and resources—quite the contrary. We have tools available today that I could have only dreamed of in 2002. We have Building Management Systems (BMS), Data Center Infrastructure Management (DCIM) solutions, and power readings at the PDU. There are wired and wireless sensors for everything from server location to outside air quality and everything in between. We have the use of outside air, and we can raise chilled water temperatures, along with 100 other options that are all easily available to us on the internet, at conferences, or via great organizations like TGG, Data Center Pulse, Open Data Center, Open Compute, and ASHRAE. So, why? Why are we still running our data centers like 17-year-olds with their first car?

In order to bring some outside opinion into this topic, I asked a simple question of the Twitterverse:

“Have we made the progress on Data Center Efficiency that we should have over the last 12 yrs?”

Here are a few of the responses:

[Embedded Twitter responses]

Why aren’t we doing a better job?

It's not the fault of any one role or person in a company as much as it's a generalized problem of assumptions around the meaning of "data center ownership." The single biggest inhibitor to greater success in the data center space is the lack of strong organizational support. In the vast majority of companies the data center manager is a "space" manager, not the owner of a critical function and resource. A "space" manager worries about whether the DC room access is secure, whether there's enough power to the new racks, and whether or not there are hot spots. All of these "space" management functions are necessary, but they should be part of a larger responsibility of "ownership." We have to face the facts: whether we're tree huggers who care about the future or not, if we don't address the gaping ozone hole that most data centers represent, someone else eventually will.

Without the role of Data Center Owner, we're unlikely to make real headway in the majority of data centers anytime soon. The issue is one of focus, risk, and reward. The current "Data Center Manager" isn't really a manager; s/he's more of a "Room Custodian" with a different set of skills. Until the data center is viewed and owned as a system (see the Data Center Stack) we will continue to focus on point solutions to specific technical and resource issues instead of looking at the larger picture. There needs to be a single throat to choke in the organization. Consider the situation where a company needs to build a new 100-million-dollar manufacturing plant. Is there any doubt the head of manufacturing would be the sole throat to choke? If the CFO or another exec wanted information on the performance of that expensive facility, do you think they'd call four or five different people? The simple answer is no, they wouldn't, yet that's exactly what we do with our data center resources today.

A Challenge

I challenge all data center operators to insist on an organizational design that supports the combination of roles and functions needed for successful data center operations and management. Yes, this means the proverbial Facilities vs. IT battle must be fought, and it means there will be training required for the individual whose role is elevated to "Data Center Owner." The Data Center Owner would be responsible for all aspects of the data center, from real estate to generator selection, to the impact of changing technologies, carbon emissions, and long-term planning. This is no small job, and for most of us who have had a similar role, it's one that is learned through osmosis over the course of a career. There isn't a "data center owner" class you can take.

I also challenge all data center operators to hold their partners to a higher standard of reporting, efficiency and sustainability. While managing a partner is much easier than building and operating your own facilities, it doesn’t change the fact that you still have to take responsibility for security, availability, and efficiency of your IT environments.

A Positive Industry Trend

More and more CEOs and CIOs are seeing that owning data centers is often more of an albatross than an advantage. I see this as positive because I believe a service provider is more likely to be running an efficient data center. Does that mean there isn't a need for any internal data centers? Maybe, but it will often depend on the services or products the company offers. What's more important is how getting rid of the data center headache might be an opportunity to position your company more effectively for the future. There are so many questions that have no good answers right now: How much public vs. private cloud will I be using? What will globalization of my business mean for data center requirements? How will increased regulations and reporting affect my data center capacity? Will modern equipment actually work in my legacy data center? I could go on and on with these questions, but the point is clear. There isn't a crystal ball telling you what to build, how to build, and where to build; and guessing with your company's hard-earned CapEx and corporate reputation seems like a bad strategy.

Yes, I’m Biased

There's no way I can deny the fact that I'm biased. However, those of you who know my work with organizations like TGG and Data Center Pulse know that I've been pushing for industry improvements for years while also making improvements in the environments I was responsible for. I will also say that I jumped from "internal IT" to the vendor side because I didn't see the commitment to data center excellence that I see where I am today. Lastly, you don't have to take my word for it; you're likely living this problem today, and if you aren't, you only have to ask a few peers and they will corroborate what I'm saying.

It’s Time

It’s time to do the right thing for your company by building a true data center ownership organization. It’s time to consider leveraging the appropriate partners to help you improve while also better positioning your company for success in the modern world of agile IT.

Related Blogs: 

http://datacenterpulse.org/blogs/mark.thiele/data_center_infrastructure_management_wheres_beef_0

http://www.switchscribe.com/data-centers-are-treated-with-less-care-than-a-batch-of-beef-stew/

http://datacenterpulse.org/blogs/mark.thiele/data_center_surprise_data_center_cost_ownership_and_budget_planning

http://www.switchscribe.com/the-biggest-impact-on-it-firefighting-business-agility-data-centers/

http://datacenterpulse.org/blogs/mark.thiele/single_owner_company_data_center_needed

Professionally Copy Edited by Kestine Thiele (@imkestinamarie)

 

Data Centers are treated with less care than a batch of Beef Stew?

29 Apr

Five Critical Steps in Achieving Greater Energy Efficiency in the Data Center

beef stew

 

Achieving greater energy efficiency is a fairly common goal and certainly the subject is well covered in the press and by industry thought leaders. However, I believe that in most enterprises the focus on energy efficiency isn’t system-oriented or holistic. In fact, I’m here to argue that most data centers pay less attention to the outcome of changes than a chef does when adding ingredients or spices to a beef stew.

1. Treating the Data Center as a System

The idea of treating the data center as a system isn’t new. In fact, Data Center Pulse published the “Data Center Stack” over four years ago, but the idea still hasn’t taken hold in most businesses. Using a systems approach seems harder than the alternative. The assumption is, “If I use the systems approach, I’ll have to communicate, investigate, evaluate, etc., before I make a change or determine the scope of an opportunity.” The aforementioned assumption is correct; however, just like effective change management and strong process, following a systems approach will likely lead to immediate and lasting benefits of efficiency and risk reduction (putting out fires before they start).

Using the systems approach will allow you to adopt strategies that have a holistic and lasting impact on the data center system. Without a systems focus you are just as likely to introduce inefficiency as you are to make a positive change.

A great example of a change that appears obvious on the surface is "raising the temperature." While it's very often true that running at a higher temperature will introduce some efficiency gains, let's look under the covers. Increasing the temperature because the servers can handle higher heat makes sense, as it means the HVAC works less, which in turn reduces your power use. On the other hand, depending on your server mix and the consistency of your environmentals, you might actually be replacing one problem with another. Yes, you're saving energy on HVAC, but your ICT gear might be using more energy to compensate for the higher heat. You are also introducing a potential risk: should your HVAC fail, you won't have any stored cool air to keep the servers from overheating and shutting down.

This HVAC scenario is but one example of how making a seemingly obvious "positive" change might actually do more harm than good. Using the Stack to evaluate the full system impact of changes is much more likely to lead you to make changes that actually reduce overall energy use without introducing additional risk.
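The HVAC-versus-server-fan tradeoff above can be put into a back-of-envelope sketch. Everything here is illustrative: the percentages are made-up placeholders, since the real values depend entirely on your gear mix and facility.

```python
def net_savings_kwh(hvac_kwh: float, it_kwh: float,
                    hvac_reduction_pct: float,
                    it_fan_increase_pct: float) -> float:
    """Back-of-envelope: does raising the setpoint actually save energy?

    hvac_reduction_pct  - fraction of HVAC energy saved at the higher setpoint
    it_fan_increase_pct - fraction of IT energy added by faster server fans
    Both are site-specific; the scenario values below are invented.
    """
    hvac_saved = hvac_kwh * hvac_reduction_pct
    it_added = it_kwh * it_fan_increase_pct
    return hvac_saved - it_added

# Scenario A: modern gear, fans barely ramp -> a real win (+6,000 kWh)
print(net_savings_kwh(80_000, 100_000, 0.10, 0.02))

# Scenario B: legacy gear, fans ramp hard -> the "saving" backfires (net loss)
print(net_savings_kwh(80_000, 100_000, 0.10, 0.10))
```

The same setpoint change produces opposite results in the two scenarios, which is exactly why the change has to be evaluated against the whole system rather than the HVAC meter alone.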

2. Embedding Rewards

We all have a specific focus in mind when we're working on something. We could be looking to make it faster or bigger. Maybe we want to reduce the cost of ownership or simplify how the customer uses a solution, but we don't always look at how the solution might affect our use of power or energy. In order for power use to become part of everything we do, our leaders must ingrain it. We can't act wastefully and talk sustainability. Like customer service, it needs to become part of who you are. Only then do efficiency and sustainability become ingrained in the activities of the entire team and everything they do.

There are a number of ways leadership can help introduce or reinforce ideas and/or new areas of focus. Demonstrating importance through action is the best way for a leader to communicate, regardless of whether the subject is ethics, customer service, or sustainability. While continuing to demonstrate it, the leader must also speak about it regularly. Lastly, team members will truly internalize a new objective, and the goals set against it, when it becomes part of their reward system.

A caveat on setting new goals for your team: don't overemphasize the importance of efficiency or power savings over delivering a better product or service to the business. If your team believes that the one thing that will get them noticed is saving energy, they will naturally focus on that at the expense of other activities.

3. Keeping Your Eye on the Ball

Regardless of the hype or reality of any specific new focus area, there is no changing the fact that for the vast majority of us, “saving energy” isn’t in our job title. Our company isn’t a “saving energy for you” company, therefore keeping perspective on those things that actually add new value to the business is critical. In other words, be careful what you ask for and be cognizant of where and how your team members are focused on opportunities.  Saving energy or writing efficient code is rarely the primary deliverable for a new application or service; so think requirements and innovation first, and power savings second or third.

4. Think Sustainability

Most of us think of sustainability as saving a few gallons of gas or recycling cans and bottles. When it comes to data centers there is much, much more to the story. Sustainability is directly associated with conserving (reducing cost or being greener, take your pick) and continuity (as in Business Continuity). When you plan sustainably you are much more likely to build or use solutions that will stand the test of time, while also helping to maintain or improve your business's corporate image.

Non-Traditional Examples of Data Center Sustainability:

You should build or lease facilities that can handle higher power density per square foot. The fewer buildings you have to build or lease the better, as it’s both green and financially sustainable for your business.

PUE (Power Usage Effectiveness), CUE (Carbon Usage Effectiveness), and WUE (Water Usage Effectiveness) are all great metrics for helping to drive the right behavior. The EU Data Center Code of Conduct and The Green Grid's Data Center Maturity Model are also great tools to leverage.
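For the two less familiar metrics, a minimal sketch of how they're computed, following The Green Grid's definitions (the annual figures below are invented for illustration):

```python
def cue(total_co2_kg: float, it_energy_kwh: float) -> float:
    """CUE = total data center CO2e emissions (kg) / IT equipment energy (kWh)."""
    return total_co2_kg / it_energy_kwh

def wue(total_water_liters: float, it_energy_kwh: float) -> float:
    """WUE = annual site water usage (liters) / IT equipment energy (kWh)."""
    return total_water_liters / it_energy_kwh

# Hypothetical annual figures for a small site with a 1 GWh IT load
it_kwh = 1_000_000
print(cue(450_000, it_kwh))    # 0.45 kg CO2e per IT kWh
print(wue(1_800_000, it_kwh))  # 1.8 L of water per IT kWh
```

Like PUE, both metrics normalize facility overhead against useful IT work, so they reward the same behavior: deliver more compute per unit of carbon and water consumed.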

Site location has a huge impact as well. Sustainability applies to your ability to maintain the flow of resources required to continue operations. If you build or buy in a place that needs water but might lose access or where there isn’t a good pool of future employees to pick from, you’re running a risk.

5. Build Power Savings into Your Designs and Purchasing Processes

I usually don't suggest building your own facility; instead, I highly recommend that you work with partners who have demonstrated, with verifiable metrics, an excellent track record of energy management and efficiency of use. If you are going to build, there's a long litany of considerations you must address to ensure the entire data center system is designed around efficient use of resources, especially power. Keep in mind that every resource you use has an energy and/or water factor associated with it. When you take that energy and water factor into account you'll understand the importance of efficiency and its associated impact on sustainability.

Purchasing should also be involved in your strategy to become more of an energy sipper than a guzzler. Purchasing doesn’t always have visibility into the underlying drivers for one product or service selection over another. Helping them understand the value of the entire lifecycle and supply chain impact of each product or service choice will help them more effectively prioritize decisions against factors other than price.

It’s not just about “a” technology

It would be nice if we could just buy cloud or lease data center space and assume our work was done relative to gaining more efficiency and becoming a more responsible corporate citizen, but we can't. It takes much more than a focus on a specific service or technology; it takes a cradle-to-grave, systems-oriented view of the entire IT/data center envelope.

Regardless of whether it’s Beef Stew or a Data Center

Whether making a batch of stew or operating a data center, the fact is, you can’t just add things to the recipe or change it without understanding the potential outcome. In the case of beef stew, you actually have considerably more leeway than you do with a data center, but it’s still a problem if you just change ingredient quantities, add things at the wrong time, or leave something critical out. The end result is that the food won’t taste right. At least with stew you only invest $20 and a few hours and then do it over again if it’s not right. It’s not so easy with a data center.

Professionally Copy Edited by Kestine Thiele (@Imkestinamarie)

 

The Biggest Impact on IT Firefighting & Business Agility – Data Centers

08 Apr

Protection & Value

Is it reasonable to assume that if you're buying a safe for all your valuables, you'd buy the one that offers the best combination of security and cost? That combination would be driven by your budget and the value (intrinsic or sentimental) of your precious items. I would guess that the same principle of budget vs. value applies to protecting your IT environments.

So many places to look, so many holes to patch

The normal enterprise IT environment is filled with hundreds of applications. In most cases each of these applications is supported by a unique design at the hardware and software level, if not also at the network layer. So much uniqueness in our IT environments means we expend inordinate amounts of time dealing with common problems in 100 unique ways. Maintaining these environments has become the bane of enterprise IT groups. By now, we've all heard the story of how keeping the lights on comprises 70-80% of the IT budget, leaving only a small amount for much-needed innovation.

Keeping the lights on has several meanings, including the mundane but critical "general maintenance and support" of each environment. However, keeping the lights on could also mean avoiding outages. Generically speaking, all of us in IT attempt to build and maintain environments with the highest possible availability (within budget and available resources). The problem is that we're often spending too much time fighting fires of "maintenance and support" and not enough time solving the underlying issues that cause many of the fires, or in this case many of the outages (same as a fire, only worse). Where should IT focus its attention relative to avoiding outages and/or reducing the number of fires?

If you can’t focus on everything

Few IT organizations have the luxury of throwing as much money or as many bodies at a problem as they'd like. So, if you have to pick which efforts will provide the most "keeping the lights on" value for the dollar, you should pick something that can be fixed for everything, and fixed once. How is that possible? How can I fix something for everything? Isn't that the opposite of focusing on something important and avoiding getting lost in the fire? The simple answer is no.

Data Center as a Platform (DCaaP)

There are only a few services and solutions that affect everything in IT—one of them is the data center and the other is people.  The two areas of people and data center are where the most value can be gained in virtually any IT organization when it comes to reducing risk and the threat of fires. That’s right, just two areas, both of which can be worked on with minimal impact to active production environments and in most cases without too much additional expense.

It’s well documented that humans are almost always the biggest single risk factor to the availability of systems. The more humans need to be involved, the more likely a mistake will get made and a failure will occur. We all talk about hardware failure and power failures, even viruses and software bugs, but if you want to reduce risk, you reduce the human touch factor.  The simple answer is that you need a combination of three things: good leadership, excellent process/automation, and solid training.

When it comes to owning and operating a data center as a system, it begins to get a little more complex. Most organizations fail to treat the data center as a system and are constantly dealing with components or services independent of the DCaaP.  While there are hundreds of discrete components and services that make up a functioning data center, it is no different than how you might work on a car. You don’t talk about replacing the tires on your car without considering whether they will fit on the rims, fit in the wheel well, or cause the handling to change. The same holds true with a data center as there is virtually nothing in a data center that can be changed without having some effect on the performance of the system.

Some of the better-known discrete components of a data center include power, HVAC, security, water, and environment (i.e., humidity, cleanliness, and temperature). Any one of these areas can cause fires in IT, and together they add up to high overhead if they aren't expertly managed. Often overlooked is networking/connectivity, which, while part of the data center, is also an underlying service bridging all applications. All of the aforementioned DC services, including connectivity, should be considered part of DCaaP. Imagine if instead of buying a bunch of air conditioners, routers, UPS units, PDUs, racks, sensors, ducting, cable, ladder rack, etc., you could instead buy a package. This likely isn't news to anyone, but that's what a colocation provider is supposed to offer: Data Center-as-a-Platform. Not all colocation providers are created equal, so just moving from your data center to theirs won't necessarily solve any problems and in fact could cause new ones. The real opportunity is moving to a colocation provider that can improve your level of service by driving the risk of fires toward zero and increasing your ability to address new opportunities.

Imagine improving customer and employee satisfaction, while giving your employer better tools, simultaneously lowering costs and reducing your carbon footprint.

Remove the issue and focus on business enabling priorities

Generally speaking, I don't believe in washing my hands of a problem; I prefer fixing it. However, building and operating data centers at the highest levels of efficiency, performance, and availability isn't for the faint of heart. As mentioned earlier, the majority of businesses don't have the organizational alignment that allows for building and running first-class data centers. It's also true that building a data center means locking in a 15-year business plan and a CapEx investment that isn't easily adjusted on the fly. In the modern IT space, building a 15-year business plan and locking in a bunch of CapEx isn't conducive to agility.

Reduce your overhead and fire risks while improving agility and lowering costs

Find a data center partner that makes it their life’s work to provide their customers the equivalent of an Indy car for agility, a Tesla for efficiency and safety, and an armored car for security in the form of the most efficient data centers with Tier IV Gold availability ratings combined with connectivity options beyond compare. What else can you do in IT that will reduce your overhead and put out many of your fires while also improving agility and lowering costs?  Another way of looking at this opportunity is that you’re helping to “future proof” your investments. When it comes to value, what better way to obtain value than by actually improving your operational capability and agility? Wouldn’t you rather focus on capability and agility first and have efficiency go up while costs go down, all as a side effect?

 

Professionally copy edited by Kestine Thiele (@imkestinamarie)

 

 

Is Shadow IT George Washington or is it Donald Trump?

27 Mar

In U.S. history, it's fairly well recorded that George Washington neither sought nor wanted the office of president. He reluctantly served a second term and refused a third. The nature of Washington is considerably different from that of the majority of the rest of us, and certainly far different from that of Donald Trump. Trump made it very clear that he's sticking to his guns, even after the effort to discover President Obama's foreign birth was proven futile. In fact, The Donald misses very few opportunities to let the public know he's the man. How does this relate to Shadow IT? Read on.

Hasn’t Shadow IT been talked about enough?

The discussion around the good, the bad, and the ugly of SIT has raged for several years. However, I’m hoping that taking a non-traditional look at one of the critical underlying issues traditional IT and SIT commonly face when dealing with each other will give the leadership of each group new food for thought.

If you're not familiar with the risks and rewards associated with the competition between SIT and the IT organization, I urge you to find some good articles* to give you more background. On the other hand, if you're familiar with and/or struggling with the issue right now, then please, read on. I'm not going to say whether SIT is good or bad, but I will attempt to shed some light on the "human" part of the equation.

I've been on both sides of this equation and feel I understand fairly well why SIT (alternative) organizations are initiated. This blog isn't an effort to say whether SIT is good or bad, but rather to shine a light on how the relationship with IT and the original goals can go wrong.

The battle between Shadow IT and the IT organization

As with most situations that involve humans, the “human” part of the equation must be taken into account if you wish to have any hope of getting to a satisfactory resolution. If you aren’t considering the human equation when dealing with a human problem, you’re missing the most critical aspect of the issue. It would be like ignoring the fact that you are driving a gasoline powered car and when it runs out of gas you put water in the tank and then yell at it when it won’t start.  What makes a mechanical device or a human tick is crucial to understanding how to deal with it. The easy problems to point at when SIT and IT are bickering are the “symptoms,” not the underlying dynamics that make the symptoms persist.  One could argue that the underlying dynamic is the perceived or real failure of IT to deliver, which then creates a space for SIT to fill. In many cases, you would be correct as to what “appears” to create the situation.  However, I would argue that in most cases it’s not the spark, and in many cases it’s not IT’s on-going delivery problems that keep SIT alive or make it thrive.

Why is Shadow IT often like Mr. Trump and less like George Washington?

When a cause is created the creator has a tie to it that is tough to break (President Obama’s Long Form Birth Certificate). In many cases, humans who have started a cause will continue to pursue the original aims of the cause long after the benefit or need ceases to exist (I’m sure Trump really believed that Mr. Obama wasn’t born in the US). When the need subsided Trump couldn’t let the issue go.  As humans we associate ourselves with our causes; for many of us, the “cause” is the work function we’re responsible for, and this association is what makes us who we are, right or wrong.

As the leader of a SIT group, you are by nature (in the majority of cases) at odds with the status quo (IT). You've made them the enemy and, as such, must defeat them through any means possible. Yes, that's right, I said "any means possible." When you need to prove your value in the workplace, you generally have two avenues to pursue: work better and harder than everyone else, or make the other guy look bad. When it comes to dealing with traditional IT, you probably started your SIT group under the assumption that IT already looks bad. What happens if IT does well? What happens if the real need for SIT seems to be diminishing? When IT starts to do well, your first reaction is likely to be, "I'll ignore them," and if they continue to do well, your next action is likely to be the initiation of the passive-aggressive (PA) phase of the relationship. In the early days of the PA phase you are anxious to point out perceived failures of delivery more aggressively; you might even try spreading doubt in the minds of your business leaders about the ability of IT leadership to succeed. In the case of Washington, he "needed" to do a job that the people demanded of him, but his goal was to solve the problem rather than create an empire. What do most leaders want to do? You guessed it—they want to build an empire. If they feel someone is threatening their empire, do they ask themselves, "I wonder if my opponent's arguments are correct, or if their services are actually better now," or do they close the drawbridge and man the battlements?

No easy answers

Unfortunately, there's no easy button for solving the SIT/IT death match, but there are some things worth trying. Mind you, I've tried some of the very advice I'm offering, and if you don't have the right leadership in place above SIT and IT, it likely won't work, but it's the best I've got:
  • Develop strong communication channels between the two groups. As the old saying goes, “Keep your friends close and your enemies closer.”
  • Ensure regular dialog between members of the executive team above each group. If they're aware of what each group is doing, they are less likely to make a rash decision about the future of either group.
  • Find a way to develop a partnership. There's usually enough opportunity (read: hard work) to go around for everyone. Find projects that both teams can work on together, and/or give the other group a project that they might have better skills to handle, or maybe more time and better funding. In the end, leadership is about getting stuff done. If you're getting stuff done, no one really cares how it happened or who did the work.
  • Look for opportunities to help the other in a time of need, but ensure everyone knows about the arrangement.
  • Make sure your counterpart is fully aware of the limitations you’re working with. You might be able to develop an ally from an enemy.

Happily ever after?

If you're lucky, the two groups will learn to trust each other and stick up for each other during debates over issues and/or funding. If you're really lucky, one of the two leaders will show their true leadership abilities by convincing their counterpart that a merger is the right approach. The only way to succeed is by keeping the human equation in the back of your mind during every debate, argument, or struggle for shared funding. The reason for your counterpart's position isn't important—what's important is that you deal with it in a way that's likely to show results. As evidenced in the news, you can throw rocks and yell all you want, but many leaders would rather lose everything than give up their failed cause. Take a long look in the mirror and be honest: am I George Washington, just trying to do the right thing, or am I Donald (a fool for all seasons) Trump?

 

A cherry picked “Rogue IT” blog by @jeffsussna

http://blog.ingineering.it/post/51501409916/why-it-needs-design-thinking

Blog copy edited by Kestine Thiele (ImKestinaMarie). I made some updates after Kestine finished editing, which likely ruined some of her perfectly good work. 

 

Data Center Project Euphoria vs Reality of Ownership

25 Feb

Building a Data Center equals Big Project Euphoria – It’s addicting

The excitement of building or acquiring something new is real, and it’s a thrill. You could be building a new pool in the backyard or a new data center for your company, but the story is the same. The thrill of working on something with an important and well-defined endpoint is palpable, and when you combine that with the narcotic of vendors and contractors worshiping you while you spend money, it’s downright intoxicating. Building or buying a data center is the “easy” part; “owning” it is what’s hard.

To “own” is more than to possess—like “owning” a car—there’s a little more to it:

- Owning a car means more than buying it. Owning means you must also insure it, clean it, service it, repair it, evaluate it for safety or sustainability, and eventually recycle or retire it.

The Question

I was speaking on a data center panel for Bisnow in San Francisco on Feb 20, and a question from the audience was tossed my way. I don’t remember the exact wording (so I’m paraphrasing), but it was something like this:

“What advice would you give to someone looking to build their own data center?”

Great question with an answer that’s potentially a mile long, but here’s how I responded:

“95% plus of all companies have failed to create the appropriate organization to build, operate, protect, monitor, sustain, and lifecycle a complex system like a data center.”

Then I went on to say:

“I’ve worked with some leading technology companies, and without exception, even those companies that should have known better, couldn’t or wouldn’t accept the fact that data centers deserved a different ownership model.”

I continued:

“Most companies fool themselves into believing they understand and have planned for the ramifications of owning a data center because of project euphoria. The siren song of the big project is just too much for most IT folks to walk away from.”

Temporary high

During the project, everything seems great—cats and dogs (facilities and IT) working together, finance helping out, the CFO chatting with you in the cafeteria, the CEO mentioning the data center at town hall meetings or to investors. Can you feel it? I can feel it, and all I’m doing is writing it down. We haven’t even started talking about the draw of having all these big vendors/partners fawning over you, treating you like you’re a king or queen; your wish is their command.

The problems start six to twelve months after the project’s completion. The glow has begun to fade, day-to-day responsibilities and priorities come creeping back, and what do you expect happens next? As the glow fades, the team’s focus on owning an expensive, complex, and critical facility begins to fade as well. Gone are the cross-functional meetings where everyone sang (past tense) Kumbaya; gone, too, are the pats on the back from the CFO. The CEO has gone back to forgetting your name, and what do you think happens next? You guessed it: focus shifts. Cost-savings assumptions are missed, but the misses go uncaptured. Efficiency guarantees are faked or avoided. Operational performance might even start to lag. It’s no one’s fault; it’s basic human nature. When we fail to capture the human equation in projects, leadership, friendship, etc., we will eventually fail at whatever it is we’re doing.

Additional Color from the next panel of experts

John Sheputis, CEO and founder of Fortune Data Centers, was on the panel after mine, and after referencing my “euphoria” answer, he added some color to further illustrate the original point. John basically said, “The type of work and focus needed to run a data center effectively is very different from running a short-term project. A data center requires day-in and day-out focus on being perfect and making marginal improvements, while avoiding risk to production operations.”

If it’s not already clear

What I’m trying to say is that owning a data center is a huge responsibility, and the bottom line is that few organizations are designed, measured, or rewarded appropriately to also run the data center effectively. So think long and hard about why you really need an internal data center. If, after thinking it through, you still believe that owning is better than renting/leasing, then by all means build a data center. Before you do, however, be sure to get corporate financial and organizational support lined up and guaranteed so that you can continue to own the facility effectively for the 15 years after its completion.

 

Cloud Management – It’s Not Just A Job

21 Jan

I was having a conversation with the CEO of a software company recently, and we got on the topic of cloud management. Of course, there are a number of ways this conversation could go, but we were largely focused on the adoption of specific cloud management platforms like Eucalyptus, OpenStack, and CloudStack. As the conversation drifted toward which product is suited to which business type, a light bulb went off in my little brain. During that momentary and fairly dim flash of light, I had the realization that maybe the whole world hasn’t even accepted that a cloud management platform is necessary or important. Blasphemy!

Buyer type vs. demand and market adoption

I don’t have the actual numbers, but I’m hypothesizing that fewer than 30% of businesses have accepted or internalized the idea that they will need a mature and supported cloud management platform (CMP). While 30% is a big percentage, it isn’t the majority. I would also be willing to bet that the majority of that 30% are companies that tend to be forward thinking and developer oriented. If my assumptions are correct, that leaves 70% of companies not knowing why they need a CMP, or whether they need one at all. These companies also have no plan, or at least no effective plan, for prioritizing, reviewing, and then selecting a CMP solution. The common characteristic among many of these companies will likely be the difficulty of justifying the investment in making a solution fit their needs when their growth and pace of change don’t demand it. In other words, ROI is difficult to capture.

When we talk about companies adopting one of the aforementioned CMPs, we commonly refer to name brands: PayPal (OpenStack), Zynga (CloudStack), or Sony (Eucalyptus). Each of those three company names helps increase buyer confidence, because buyers can see that forward-thinking enterprises are adopting. However, what these names don’t tell us is to what extent the CMP is being used, and whether a 1/10th-scale need would still provide ROI to the buyer. There’s a gap in industry messaging and buyer understanding among the three players discussed here. That gap relates to why a CMP isn’t just another HP OpenView or IBM Tivoli. In the past the messaging around a product like Tivoli was, “We can solve all problems relative to monitoring and reporting on your infrastructure, and to some extent your applications.” While both of these legacy infrastructure tools were great, the vast majority of companies (I’d venture 70%+) couldn’t justify the expense of the product, the effort to install it, and the ongoing support. So what did they do instead? They got 80% of the benefit at 15% of the cost and overhead by installing something like WhatsUp Gold or SolarWinds.

What’s needed?

There needs to be better education available on what the customer can “really” do at their specific scale at this specific time in their history. Another way to address the issue would be to answer this simple question for the customer: “Where and when will I fail if I’m not using a CMP?” Even with proper education, I don’t think the 30% number will change much for buyers of either OpenStack or CloudStack. However, the opportunity for a product like Eucalyptus to bite into the 70% is quite real. I believe the opportunity is real for the simple reason that most companies have VMware, and those on the smaller side are likely to be looking to Amazon for some of their cloud needs.

I could add even more complexity to this discussion by talking about where solutions like ServiceMesh (CSC) and Enstratius (Dell) fit on top of this equation. Instead, I’ll just point to a past blog on the strategic rather than tactical nature of your cloud management choices.

I’m a believer

I strongly believe that a well-managed cloud environment is critical to success at many levels, including ROI, risk mitigation, security, ease of deployment, and vendor choice. However, I also believe there isn’t a one-size-fits-all cloud management solution, and today the big three to select from are OpenStack, CloudStack, and Eucalyptus. While OpenStack and CloudStack fit neatly into very similar territory (service providers, large enterprises, web-scale environments), Eucalyptus fits a little more neatly into the everyday-buyer category. The trick for Eucalyptus at this point will be determining the best way to represent a complex but easily consumable solution to a group that believes it only has simple needs.

 

Professionally copy edited by Kestine Thiele (Imkestinamarie)

 

Thiele’s Blogs from 2013 – Most read, re-tweeted, argued about & commented on!

08 Jan

Of my 40-plus blogs from 2013, the following from www.switchlv.com/switchscribe & datacenterpulse.org made the most noise when they were published. In fact, several of them received well over 4,000 reads. Since many of these posts are still relevant, I thought I’d share them again for those who missed them the first time through.

Blogs from www.switchscribe.com (nine selected)

February:

Don’t buy, build or lease space in a datacenter with wood in the roof or ceiling!

There was considerable debate about this as many data centers in use today are not purpose built and therefore struggle with existing structural gaps.

March:

Not Your Daddies Data Center

My thoughts on the new found importance of the data center, in combination with characteristics that will make the data center successful. This was my most read Data Center specific blog on Switchscribe with over 5000 reads.

May:

Traditional Measures of Data Center Performance are inadequate

Many of us don’t apply good measures to begin with, but even worse is that the measures we do have are largely inadequate.

June:

The Server Closet Must Die!

One of the most hotly debated blogs of 2013

July:

Data Centers as Global Growth Enablers

I see the modern data center as infrastructure akin to the transcontinental railroad, manufacturing, communications, etc. Many still see it as a place to put servers.

August:

Top 12 Data Center Trends thru 2015

Self-explanatory!

 

September:

I can see clearly now the clouds are gone

My first and only blog with a song as a theme!

October:

OpenStack – The future of Infrastructure Management?

This is another of the more contested blogs. In fact I’ve probably had more continuing and follow up questions and discussions on this blog than any other. I plan to write a follow up on the topic in coming weeks, as I believe there is opportunity to better define the space that a cloud management platform resides in and who the likely user will be.

December:

Public Cloud vs. Private which is Better?

It’s not really about which is better in the abstract, but rather which is better where & when. In retrospect I wish I had been more specific about how hybrid cloud will play an important role here.

Blogs from Datacenterpulse.org (four selected)

January:

No Man is an Island and Neither is your Data Center

A discussion on the importance of location for critical infrastructure

June:

The Pain and Risks of Ignored IT Infrastructure

This blog received almost 4,000 reads and was discussed at length on Twitter and LinkedIn. Apparently, I hit a nerve with many readers.

July:

Keeping IT Relevant isn’t About the Title of the CIO

This blog received almost 4500 reads. It’s really just my take on substance vs. naming for the role (current and future) of the CIO.

October:

Enterprise Legacy Environment Cloud Adoption vs. Netflix

Another of what I like to refer to as “common sense” oriented blogs on how and why certain organizations will have different cloud adoption strategies.

 

Public vs Private Cloud – which is better

17 Dec

Cloud Adoption Trends – Is Private Right?

There’s an amazing amount of teeth gnashing around private vs. public cloud these days, just as there was in days past. In this case, though, I’m not going to entertain the debate over whether private cloud is real; rather, I’m going to talk about how different IT organizations might approach a cloud decision given their own unique variables.

There isn’t a one size fits all answer

Nope, cost isn’t the answer, technology architecture isn’t the answer, and security isn’t the answer either. In fact, no one answer is correct, but in some cases all of them are. The first priority for the business is running a successful enterprise. The first priority for IT is to provide solutions that help the enterprise achieve that success however the business chooses to measure it. In other words, the right answer is the one that best enables the success of business objectives. The right answer could be private, public, mainframe or all of the above, and that answer will be dynamic, just as modern business is.

Common Private vs. Public Cloud Decision Themes

There is a wide range of requirements that the industry, and those of us who write about cloud, have used to try to convince you that one option is always best. I’m here to say that there is no “always” when it comes to picking IT solutions, and cloud is no exception.

| Public Cloud Themes | Private Cloud Themes |
| --- | --- |
| Massive scale requirements | Control |
| Cost | Security |
| Staffing | Cost |
| Geo-distribution requirements | Support structure |
| Speed of access & delivery | Compliance |
| Options | Steady, consistent business growth |
| Ecosystem of partners | Agility |
| Support structure | Well-understood usage |
| Agility | Ecosystem of partners |
| Elasticity at scale | Speed of access & delivery |
| Compliance requirements | Compliance requirements |
| Rapid business growth | Legacy applications & infrastructure |

 

Weird, huh?

No, your eyes aren’t deceiving you: what you’re seeing in the table above is in fact true. Many of the supposed “drivers” for selecting public vs. private apply, in the right circumstances, to both options. The magic is in where and how each of the above variables applies and what importance each should be given.

Each company must make its own value judgments

Cost is still often identified as a key benefit of cloud adoption, as are massive scale, geo-diversity, and elasticity at scale. The issue with any or all of the aforementioned cloud “qualities” is that they only matter under the right set of circumstances. There is incredible complexity in defining the best option for each company at the time they’re making the choice, so the following three scenarios have been simplified. I’m sure that if I attempted to cover every scenario, I would need at least 100 different models.

Scenario 1: 20 plus year old mid-size enterprise (Not internet driven) 

| Factor | Situation | Implication |
| --- | --- | --- |
| Legacy | 1,000-plus legacy applications | Many won’t be moved to cloud (any cloud) for 5 years or more |
| Staffing | Small IT team with limited developer skills | Likely to want packaged solutions that deliver rapid benefit, even at a cost premium |
| Well-understood usage | Application usage characteristics are well understood (limited short-term scale needs) | Limited elasticity requirements mean the “massive” scale of public cloud doesn’t provide additional value |
| Agility in moderation | Agility is desired for competitive advantage | Rapid delivery is important but not measured the same way as in an internet-driven business (days vs. months is OK) |
| Steady moderate growth | Business growth of 7% or less CAGR | Steady growth implies fewer surprises for IT requirements at scale |
| No real geo-distribution needs | Limited geographic diversity requirements for apps | A few locations, but primary app usage is at HQ |

 

The likely answer for Scenario 1 is private cloud, with public cloud used for a few applications and some development. The fact that the majority of workloads are well understood and don’t experience significant usage spikes means that a slightly over-provisioned private cloud environment is likely more cost effective in the long run. The limited size and experience of the team also means they would most likely benefit from a packaged cloud solution (converged infrastructure with a CMP). As speed isn’t the primary requirement and there are only a few key office locations, the need for a widely distributed public-cloud-based application set is diminished as well. The focus for new applications should be on SaaS wherever possible.

Scenario 2: 7 year old internet facing business

| Factor | Situation | Implication |
| --- | --- | --- |
| Legacy apps | Limited set of legacy applications | Active projects underway to retire all of them as opportunity provides |
| Staffing | Good-sized IT team focused on enabling an internet-driven business model | Focused on solutions that can scale, and scale at a manageable cost |
| Elasticity, geo-diversity, support structure | Primary applications are internet facing, and each can vary wildly in use depending on product launches and seasonal buying patterns | Elasticity at scale is critical, along with an ability to rapidly deliver updates |
| Agility | Agility is desired for competitive advantage | Agility is measured in hours vs. days and applies to the entire company |
| Rapid growth, geo-distributed, internet oriented | Business growth of 15% or more CAGR | Growth can be volatile and difficult to plan for |
| Geo-distribution | Widely distributed workforce with developers and contributors in offices all over the world, plus a customer base that is globally distributed and dynamic | Geo-diversity is critical for application performance and fault tolerance |

 

In Scenario 2, the primary usage characteristics of the company (global distribution, speed to market, many locations for customers and staff/engineering) suggest that the focus should be on utilizing public cloud for most solutions. If there are internally focused applications that are fairly steady in their use, then the addition of hybrid and/or private cloud could make sense.

Scenario 3: 20 plus year old large financial Institution

| Factor | Situation | Implication |
| --- | --- | --- |
| Legacy applications | 1,000s of custom-built applications, some delivering millions in revenue | Some projects to retire or move applications to stateless environments; heavy focus on building new apps for cloud; many existing apps impossible to move to public cloud |
| Staffing | Large IT team focused on enabling large-scale, cost-effective, performance-oriented infrastructure | Solutions that scale, potentially involving staff investment in open source (i.e., OpenStack/CloudStack/Red Hat/automation, etc.) |
| Elasticity, geo-diversity, support structure | Extreme elasticity applies to a few key apps for trading, Monte Carlo, big data analytics, etc. | In some cases this could be handled by public cloud/grid infrastructure; some applications will be better suited to hybrid/private cloud |
| Agility | Agility is a driver, as for most businesses, but is not the sole reason for cloud operations | Agility is measured in days vs. months |
| Steady growth, geo-distributed, heavy IT investment | Business growth of approximately 12% CAGR | Growth is fairly well understood |
| Geo-distribution | Widely distributed workforce with developers and contributors in offices all over the world, plus a customer base that is globally distributed and dynamic | Geo-diversity is critical for application performance and fault tolerance |
| Compliance | Heavy regulatory and compliance-based risks | Require contractual guarantees with providers, or internal solutions |

 

With Scenario 3, the environment is fairly complex and doesn’t fit a single solution. The fact that they have a strong IT team and a fairly large internal set of applications means private cloud is a real option for them. They are also likely in a position to develop unique private cloud environments instead of focusing on pre-packaged converged infrastructure. However, public cloud is potentially an ideal solution for some customer-facing applications and/or applications requiring elasticity at scale. An open item is compliance; using public cloud would depend on the provider’s credentials along with usage trends for any given application.

It Depends

As you can see from the definitions of each company’s unique environment and how those unique needs help define the priorities and strategy for technology adoption, there isn’t a one size fits all option for cloud. My belief is that for the next 5-10 years we’re likely to see the majority of companies with over 200 employees using a hybrid set of cloud based solutions, which include private, public, hybrid and SaaS. Farther down the road, who knows, maybe we’ll get to the magical low cost commodity cloud that will suit all.

Use the scenarios

Using the scenarios provided as a model, you should be able to ascertain some of the critical decision factors in making a cloud choice for your organization. Having a firm grip on what your teams are capable of, in combination with what a specific solution requires will help you to better position IT as a partner instead of a roadblock.
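The scenarios above boil down to weighing a handful of factors by importance. As a worked illustration, here is a minimal weighted decision matrix sketch, shaped roughly like Scenario 1. Every factor name, weight, and fit score is an illustrative assumption; substitute your own before drawing conclusions.

```python
# Hypothetical weighted decision matrix for the public-vs-private choice.
# Weights and 0-5 "fit" scores are illustrative assumptions only.
FACTORS = {
    # factor: (weight, public_fit, private_fit)
    "legacy applications":   (0.25, 1, 4),
    "elasticity at scale":   (0.10, 5, 2),
    "geo-distribution":      (0.10, 5, 2),
    "staff depth/skills":    (0.15, 4, 2),
    "compliance":            (0.15, 2, 4),
    "well-understood usage": (0.25, 2, 5),
}

def score(option: int) -> float:
    """Weighted fit for one option (0 = public, 1 = private)."""
    return sum(w * fits[option] for w, *fits in FACTORS.values())

public, private = score(0), score(1)
print(f"public: {public:.2f}  private: {private:.2f}")
print("lean:", "public cloud" if public > private else "private cloud")
```

With these particular (assumed) weights the matrix leans private, which mirrors Scenario 1; a Scenario 2 company would weight elasticity and geo-distribution far higher and flip the result.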

 

Cloud Adoption Trends – The Ascent of the Cloud Vertical

03 Dec

Just about everyone is attempting to define the future of IT infrastructure ownership right now. However, there is little agreement amongst the IT and vendor communities as to what the future of cloud adoption looks like. Will we have one giant grey cloud? Will there be industry verticals? Or are we all going to continue building private environments? The likely answer to these questions is “YES.” The idea for this blog came from a discussion with Jason Mendenhall (@jasonmendenhall), EVP of Cloud at Switch. He and I are regularly discussing and debating the potential future state of IT.

There is no “one” cloud


I’ve written that there is likely no single cloud offering that accommodates all the specialized needs of small, medium, and large enterprises, and I think more and more of us are coming to that same conclusion. The evidence of this multi-cloud future is the cash being spent on acquisitions. In the last few months, Enstratius (Dell), ServiceMesh (CSC), and Tier3 (CTL) have all been purchased. A primary focus for each of these companies was supporting an expected enterprise need to run and manage a diverse cloud environment. Now that they have been acquired, it’s safe to say their acquirers believe in a multi-cloud future as well. Service providers are ensuring that a multi-cloud strategy can be effectively run by their teams. And yes, that means there is a cloud service to operate and manage your clouds, unless you feel the need to own the software.

Variables in IT and business drive need for a multi-cloud universe

There is a significant number of drivers that dictate what type of infrastructure is best suited to support a specific workload requirement. These workload variables, which determine whether we’re delivering the best infrastructure in the right place for the right cost, are part of what keeps our industry interesting.

Variables by category:

Performance options: I/O, bare metal, low-power CPUs, high-power CPUs, more memory, flash, bigger disk, Fiber, InfiniBand, etc.

Company:  Size, maturity, global distribution, industry, appetite for risk, growth expectations and more

Staffing models: Developer oriented vs. operations, skill level in modern disciplines, and experience with IT transition requirements for a move to agile IT

Application landscape: Complexity of integration, percent legacy vs. cloud ready, data intensive, history of build vs. buy, etc.

Customer type & expectations: Customer performance requirements/success criteria will drive infrastructure design considerations. From one company to the next, the customer might have very different performance requirements for the same application.

You should also consider that almost every day new tech for application and infrastructure design is introduced to the market. In many cases this new tech combines with a new market opportunity which could drive the need for scale or performance over price, etc. What does this complex set of variables suggest about our future adoption strategies?

The future is cloud verticals in place of application centric vertical infrastructure designs

Our future IT environments are likely to include three to seven distinct cloud solutions. Some or all of these clouds could potentially come from one “future” vendor, but the more likely scenario is that you’ll have two to four providers. These clouds will be optimized around the requirements of specific workloads: HPC, big data, web scale, back office, high transaction, high I/O, security, shared use, private, compliance requirements, etc. In effect, we’ll be recreating the verticals we’ve all come to hate in our legacy environments. Now hold on a second: just because we hate the legacy verticals doesn’t mean we have to hate the cloud verticals. When we consider the transition period of moving off legacy apps, in combination with flexible ownership models, the impact of using cloud verticals is much less negative than that of a legacy environment. It’s also true that a serious industry focus on solving the headache of managing heterogeneous environments will make owning several clouds much less painful than owning a hundred different hardware stacks.

Legacy verticals

Historically, legacy IT environments were built around the specific needs of virtually every application. If you were installing MS Exchange, you would build a set of servers or a server cluster one way. If you were putting in a finance application, you might build something very different. The long-term impact, in aggregate, of this legacy vertical infrastructure design pattern was a highly complex, very costly pile of smoking (insert word of choice here). This legacy design pattern was one of the leading reasons for the adoption of virtualization. As server capabilities improved and virtualization was widely adopted, many IT organizations worked to reduce their infrastructure verticals into pools of common resources based on expected performance requirements. Virtualization had the potential to reduce hardware use, improve availability, simplify recovery, and consolidate a huge number of verticals into a relatively small number (100s or 1000s down to fewer than 10). However, even with virtualized pools of resources, IT groups were often forced to make pragmatic design decisions: either they built for the highest performance requirement or for some middle ground. The opportunity with easily adopted cloud is that we can use infrastructure designs highly optimized to the needs of each category of application, without worrying about long-term ownership and TCO headaches.
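The consolidation arithmetic behind that shift is easy to sketch. The application names, server counts, and utilization figures below are purely illustrative assumptions, not data from any real environment:

```python
import math

# Per-app "vertical" server counts, each host sitting mostly idle
# (all numbers are made up for illustration).
verticals = {"exchange": 12, "finance": 8, "crm": 10, "intranet": 6}
avg_util = 0.125      # assumed pre-virtualization average utilization
target_util = 0.75    # assumed safe target on a shared virtualization host

physical = sum(verticals.values())
busy_equiv = physical * avg_util             # work in "fully busy server" units
hosts = math.ceil(busy_equiv / target_util)  # shared hosts needed

print(f"{physical} dedicated servers -> {hosts} virtualization hosts")
```

Here 36 dedicated servers collapse into 6 shared hosts, the same kind of hundreds-to-single-digits reduction in verticals described above, just at toy scale.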

Cloud verticals

Today, for the majority of midsize or larger businesses, a multi-cloud environment designed around delivering the right performance and capability at the right cost is likely the best option. Also keep in mind that many companies are already offering or using the services of many different clouds from vendors like Google (GOOG), Microsoft (MSFT), NetSuite (N), ServiceNow (NOW), Intuit (INTU), Salesforce (CRM), and many more in the form of SaaS. Each of these vendors’ clouds is designed specifically for its application set, and each includes the company’s special infrastructure sauce. These vendors offer a “lights-out” path to solving a problem. They are the chain pliers to the chandelier installer. But what happens when your cloud needs drop below the application-only layer?

The glue holding the cloud multi-verse together

Three things are required for effectively selecting, implementing, operationalizing and benefiting from a multi-cloud environment:

  1. An organizational design that supports all the benefits of an agile IT delivery model
  2. A management platform (or combination of platforms) that enables common policy, governance, deployment, lifecycle, and DevOps processes across clouds
  3. Tools and methodologies for selecting and contracting the appropriate cloud resources
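To make the second requirement concrete, here is a minimal sketch of what “common policy across clouds” can look like in code. All class names, the region allow-list, and the deploy contract are hypothetical; a real CMP exposes far richer interfaces than this:

```python
# Minimal sketch of a cross-cloud deployment abstraction (names hypothetical).
# Each provider-specific driver implements one deploy() contract, so policy
# checks run once, in one place, for every cloud.
from abc import ABC, abstractmethod

ALLOWED_REGIONS = {"us-west", "eu-central"}   # assumed governance policy

class CloudDriver(ABC):
    @abstractmethod
    def deploy(self, workload: str, region: str) -> str: ...

class PublicCloudDriver(CloudDriver):
    def deploy(self, workload, region):
        return f"public:{region}:{workload}"

class PrivateCloudDriver(CloudDriver):
    def deploy(self, workload, region):
        return f"private:{region}:{workload}"

def governed_deploy(driver: CloudDriver, workload: str, region: str) -> str:
    # Common policy gate applied before any provider-specific work
    if region not in ALLOWED_REGIONS:
        raise ValueError(f"region {region!r} violates placement policy")
    return driver.deploy(workload, region)

print(governed_deploy(PublicCloudDriver(), "web-tier", "us-west"))
```

The design choice is the point: policy and governance live above the drivers, so adding a third or fourth cloud means writing one more driver, not re-implementing the rules.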

Of course there are other things you need in order to manage modern IT infrastructure successfully, but the above three are core; each bullet covers several important points.

If it was easy

If managing complex IT systems were easy, anyone could do it. This transition will be tough, but it’s definitely achievable and worth it. Making the right value judgments about what to own, how much to own, and when to disown will be critical for long-term success. So go out and be fruitful. Plant all the clouds you need, whether it’s one or five, so you can deliver the best possible opportunities for your business, and do it within an ecosystem that can support it.

Additional Related Resources:

There’s No Need To Be A One Cloud Company

Cloud Management Is A Corporate Strategy

Openstack – The Future of Cloud & Infrastructure Management

Enterprise Legacy Cloud Adoption vs. Netflix

The Importance of a Strong Technology Ecosystem

 

The impact of Mobility on your data center decisions

07 Nov

Why is the trend towards mobility a major factor in your data center design and deployment strategy?  Will data centers become obsolete or will they require new designs? Maybe we’ll need to build them so they look like a smartphone? Maybe I’ll be able to talk to my data center, even give it a name. The whole idea for this blog was generated by a conversation with Mr BYOD Brian Katz (@bmkatz), who is always a great guy to test ideas with.

Mobility doesn’t change the physical data center per se

Why would anyone think an increase in mobile (smartphone, tablets, and laptops, etc.) use might have an impact on data center design or use? There are a number of likely reasons, some valid and some based on misconceptions about what a data center is and how mobile solutions use them.

There’s a difference between a “data center strategy” and “data center design”.  There is also a delineation to be made between what Cisco, HP, or VMware call a data center (the IT infrastructure) and an actual data center facility. The impact of mobility is much more likely to be felt in data center facility strategies. If we think through how the use of compute in a mobile context utilizes backend systems like the data center, we might gain a better view of reality vs. hype.

Let’s start by ensuring we’re all on the same page about what constitutes mobile. Mobile, in the context of this blog, pertains to all forms of compute, communication, data creation/sharing, and/or consumption/output devices that are easily portable. The next thing to remember is that a text message sent via a cellphone is no different from one sent via a desktop computer. An email generated on a tablet is exactly like an email created on your office computer. Data generated from wearable computing devices such as a Fitbit or Google Glass is no different from data created on an office-based machine or a video camera.

So what’s the difference? There are several real differences between traditional compute models and the mobile options I mentioned above. The primary differences are where and when data is created, how it’s transmitted, how it’s likely to be used, and lastly how it’s likely to grow.

The real impact on your data center strategy

Let's step back for a second and think about the data center strategy of old (1970 – 2010), which was very fixed in nature and tended towards fewer data centers, mostly owned by the companies creating the data. In other words, a company would define its data center strategy on a fairly simple set of criteria: office population locations, where specific types of work would be done, and the scale and resiliency requirements. There are many other criteria, but these were the biggest drivers dictating where your data centers were, how many you had, and how big they were.

In the mobility-influenced future, how will a company determine with any real degree of accuracy where most of its work will be done, and for how long? This is one of the biggest questions that mobility places on the doorstep of the IT organization. Historically, owning and building data centers has been like living with a fifteen-year business plan. No (effective) business plan has ever lasted more than three years, so the old strategy was already broken; mobility only compounds the situation. Let's break down the last sentence from the previous section: the primary differences are where and when data is created, how it's transmitted, how it's likely to be used, and lastly how it's likely to grow.

Where and when data is created – The fact that data can be created in different locations, with varying degrees of scale and importance, creates several points of stress. These include traditional office-to-data-center network connections and the location of the data centers relative to where the data is being used by applications or consumers. In the case of the network, since you no longer have fixed point-to-point use characteristics, your application designs and your global network capabilities will need to be assessed and modified. You must ensure that regardless of where demand arises, your infrastructure and data center locations can support latency and performance requirements. How is that problem resolved relative to data center locations? My suggestion is that we develop our data center strategy using much the same thinking that goes into how, where, and by whom our manufacturing is done. Few large companies today do the majority of their own manufacturing, for obvious reasons: investment, scale management, location and/or political benefits, business dynamics, etc. How we utilize compute in the modern era of mobility and agile operating models is no different. So the need here is to strategically consider what you should own: identify partners who can help you manage regional and global distribution via cloud, hosting, or colocation services, with an underlying goal of keeping 20% or less of your capacity in internal data centers long term.
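To make the latency requirement concrete, here's a rough sketch (my numbers, not from any survey): the round-trip time between a user and a data center has a hard physical floor set by the speed of light in fiber, roughly 200,000 km/s, so no amount of network engineering can make a distant facility feel local. The city pairs and distances below are illustrative assumptions.

```python
# Rough best-case round-trip time (RTT) estimator for data center placement.
# Assumes signals travel through fiber at roughly 2/3 the speed of light in
# a vacuum (~200,000 km/s) and ignores routing, queuing, and equipment delay,
# so real-world RTTs will be higher.

FIBER_KM_PER_MS = 200.0  # ~200 km of fiber traversed per millisecond, one way


def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip latency in milliseconds for a given fiber distance."""
    return 2 * distance_km / FIBER_KM_PER_MS


if __name__ == "__main__":
    # Hypothetical user-to-facility distances for illustration only.
    for pair, km in [("NYC-Chicago", 1150), ("NYC-London", 5600), ("NYC-Sydney", 16000)]:
        print(f"{pair}: >= {min_rtt_ms(km):.1f} ms RTT")
```

The point of the sketch is that once users are mobile and demand can appear anywhere, only the placement of capacity (owned or partnered) can keep that floor acceptable.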

How it's transmitted – There isn't any real difference in how data is transmitted, but there is a difference in how your network will need to scale and how fungible it might need to be. The performance of mobile devices over a cell or wireless network is difficult, if not impossible, to manage, so your best bet is to make your network as big and flexible as you can afford. Big applies to where you have I/O into a facility, and fungible applies to how easily you can modify your network capabilities and capacity. In other words, you want the shortest possible (affordable) distance, and you want the biggest pipes with the most manageable contract language. These demands are likely to translate into more IP capacity and capability (vs. fixed or MPLS-type designs), with contracts that allow for low-cost, short-term use (one month) and little or no minimums.

How it’s likely to be used – This area of concern overlaps each of the previous two bullets. The method of use will dictate network capability, along with data center locations and scale. Again, we’re only talking about the facility, not the IT infrastructure.

How it's likely to grow – This, folks, is the proverbial billion-dollar question. Many have written about the potential explosion in technology use as a result of cloud computing (see Jevons paradox or even my old blog), mobility, wearable computing, and the Internet of Things (IoT). I think it's safe to say that we still don't have a clue as to how big our technology footprint will become in coming years, but it's likely to get very big, in the neighborhood of quadrupling from its current total spend by 2023. So even though we're seeing miniaturization, greater compute density with lower power use, and increased disk and memory capacities, we will still see our environments as a whole more than double in size over the next ten years. So while mobility doesn't necessarily change data center design, it could definitely contribute to your scale requirements. Given that the impact of this explosion on scale isn't easily quantified or timed, it further justifies distributing your risk relative to data center ownership. Lastly, as the demand for compute accelerates, the need to save resources, space, and cost, along with sustainability pressures, will continue to drive compute density. Therefore, an indirect impact of mobility, cloud, etc. will be a greater need for data centers that can handle high-density environments of 25kW or greater per cabinet.
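For readers who like to sanity-check projections, here's the arithmetic behind those growth figures. Note that "quadrupling by 2023" and "more than doubling in ten years" are the projections discussed above, not measured data; the sketch just converts a growth multiple into the compound annual rate it implies.

```python
# Convert a growth multiple over a period into the implied compound
# annual growth rate (CAGR). E.g. quadrupling in 10 years implies
# roughly 15%/yr; doubling in 10 years implies roughly 7%/yr.


def implied_cagr(multiple: float, years: int) -> float:
    """Annual growth rate needed to reach `multiple` x in `years` years."""
    return multiple ** (1 / years) - 1


if __name__ == "__main__":
    print(f"4x in 10 years -> {implied_cagr(4, 10):.1%}/yr")
    print(f"2x in 10 years -> {implied_cagr(2, 10):.1%}/yr")
```

Even the more conservative doubling scenario means capacity planning has to absorb sustained ~7% annual growth, which is exactly why fixed fifteen-year ownership plans struggle.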

So the simple answer is Yes and No

Mobility will definitely impact your data center strategy, but not in the way many are suggesting. Our best option for dealing with the new demands of mobility is to think flexibility. Building your data centers in one to three fixed locations based on "expected" future growth of offices and demand can't possibly account for the changes in use characteristics that mobility, cloud, and IoT will bring. Also, by keeping all your data centers internal, you can't leverage the ecosystems or network buying power of a strong partner. Apply the lessons you're learning about Agile IT to your data center capacity ownership strategy. Leverage professional external resources for ecosystem, capacity, location, and network, while focusing your internal staff on innovation. Now you can go back to having nightmares about your future data center calling you at home and asking, "Why don't you have your phone with you, Dave? I might need you."