Cloud Adoption Trends – A Response to “What Economists Can Teach Us” by Bernard Golden

05 May

In the CIO.com article "What Economists Can Teach Us About Cloud Computing," Bernard Golden paints a fairly well-reasoned picture of the potential future for public cloud versus internally owned IT infrastructure. While I agree with many points in his article, I feel there are a few small but critical considerations that still need to be made.

The age-old argument of Public Cloud vs. the other stuff

Bernard and I have had many cloud conversations over the years, but one in particular, back in 2011 at a coffee shop in San Carlos, California, is most germane to the above article. Specifically, we talked about whether public cloud would kill any notion of a potential need for, or benefit from, private cloud. Bernard felt at the time that public cloud demonstrated an ease of accessibility and use that far exceeded anything available to internal IT, and as such would gain and keep the upper hand, effectively killing private cloud before it could even start. Private cloud, such as it was, was too hard to build and too far behind in its feature set compared to public cloud offerings like AWS. My response to him then was as follows: right now, the CEOs of every major hardware vendor and many software vendors are (virtually) sitting around a table asking each other, "How do we fight back against public cloud? The only way is to stay relevant to our customers. The best way to stay relevant is to create solutions that allow internal IT to quickly and cost-effectively deploy internal IT infrastructure in a way that closely mirrors public cloud usability."

The Economists' Theories and Cloud or IT Use

I'm a big fan of the Jevons Paradox, but I only discovered his theories after having written a piece in 2010 on how cloud computing would dramatically increase the creation and use of new IT solutions. I don't question Jevons's theories, because I believe I proved them on my own before discovering his work. I also agree with the reference to Ronald Coase. I'm a huge believer in capturing all costs associated with IT projects, and startup time has to be a factor. So from the economist's perspective, I agree with what Bernard has written. What I disagree with is what I would characterize as an oversimplification, or maybe just an oversight, about how IT is used in the big puzzle of creating opportunities with customers.

Why I believe The Private vs. Public Cloud debate is a false one (one of my blogs)

Private Cloud is often compared unfavorably to public cloud; so unfavorably, in fact, that you can still hear some people say "there is no such thing as private cloud." Boy, how would you feel if someone said, "I think so little of you, I don't even think you exist"? I'm guessing it doesn't get much harsher than that. For-and-against arguments notwithstanding, I'm an advocate for the best use of IT: private, public, mainframe, whatever. All that matters is whether it is the right solution for the company and the opportunity at the time.

The assumptions many buyers make about public vs. private cloud revolve around a combination of factors ranging from scalability to ease of use and/or ease of adoption. The problem with these assumptions is that they are almost always based on a flawed premise: that comparing what you have today to public cloud is what should drive your answer. In other words, if you compare the legacy environment you have today, and how much it costs to run, replace, and provision it, to a public cloud, there's no doubt that public wins. The problem with that comparison is that it's a false one. The correct comparison is one that envisions a future state and compares the cost and usability factors of that future state with the long-term use of public cloud. It's also true that real (yes, I said "real") private cloud is still a baby of 12-18 months; it's difficult to compare that against the best public cloud infrastructure, which is 6-8 years old. In other words, if you're going to make an assumption about the benefits of public cloud vs. private cloud, you need to get as close to apples-to-apples as you can. Private cloud (per my comment to Bernard in our coffee shop conversation) will be made easier and more cost-effective to adopt because there are too many vendors who can't afford to let anything else happen.

Assuming you’ve made the decision to move to private cloud

How do you compare your use and costs in your private cloud against public cloud offerings? As Bernard pointed out, there is a "cost" and "time" associated with acquisition that have to be accommodated, not just the cost of ongoing ownership. I agree with Bernard's assertion in theory, but in practice it doesn't actually work that way once you've made your private cloud investment.

A few very simple scenarios comparing acquisition of compute resources against public cloud

  • No Private Cloud (legacy infrastructure): No comparison. Public cloud wins pretty much every day.
  • Private Cloud Just Acquired (1st use case): No comparison. Public cloud wins. Just think about it: you buy some converged infrastructure and train/re-org the staff and then provide services, or you could go out and buy the services from public cloud.
  • Private Cloud (2nd use case and beyond): Very strong comparison. You now have infrastructure and services up and running. Each new requirement for an app can be handled quickly and cost-effectively against a pro-rated investment. With each new use case, your comparison, or even cost advantage, against public cloud gets stronger.

The above are very simplistic but real-world considerations for helping to make a decision on whether to own or lease your compute capacity. The above scenarios are not an end-all comparison either; there are dozens of factors (see here) that need to be considered. All I'm really trying to get across is that you can't compare a one-time use case against long-term ownership; the sketch below makes the pro-rating point concrete. All too often, when we make comparisons, we attempt to convince the buyer that the difficulty level will remain the same with each new request, and in the case of private cloud that won't be true, unless you're planning on building an individual private cloud for each application you own ("that wouldn't be prudent").
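
To illustrate, here is a minimal sketch of the pro-rating effect. Every number in it is invented for the example; it is not a real TCO model, just the arithmetic behind the third scenario above:

```python
# Illustrative only: invented numbers, not a real TCO model.
PRIVATE_CLOUD_CAPEX = 1_000_000  # one-time build-out: hardware, training, re-org
PRIVATE_RUN_COST = 20_000        # assumed incremental run cost per use case
PUBLIC_CLOUD_COST = 120_000      # assumed public cloud cost per use case

for use_cases in (1, 2, 5, 10, 20):
    # Pro-rate the fixed investment across every use case it now serves.
    private_per_case = PRIVATE_CLOUD_CAPEX / use_cases + PRIVATE_RUN_COST
    winner = "private" if private_per_case < PUBLIC_CLOUD_COST else "public"
    print(f"{use_cases:>2} use cases: private ${private_per_case:,.0f} "
          f"vs public ${PUBLIC_CLOUD_COST:,} per use case -> {winner} wins")
```

With one use case the private option costs $1,020,000 against $120,000 for public; by the tenth use case it is down to $120,000, and by the twentieth to $70,000. The crossover point moves with the inputs, which is exactly why the comparison has to be made against long-term ownership rather than the first use case.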

In Summary

I'm not a proponent of public, private, or legacy; I'm a proponent of smart purchasing and the best fit for the job, the company, the team, and the timing. So think carefully when reading about what the right answer is, and instead put it in the context of "how will we own this?" and "what does done look like?" Bernard, I think it's time for another coffee shop talk.

 

Why we Value IT Incorrectly – Innovation vs. Cost Center

23 Feb

Possibly the worst measure of IT ever made is the "What percentage of revenue is IT?" measure. That's right, I said it: it's the single worst measure ever utilized.

Why do we use this measure?

My belief is that businesses use the percent-of-revenue measure because it's easy and it allows them to compare themselves to their peers. We also use this measure because too many of us still think of IT as a mere cost center instead of a potential innovation center and revenue driver. What if you could assign a measure of value to every IT dollar spent? Let's say that for every dollar spent on IT, the business brought in an additional five dollars in revenue. Would IT still be "too expensive" if it were 10% of revenue instead of 4%?
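
As a purely illustrative sketch (the figures below are invented to match the example above, not drawn from any real company), here is how differently the two measures read:

```python
# Invented figures: contrast "IT as % of revenue" with "revenue per IT dollar".
def percent_of_revenue(it_spend: float, revenue: float) -> float:
    return 100 * it_spend / revenue

revenue = 100_000_000
lean_it = 4_000_000       # 4% of revenue, managed purely as a cost center
invested_it = 10_000_000  # 10% of revenue, assumed to return $5 per $1 spent

print(f"Lean IT: {percent_of_revenue(lean_it, revenue):.0f}% of revenue")
print(f"Invested IT: {percent_of_revenue(invested_it, revenue):.0f}% of revenue, "
      f"generating ${5 * invested_it:,} in additional revenue")
```

On the percent-of-revenue measure, the second IT organization looks 2.5x "too expensive"; on a value measure, it is the engine behind $50 million of new revenue.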

It’s an old story that won’t go away

In March 2012, the MIT Sloan Management Review published research suggesting that companies' investments in IT increase profitability more than their investments in advertising or R&D do. That's right: IT increases profitability more than advertising and R&D. I thought it was worth repeating. Don't get me wrong, I think some of us are starting to say the right things. However, I still see, all too often, purchase decisions by IT leaders that rest on old assumptions of value.

How do we break the cycle?

For years, a wide range of options has been suggested to help IT break the cycle. Simple things include better marketing of IT success stories and improving your ability to sell ideas and projects. What's not so simple is changing the mindset of the C-Suite and providing real data on the success of projects. In my many, many years of delivering IT projects (most of which required an ROI/TCO study before being initiated), not once did anyone ask me for ROI/TCO results at any point after project completion. I'm sure there are some projects where the executive team has asked for "proven" ROI and TCO results, but I'm guessing they are few and far between.

Why isn't IT expected to live up to a higher standard in demonstrating the benefits of projects and being more innovative in applying IT to the task of creating business value? I don't think IT is held to that higher standard because, again, the average C-Suite doesn't expect anything from IT other than cost and risk management.

Strategy for making change

  • The C-Suite needs to do a better job of hiring for innovation and business leadership, and put less focus on table stakes like resiliency and cost containment
  • The C-Suite should further foster the right attitude in IT by following their words with action and providing IT with incentives for innovation and business acceleration
  • The CIO should be hired based on four (minimum) high-level requirements:
  1. Strong leadership skills
  2. A demonstrated ability to build and successfully lead innovation-oriented teams
  3. A history of working closely with each line of business to understand their day-to-day functions and learn how and where IT solutions can provide new opportunity, and of using that exposure to identify and remove IT solutions that impede performance ("think embedded IT staff")
  4. No fear in the face of difficult decisions, like pulling the plug on a project after millions have been spent or retiring a solution that isn't providing appropriate payback

Innovation is like a Viral Video

You can't just demand innovation, any more than you can demand that the next company video go viral. However, if you don't have anyone working on videos, you can bet you'll never have one go viral; with innovation, it's the same. Giving your IT team the tools, commitment, and resources required to take risks is a minimum requirement. Understanding and accepting that you might get five or more failures for every major success is key. Think of innovation like an investor thinks of startups: for every 8-10 invested in, one is a blockbuster. In IT, one blockbuster new project can mean the difference between maintaining the status quo and dramatically accelerating the growth of your business.

Go Forth and Foster Innovation!

Plant the seeds of innovation, hire the right people, accept some level of risk, don't be afraid to make tough decisions in real time, and maximize the opportunity that is locked up in information technology. This doesn't mean you do everything internally, and it doesn't mean the opposite. What it means is that your teams learn to accept the notion that innovative thinking can come from anywhere; how you leverage it is what really matters. We need to stop thinking about IT from a cost perspective alone and start considering it an engine of innovation; otherwise, we will be left behind by smarter competitors.

Additional Topical Links:

Future Proofing your IT Decisions

Data Center as Growth Enabler – Can’t just think of it as a room for computers

There’s more to a great data center than meets the average eye!

Blog Professionally Copy Edited by Kestine Thiele (@ImKestinaMarie)

 

Data Center Trends Part III of IV – The Importance of the Ecosystem

09 Dec

Solving business problems in real time with a myriad of choices, all vetted, secure, on premise, and independent; what CIO wouldn't want or need that?

Not “Same as it ever was”

CIOs today aren't moving out of their internal data centers just to move into another data center that looks and feels the same. Generally speaking, the original assumptions around why you would utilize colocation are that you want to avoid the capital outlay and that you want flexibility in managing your capacity. The CIOs I speak with today are moving from internal facilities for many of the old reasons, but also for one additional critical factor: a strong ecosystem they can leverage.

The Data Center is Dying

The difference between being in an internal data center and being a part of a large diverse independent ecosystem is simple. You can’t expect five different cloud vendors to build infrastructure in or right next to your data center and you most definitely can’t expect fifty. You can’t assume that five or more storage-as-a-service providers will build options for you in your data center any more than you could assume big data-as-a-service offerings will take up residence there.  The reason you can’t expect the aforementioned services to materialize in your internal data center is simple—there isn’t the scale.

When independent choice, agility, security and cost matter

A strong, data-center-oriented technology ecosystem is a powerful enabler for your business agility. It can be the difference between one option and twenty, between having cost competitiveness and not having it, and between having a solution up and running in hours vs. days vs. weeks or longer. Why settle for a pragmatic, "good-enough" solution when you can make a selection from a menu of options? In fact, you can select many options in order to most effectively solve specific business performance, compliance, and cost needs. At a minimum, having choice helps to future-proof your decision.

The next time you're looking

Consider the capabilities that agility and independence at all levels of your infrastructure, from hands-on support to connectivity across the world to IaaS cloud options, will bring to your business. If a colocation company can't offer you these options, you're potentially hobbling your business.

 

Professionally Copy Edited by @imkestinamarie (Kestine Thiele)

 

Data Center Trends Part II of IV – A Future-proofed Decision

11 Nov

Data center decisions are almost always choices that stick with an organization for years. Few companies that aren't shrinking are ever in any one data center facility for less than five years, and most are in them for ten to twenty. Considering the current rate of change in IT solutions and service offerings, it seems ludicrous to accept the idea that you can make a data center buy-or-lease decision without future-proofing it. You're assuming the data center will offer everything needed through all expected and unexpected change during a stay of ten years or longer. As I said, "it seems ludicrous," yet it happens dozens of times every day.

The data center is the beating heart of your organization and its ability to function in the digital age. Whether it's your internal data center, capacity you lease, or a hosting provider, the requirement is the same: your data center must be adaptable and ready for what's next. The components of a Data Center Selection, Use and Ownership Strategy (yes, I've written that document before) must include the following and more:

Connectivity (in spades)

  • See "Consider Connectivity Compulsory"

Ecosystem of Independent Providers

 

Capacity for Growth

  • Power, space, cooling in an expanding campus environment
  • Office capacity for the potential need of colocated staff

 

Adaptable Capability

 

Leadership in Industry Participation

  • New services in the ecosystem
  • Certifications and the ability to obtain almost any newly defined certs
  • Data center designs that clearly demonstrate a focus on continuous improvement

 

Sustainability – Green

  • Facilities and business strategy that can help your business realize internal goals for corporate sustainability now and into the future

 

Buying Power

  • The ability to leverage a larger ecosystem of customers and partners that can act together to buy products and services at lower cost, with better contract language

 

Etc., etc.

If the above considerations aren't in your RFP for the next data center(s) you plan to obtain capacity in, you're doing your business a disservice. If, as the internal provider, we can't be sure of the future three years out, the least we can do is provide a platform that we're confident will be able to adapt to our needs. Future-proofed!

 

Data Center Trends – Consider Connectivity Compulsory

03 Nov

Where’s the Data in the Data Center selection process? 

#1 of 4 in the series Data Center Trends

With the data center being recognized more and more as a critical piece of the infrastructure enabling modern IT, it seems time that we start considering these facilities as more than just special buildings. At its core, a data center is a facility for housing compute, storage, and network infrastructure, which is then used to run all the applications that help your business function. However, limiting your thinking to it being a special "building" for your IT solutions misses several critical factors.

Connectivity as a Critical Data Center Capacity or Location Selection Factor

Cisco says that cloud traffic alone is expected to grow at a CAGR of 35% through 2017. The 35% a year number is bad enough, but that estimate doesn't take into account IoT, mobility, or generic data growth in legacy environments. So, what do these potential data growth and mobility requirements mean? They mean you can't begin to consider your colocation or site location options without taking connectivity into account. If you have fewer than five (5) providers, you likely won't have true price contestability. If you're the only buyer, you can't compete with ecosystems that can buy as a group. If you only have a few network providers to select from, you're more likely to make deals with the devil (pragmatic choices) on network design and performance. The network can make or break the deal relative to where you've put your IT gear; it's just that simple. This isn't an abstract discussion; it is in fact a cost-of-doing-business and an enablement discussion. Depending on size, network savings for many businesses can be worth millions, and from an enablement perspective, the right options can mean improved contract language, faster time to market, and happier customers through improved performance and capacity. You may also find that improved performance and capacity lead to new and improved customer or employee experiences.
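
For a quick sense of what that rate compounds to, here's a hedged, back-of-the-envelope sketch using only the 35% CAGR figure quoted above (the starting volume is normalized, not a real measurement):

```python
# Compound growth at a 35% CAGR, normalized to a starting volume of 1.0.
cagr = 0.35
for year in range(1, 6):
    print(f"Year {year}: {(1 + cagr) ** year:.2f}x starting traffic")
# By year 4 traffic is ~3.3x; by year 5, ~4.5x. Any connectivity plan that
# assumes today's bandwidth needs is obsolete before the ink dries.
```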

We need to Stop!

We can't continue thinking of the data center as an independent, siloed function; it is the heart of the body and needs to be treated as such.

 

Break the Cycle: How a CIO Shift to Strategic Management can Eliminate IT Hero and Firefighter Mentality

24 Sep

The hero culture is alive and well in IT. The heroes are sometimes known as the "firefighters." These are the people who come in at all hours of the day and night to put water on the latest IT fire. In many organizations, a "good" firefighter is admired and appreciated more than a good developer or other IT contributor. Why shouldn't they be admired? They come in at 2:00 AM on Sunday and resolve a major failure that was interrupting business. What's not to like?

Well, let me first say that I don't have anything against the firefighter. They are very martyr-like, and it can be easy to appreciate that quality, as long as it doesn't come with a bomb vest. What I don't like is the culture of firefighting that we as IT leaders perpetuate. The real issue is whether or not someone in the organization has the courage to get off the exercise wheel for a minute and say, "This has got to stop." As a general rule, those closest to the issue (fire is a great example) aren't in the best position to determine how to avoid similar issues in the future.

The following is an almost verbatim conversation I had with a Global Director of Infrastructure I worked with for a short time:

Me: We’re not making progress on our project to integrate NewCo. We’ve got to find a way to reduce the level of interrupt-related work the team is tasked to perform.

Director of Global Infrastructure: I'm sorry, but I don't have time to deal with this right now; we're all too busy fighting fires.

Do you see any problems with this conversation? How will you ever have time to do productive work if all you're ever doing is fixing messes that were created because you didn't have time to do productive work?

It may sound odd, but sometimes laziness has its place in the business. Some of the best IT folks I know work very hard, but they don’t realize they’re working hard because the work they’re doing is helping them avoid work they don’t like doing. Every IT organization needs a few leaders and contributors who can look at the job at hand and say, “How can I fix this so I never have to do it again?”

If you celebrate the contributions of firefighter martyrs, you are, to some extent, rewarding bad behavior. There may be a wide range of reasons for the fires in the first place, but you certainly don't want to make it worse by establishing the wrong success motivators.

In closing, I suggest stepping back from the fire, no matter how fierce the heat, and looking for the true organizational and technological root causes. I believe you'll find cooperation from your customers as well. Get them involved by explaining your "root cause resolution" plans, and explain how, in the short term, they might see some delays in typical response times. Once you've found the root causes and fixed them, you'll be in a much stronger position to bring real value to the business, instilling pride in your team and increasing job satisfaction and, therefore, employee retention.

So the next time you see a fire you can get out a stick and a marshmallow and put away the fire extinguisher.

Professionally Copy Edited by Kestine Thiele (@imkestinamarie)

 

How the Technology Ecosystem Puts Power Back in the Hands of the CIO

15 Sep

The Good: The market has never been flusher with interesting and important tech for the CIO to choose from.

The Bad: The market has never been flusher with interesting and important tech for the CIO to choose from.

The fact that the modern CIO has so much to choose from often makes his/her job increasingly difficult. Think about how many of us struggle to choose when confronted with a lengthy menu at a restaurant, and that's just choosing between Kung Pao Shrimp and Mongolian Beef. Now consider that same problem where the risk is spending millions of dollars on a choice that might never work, or worse, might negatively impact business performance.

Technology solution adoption is accelerating while technology choices increase – a compounding effect

The fact is that in the technology space, there are many solutions, technologies, and service options for every opportunity you're working on. The difficulty of distinguishing one option from all the others is compounded by the question of which of these choices best fits with the choices you've made for any or all of the parallel and/or integrated systems.

The traditional approach to IT solution acquisition (especially at scale) is broken

Historically, the vast majority of mature IT organizations in need of an innovative solution would do some semblance of a Request for Proposal (RFP). This RFP would go out to the usual suspects: usually 3-5 partners/vendors. Once the responses came in, there would be some work with a Proof of Concept (POC). Sometimes the POC would be done before the RFP was created, and sometimes only with the top RFP respondents. Unfortunately, there are problems with the traditional approach.

Problems with the traditional approach:

Time – the investment in time to research multiple products has a direct impact on the value equation for adoption. In case you haven't heard me say it before, fast following is not the way to win the race; and let's face it, business success is a race. It's also true that the longer you wait to adopt a new solution, the worse your ROI becomes. In other words, you're effectively bleeding cash (opportunity) every day you waste during the adoption of a new solution (the sketch after this list puts a rough number on it).

Acquisition Team – the folks involved in making the acquisition are often not appropriately equipped to determine the qualifications of complex and rapidly changing solutions. Let's face it, translating a new technology into your existing IT architecture is hard enough, but what about how it fits with corporate strategy? Having partners and/or tools to facilitate this process is a must for most organizations.

Validation of Capacity & Deliverability to Requirements – in today's large-scale IT environments, you can be making choices that might saddle your company with a solution that costs much more than it needs to, or that has a service limitation that wasn't appropriately considered. The CFO doesn't want to hear about how cheap a solution is; they want to know how much it will cost every month. Business leadership wants to know that it accelerates the business and won't ever be in the way of an opportunity.
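
To put a number on the "bleeding cash" point in the Time problem above, here is a minimal sketch of that opportunity cost. All the figures are invented for illustration; the point is the shape of the arithmetic, not the values:

```python
# Invented numbers: the cost of waiting is the benefit you never collect.
monthly_benefit = 50_000   # assumed value the new solution delivers per month
slow_eval_months = 6       # a traditional RFP/POC cycle
fast_eval_months = 1       # evaluation with solutions already in easy reach

forgone = monthly_benefit * (slow_eval_months - fast_eval_months)
print(f"Opportunity cost of the slower evaluation: ${forgone:,}")  # $250,000
```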

How can you mitigate your risk during the acquisition process?

The modern IT environment requires easy access to new solutions for a number of reasons, not the least of which is agility. However, as indicated above, the traditional approach is the enemy of agility. So, if you need agility, and you need to manage your costs and solution requirements, what can you do? You must have access, access, access, and then maybe a little more access.

Access to solutions in a diverse and in-proximity environment is a foundational must. Without proximity, you can't possibly adopt, change, increase, or decrease your selections as quickly, safely, or cost-effectively. In the modern data center, you should be able to attach to any one of a handful of different providers in any solution space, all in the same building. You should be able to build and destroy network connections to different providers in real time, with little or no cost impact, and do it securely. You also need to know that any data you move or grant access to will be protected, and that you can quickly recover or update it when necessary. Just as importantly, you need to be able to plan your acquisition to fit the way you do business, with the appropriate balance of cost, service comparison, performance review, and, finally, contract language.

As I said earlier, most CFOs will ask several questions about your planned purchase, but it’s almost always guaranteed that one of those questions will be, “What’s our cost per month?”  If your answer is, “It depends,” then you likely have a non-starter.

The problems associated with the traditional IT acquisition strategy are a big part of why Rob Roy decided to build a powerful, independent IT ecosystem, and why we brought 6fusion in to improve access. When you marry the scale and partner ecosystem that come with being in a SUPERNAP to the strengths of the 6fusion solution, you begin to recognize how important this relationship will become for many businesses.

Whether you need to buy at scale, put in a hedge on potential future demand, and/or want clarity on how what you bought is being used, the 6fusion tool set will be a major benefit.

Take the time to save some time

Take a few hours to understand how the acquisition of new technology services can be improved by being part of a strong technology ecosystem like the SUPERNAPs, while utilizing tools that streamline and strengthen your ability to make the best choices for your business. If you do, you won't just save time; you'll reduce your risk and better manage your cost, while keeping both the CFO and your executive officers happy.

 

Data Center Efficiency – We’re Done Here!

27 Jun

In 2002, VMware server virtualization began making inroads in the data center, helping customers reduce the number of physical servers they deployed. During the same period, there began to be little one-off efforts to improve the efficiency of data centers, but there wasn't a group or common metric involved. Then in 2006, The Green Grid (TGG) was formed, and shortly thereafter they released the new data center metric called PUE (Power Usage Effectiveness). Subsequently, TGG released other useful metrics like WUE and CUE, along with the Data Center Maturity Model. The EU created the Data Center Code of Conduct, and ASHRAE began loosening its standards for humidity and temperature ranges in the data center. There has also been a boom in the use of outside air, which has had a direct impact on reducing the energy use of one of the biggest consumers (HVAC). All of the above and many more data center facility innovations occurred between 2002 and 2012. So certainly now that it's 2014, 12 years after VMware server virtualization was introduced, it must be time to claim, "We're done here!"?

We’re NOT Done, Not Even Close

In an article published on July 15, 2013, DRT discussed the results of a PUE survey of over 300 data centers. The survey results indicated an average PUE of 2.9. Yet in 2011, the average PUE reported by Uptime was 1.89. Another Uptime-related survey indicates that our efforts at reducing PUE are hitting diminishing returns and may even be headed back up. Either way, the average PUE should be down to 1.5 or lower by now and continuing to drop, but it isn't.
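
For readers who don't live in these numbers every day: PUE is simply total facility energy divided by IT equipment energy, so 1.0 is perfect and everything above it is overhead. A minimal sketch (the kWh figures are illustrative, chosen to reproduce the PUE values discussed in this post):

```python
# PUE = total facility energy / IT equipment energy (The Green Grid metric).
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

it_load = 1_000_000  # kWh of IT equipment energy, illustrative
for overhead in (1_900_000, 890_000, 500_000, 180_000):
    print(f"PUE {pue(it_load + overhead, it_load):.2f}: {overhead:,} kWh of "
          f"cooling/power-distribution overhead per {it_load:,} kWh of IT load")
```

Run it and the four lines correspond to the 2.9 survey average, the 1.89 Uptime figure, the 1.5 target, and a 1.18 best-in-class facility: at a PUE of 2.9, nearly two watts are burned on overhead for every watt of useful IT work.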

 

[Chart: average PUE survey results. Image from Enterprise Tech Data Center Edition]

 

Regardless of which set of data you believe, the sad fact is that as an industry, we've made very little real progress over the last 12 years. Sure, there is a lot of positive noise from a few players like Google, Yahoo, Microsoft, HP, and several other Fortune 100 companies (the SUPERNAP runs at 1.18). What's missing is real progress in the other 90% of our data centers and engineering labs (AKA data centers).

Without Improvement we will get Regulated or Embarrassed into Change

We can't hide forever. Greenpeace has already targeted the big guys, so it likely won't be long before they realize the potential that's still locked in the rest of the data center capacity around the world. If it's not pressure from Greenpeace, then it will likely be pressure from your government. In the UK, data centers are already facing carbon emission taxation; how long before other countries follow suit?

The point is simple: running a data center effectively and efficiently takes dedication, persistence, a cross-functional organization, and a specific set of skills and vision.

To the above point, most companies (90%+) don't run their data centers effectively or efficiently. It's not as if great tools, training, and resources don't exist; quite the contrary. We have tools available today that I could have only dreamed of in 2002. We have Building Management Systems (BMS), Data Center Infrastructure Management (DCIM) solutions, and power readings at the PDU. There are wired and wireless sensors for everything from server location to outside air quality and everything in between. We have the use of outside air, and we can raise chilled water temperatures, along with 100 other options that are all easily available to us on the internet, at conferences, or via great organizations like TGG, Data Center Pulse, Open Data Center, Open Compute, and ASHRAE. So, why? Why are we still running our data centers like 17-year-olds with their first car?

In order to bring some outside opinion into this topic, I asked a simple question of the Twitterverse:

“Have we made the progress on Data Center Efficiency that we should have over the last 12 yrs?”

Here are a few of the responses:

[Embedded Twitter responses from the original post]

Why aren’t we doing a better job?

It's not the fault of any one role or person in a company as much as it's a generalized problem of assumptions around the meaning of "data center ownership." The single biggest inhibitor to greater success in the data center space is the lack of strong organizational support. In the vast majority of companies, the data center manager is a "space" manager, not the owner of a critical function and resource. A "space" manager worries about whether the DC room access is secure, whether there's enough power to the new racks, and whether or not there are hot spots. All of these "space" management functions are necessary, but they should be part of a larger responsibility of "ownership." We have to face the facts: whether we're tree huggers who care about the future or not, if we don't address the gaping ozone hole that most data centers represent, someone else eventually will.

Without the role of Data Center Owner, we're unlikely to make real headway in the majority of data centers anytime soon. The issue is one of focus, risk, and reward. The current "Data Center Manager" isn't really an owner; s/he's more of a "Room Custodian" with a different set of skills. Until the data center is viewed and owned as a system (see the Data Center Stack), we will continue to focus on point solutions to specific technical and resource issues instead of looking at the larger picture. There needs to be a single throat to choke in the organization. Consider the situation where a company needs to build a new 100-million-dollar manufacturing plant. Is there any doubt the head of manufacturing would be the sole throat to choke? If the CFO or another exec wanted information on the performance of that expensive facility, do you think they'd call four or five different people? The simple answer is no, they wouldn't; yet that's exactly what we do with our data center resources today.

A Challenge

I challenge all data center operators to insist on an organizational design that supports the combination of roles and functions needed for successful data center operations and management. Yes, this means the proverbial Facilities vs. IT battle must be fought, and it means there will be training required for the individual whose role is elevated to "Data Center Owner." The Data Center Owner would be responsible for all aspects of the data center, from real estate to generator selection, to the impact of changing technologies, carbon emissions, and long-term planning. This is no small job, and for most of us who have had a similar role, it's one that is learned through osmosis over the course of a career. There isn't a "data center owner" class you can take.

I also challenge all data center operators to hold their partners to a higher standard of reporting, efficiency and sustainability. While managing a partner is much easier than building and operating your own facilities, it doesn’t change the fact that you still have to take responsibility for security, availability, and efficiency of your IT environments.

A Positive Industry Trend

More and more CEOs and CIOs are seeing that owning data centers is often more of an albatross than an advantage. I see this as positive because I believe a service provider is more likely to be running an efficient data center. Does that mean there's no need for any internal data centers? Maybe, but it will often depend on the services or products the company offers. What's more important is how getting rid of the data center headache might be an opportunity to position your company more effectively for the future. There are so many questions that have no good answers right now: How much public vs. private cloud will I be using? What will globalization of my business mean for data center requirements? How will increased regulations and reporting affect my data center capacity? Will modern equipment actually work in my legacy data center? I could go on and on with these questions, but the point is clear. There isn't a crystal ball telling you what to build, how to build, and where to build; and guessing with your company's hard-earned CapEx and corporate reputation seems like a bad strategy.

Yes, I’m Biased

There's no way I can deny that I'm biased. However, those of you who know my work with organizations like TGG and Data Center Pulse know that I've been pushing for industry improvements for years while also making improvements in the environments I was responsible for. I will also say that I jumped from "internal IT" to the vendor side because I didn't see the commitment to data center excellence that I see where I am today. Lastly, you don't have to take my word for it; you're likely living this problem today, and if you aren't, you only have to ask a few peers and they will corroborate what I'm saying.

It’s Time

It’s time to do the right thing for your company by building a true data center ownership organization. It’s time to consider leveraging the appropriate partners to help you improve while also better positioning your company for success in the modern world of agile IT.

Related Blogs: 

http://datacenterpulse.org/blogs/mark.thiele/data_center_infrastructure_management_wheres_beef_0

http://www.switchscribe.com/data-centers-are-treated-with-less-care-than-a-batch-of-beef-stew/

http://datacenterpulse.org/blogs/mark.thiele/data_center_surprise_data_center_cost_ownership_and_budget_planning

http://www.switchscribe.com/the-biggest-impact-on-it-firefighting-business-agility-data-centers/

http://datacenterpulse.org/blogs/mark.thiele/single_owner_company_data_center_needed

Professionally Copy Edited by Kestine Thiele (@imkestinamarie)

 

Data Centers are treated with less care than a batch of Beef Stew?

29 Apr

Five Critical Steps in Achieving Greater Energy Efficiency in the Data Center


Achieving greater energy efficiency is a fairly common goal, and the subject is certainly well covered in the press and by industry thought leaders. However, I believe that in most enterprises the focus on energy efficiency isn't system-oriented or holistic. In fact, I'm here to argue that most data center operators pay less attention to the outcome of changes than a chef does when adding ingredients or spices to a beef stew.

1. Treating the Data Center as a System

The idea of treating the data center as a system isn't new. In fact, Data Center Pulse published the "Data Center Stack" over four years ago, but the idea still hasn't taken hold in most businesses. Using a systems approach seems harder than the alternative. The assumption is, "If I use the systems approach, I'll have to communicate, investigate, evaluate, etc., before I make a change or determine the scope of an opportunity." That assumption is correct; however, just like effective change management and strong process, following a systems approach will likely lead to immediate and lasting benefits in efficiency and risk reduction (putting out fires before they start).

Using the systems approach will allow you to adopt strategies that have a holistic and lasting impact on the data center system. Without a systems focus, you are just as likely to introduce inefficiency as you are to make a positive change. A great example of a change that appears obvious on the surface is "raising the temperature." While it's very often true that running at a higher temperature will yield some efficiency gains, let's look under the covers. Increasing the temperature because the servers can handle higher heat makes sense, as it means the HVAC works less, which in turn reduces your power use. On the other hand, depending on your server mix and the consistency of your environmentals, you might actually be replacing one problem with another. Yes, you're saving energy on HVAC, but your ICT gear might be using more energy to compensate for the higher heat. You're also introducing a potential risk: should your HVAC fail, you won't have any stored cool air to keep the servers from overheating and shutting down. This HVAC scenario is but one example of how a seemingly obvious "positive" change might actually do more harm than good. Using the Stack to evaluate the full system impact of changes is much more likely to lead you to changes that actually reduce overall energy use without introducing additional risk.

2. Embedding Rewards

We all have a specific focus in mind when we're working on something. We could be looking to make it faster or bigger. Maybe we want to reduce the cost of ownership or simplify how the customer uses a solution, but we don't always look at how the solution might affect our use of power or energy. In order for power use to become part of everything we do, our leaders must ingrain it. We can't act wastefully and talk sustainability. Like customer service, it needs to become part of who you are. Only then do efficiency and sustainability become ingrained in the activities of the entire team and everything they do.

There are a number of ways leadership can help introduce or reinforce ideas and/or new areas of focus. Demonstrating the importance through action is the best way for a leader to communicate, regardless of whether the subject is ethics, customer service, or sustainability. While continuing to demonstrate it, the leader must also speak about it regularly. Lastly, team members will truly internalize a new objective, and the goals set against it, when it becomes part of their reward system.

A caveat on setting new goals for your team: don't overemphasize the importance of efficiency or power savings over delivering a better product or service to the business. If your team believes that the one thing that will get them noticed is saving energy, they will naturally focus on that at the expense of other activities.

3. Keeping Your Eye on the Ball

Regardless of the hype or reality of any specific new focus area, there is no changing the fact that for the vast majority of us, "saving energy" isn't in our job title. Our company isn't a "saving energy for you" company; therefore, keeping perspective on the things that actually add new value to the business is critical. In other words, be careful what you ask for, and be cognizant of where and how your team members focus their attention. Saving energy or writing efficient code is rarely the primary deliverable for a new application or service, so think requirements and innovation first, and power savings second or third.

4. Think Sustainability

Most of us think of sustainability as saving a few gallons of gas or recycling cans and bottles. When it comes to data centers, there is much, much more to the story. Sustainability is directly associated with conserving (reducing cost or being greener, take your pick) and continuity (as in business continuity). When you plan sustainably, you are much more likely to build or use solutions that will stand the test of time, while also helping to maintain or improve your business's corporate image.

Non-Traditional Examples of Data Center Sustainability:

You should build or lease facilities that can handle higher power density per square foot. The fewer buildings you have to build or lease the better, as it’s both green and financially sustainable for your business.

PUE (Power Usage Effectiveness), CUE (Carbon Usage Effectiveness), and WUE (Water Usage Effectiveness) are all great metrics for helping to drive the right behavior. Using the EU Data Center Code of Conduct or The Green Grid Data Center Maturity Model are also great tools to leverage.

Site location has a huge impact as well. Sustainability applies to your ability to maintain the flow of resources required to continue operations. If you build or buy in a place that needs water but might lose access or where there isn’t a good pool of future employees to pick from, you’re running a risk.

5. Build Power Savings into Your Designs and Purchasing Processes

While I usually don't suggest building your own facility, I do highly recommend that you work with partners who have demonstrated, with verifiable metrics, an excellent track record of energy management and efficiency of use. If you are going to build, there's a long litany of considerations you must weigh to ensure the entire data center system is designed around efficient use of resources, especially power. Keep in mind that every resource you use has an energy and/or water factor associated with it. When you take that energy and water factor into account, you'll understand the importance of efficiency and its associated impact on sustainability.

Purchasing should also be involved in your strategy to become more of an energy sipper than a guzzler. Purchasing doesn’t always have visibility into the underlying drivers for one product or service selection over another. Helping them understand the value of the entire lifecycle and supply chain impact of each product or service choice will help them more effectively prioritize decisions against factors other than price.

It’s not just about “a” technology

It would be nice if we could just buy cloud or lease data center space and assume our work was done relative to gaining more efficiency and becoming a more responsible corporate citizen, but we can't. It takes much more than a focus on a specific service or technology; it takes a cradle-to-grave, systems-oriented view of the entire IT/data center envelope.

Regardless of whether it’s Beef Stew or a Data Center

Whether making a batch of stew or operating a data center, the fact is, you can't just add things to the recipe or change it without understanding the potential outcome. In the case of beef stew, you actually have considerably more leeway than you do with a data center, but it's still a problem if you change ingredient quantities, add things at the wrong time, or leave something critical out. The end result is that the food won't taste right. At least with stew, you've only invested $20 and a few hours, and you can do it over again if it's not right. It's not so easy with a data center.

Professionally Copy Edited by Kestine Thiele (@Imkestinamarie)

 

The Biggest Impact on IT Firefighting & Business Agility – Data Centers

08 Apr

Protection & Value

Is it reasonable to assume that if you're buying a safe for all your valuables, you'd buy the one that offers the best combination of security and cost? That combination of security and cost would be driven by your budget and the value (intrinsic or sentimental) of your precious items. I would guess that the same principle of budget vs. value applies to protecting your IT environments.

So many places to look, so many holes to patch

The normal enterprise IT environment is filled with hundreds of applications. In most cases, each of these applications is supported by a unique design at the hardware and software level, if not also at the network layer. The fact that there is so much uniqueness in our IT environments means we expend inordinate amounts of time dealing with common problems in 100 unique ways. Maintaining these environments has become the bane of enterprise IT groups. By now, we've all heard the story of how keeping the lights on consumes 70-80% of the IT budget, leaving only a small amount for much-needed innovation.

Keeping the lights on has several meanings, including the mundane but critical "general maintenance and support" of each environment. However, keeping the lights on also means avoiding outages. Generically speaking, all of us in IT attempt to build and maintain environments with the highest possible availability (within budget and available resources). The problem is that we're often spending too much time fighting the fires of "maintenance & support" and not enough time solving the underlying issues that cause many of the fires, or in this case, many of the outages (same as a fire, only worse). So where should IT focus its attention relative to avoiding outages and reducing the number of fires?

If you can’t focus on everything

Few IT organizations have the luxury of being able to throw as much money or as many bodies at a problem as they'd like. So, if you have to pick which efforts will provide the most "keeping the lights on" value for the dollar, you should pick something that can be fixed for everything, and fixed once. How is that possible? How can you fix something for everything; isn't that the opposite of focusing on something important and avoiding getting lost in the fire? The simple answer is no.

Data Center as a Platform (DCaaP)

There are only a few services and solutions that affect everything in IT. One of them is the data center, and the other is people. These two areas, people and the data center, are where the most value can be gained in virtually any IT organization when it comes to reducing risk and the threat of fires. That's right, just two areas, both of which can be worked on with minimal impact to active production environments and, in most cases, without too much additional expense.

It’s well documented that humans are almost always the biggest single risk factor to the availability of systems. The more humans need to be involved, the more likely a mistake will get made and a failure will occur. We all talk about hardware failure and power failures, even viruses and software bugs, but if you want to reduce risk, you reduce the human touch factor.  The simple answer is that you need a combination of three things: good leadership, excellent process/automation, and solid training.

When it comes to owning and operating a data center as a system, things get a little more complex. Most organizations fail to treat the data center as a system and are constantly dealing with components or services independently of the platform. While there are hundreds of discrete components and services that make up a functioning data center, it is no different from how you might work on a car. You don't talk about replacing the tires on your car without considering whether they will fit on the rims, fit in the wheel well, or change the handling. The same holds true with a data center: there is virtually nothing in a data center that can be changed without having some effect on the performance of the system.

Some of the better-known discrete components of a data center include power, HVAC, security, water, and environment (i.e., humidity, cleanliness, and temperature). Any one of these areas can be a cause of fires in IT, but together they add up to high overhead if they aren't expertly managed. Often overlooked is networking/connectivity, which, while part of the data center, is also an underlying service bridging all applications. All of the aforementioned DC services, including connectivity, should be considered part of DCaaP. Imagine if, instead of buying a bunch of air conditioners, routers, UPS units, PDUs, racks, sensors, ducting, cable, ladder rack, etc., you could buy a package? This likely isn't news to anyone, but that's what a colocation provider is supposed to offer: Data Center-as-a-Platform. Not all colocation providers are created equal, so just moving from your data center to theirs won't necessarily solve any problems and in fact could cause new ones. The real opportunity is moving to a colocation provider that can improve your level of service by driving down the risk of fires to zero (0) and increasing your ability to address new opportunities.

Imagine improving customer and employee satisfaction, while giving your employer better tools, simultaneously lowering costs and reducing your carbon footprint.

Remove the issue and focus on business enabling priorities

Generally speaking, I don't believe in washing my hands of a problem; I prefer fixing it. However, building and operating data centers at the highest levels of efficiency, performance, and availability isn't for the faint of heart. As mentioned earlier, the majority of businesses don't have the organizational alignment that allows for building and running first-class data centers. It's also true that building a data center means locking in a 15-year business plan and a CapEx investment that isn't easily adjusted on the fly. In the modern IT space, that combination isn't conducive to agility.

Reduce your overhead and fire risks while improving agility and lowering costs

Find a data center partner that makes it their life's work to provide customers the equivalent of an Indy car for agility, a Tesla for efficiency and safety, and an armored car for security: the most efficient data centers, with Tier IV Gold availability ratings, combined with connectivity options beyond compare. What else can you do in IT that will reduce your overhead and put out many of your fires while also improving agility and lowering costs? Another way of looking at this opportunity is that you're helping to "future-proof" your investments. When it comes to value, what better way to obtain it than by actually improving your operational capability and agility? Wouldn't you rather focus on capability and agility first and have efficiency go up while costs go down, all as a side effect?

 

Professionally copy edited by Kestine Thiele (@imkestinamarie)