Break the Cycle: How a CIO Shift to Strategic Management Can Eliminate the IT Hero and Firefighter Mentality

24 Sep

The hero culture is alive and well in IT. Its heroes are sometimes known as the “firefighters”: the people who come in at all hours of the day and night to put water on the latest IT fire. In many organizations, a “good” firefighter is admired and appreciated more than a good developer or other IT contributor. And why shouldn’t they be admired? They come in at 2:00 AM on a Sunday and resolve a major failure that was interrupting business. What’s not to like?

Well, let me first say that I don’t have anything against the firefighter. They are very martyr-like, and it can be easy to appreciate that quality, as long as it doesn’t come with a bomb vest. What I don’t like is the culture of firefighting that we as IT leaders perpetuate. The real issue is whether someone in the organization has the courage to get off the exercise wheel for a minute and say, “This has got to stop.” As a general rule, those closest to the issue (fire is a great example) aren’t in the best position to determine how to avoid similar issues in the future.

The following is an almost verbatim conversation I had with a Director of Global Infrastructure I worked with for a short time:

Me: We’re not making progress on our project to integrate NewCo. We’ve got to find a way to reduce the level of interrupt-related work the team is tasked to perform.

Director of Global Infrastructure: I’m sorry, but I don’t have time to deal with this right now; we’re all too busy fighting fires.

Do you see any problems with this conversation? How will you ever have time to do productive work if all you’re ever doing is fixing messes that were created because you didn’t have time to do productive work?

It may sound odd, but sometimes laziness has its place in the business. Some of the best IT folks I know work very hard, but they don’t realize they’re working hard because the work they’re doing is helping them avoid work they don’t like doing. Every IT organization needs a few leaders and contributors who can look at the job at hand and say, “How can I fix this so I never have to do it again?”

If you celebrate the contributions of firefighter martyrs, you are, to some extent, rewarding bad behavior. There may be a wide range of reasons for the fires in the first place, but you certainly don’t want to make it worse by establishing the wrong success motivators.

In closing, I suggest stepping back from the fire, no matter how fierce the heat, and looking for the true organizational and technological root causes. I believe you’ll find cooperation from your customers as well. Get them involved by explaining your “root cause resolution” plans and how, in the short term, they might see some delays in typical response times. Once you’ve found the root causes and fixed them, you’ll be in a much stronger position to bring real value to the business, instilling pride in your team and increasing job satisfaction and, therefore, employee retention.

So the next time you see a fire, you can get out a stick and a marshmallow and put away the fire extinguisher.

Professionally Copy Edited by Kestine Thiele (@imkestinamarie)

 

How the Technology Ecosystem Puts Power Back in the Hands of the CIO

15 Sep

The Good: The market has never been flusher with interesting and important tech for the CIO to choose from.

The Bad: The market has never been flusher with interesting and important tech for the CIO to choose from.

The fact that the modern CIO has so much to choose from often makes his/her job increasingly difficult. Think about how many of us struggle when confronted with a lengthy menu at a restaurant; and that’s just choosing between Kung Pao Shrimp and Mongolian Beef. Now consider that same problem where the risk is spending millions of dollars on a choice that might never work, or worse, negatively impact business performance.

Technology solution adoption is accelerating while technology choices increase – a compounding effect

The fact is that in the technology space, there are many solutions, technologies, and service options for every opportunity you’re working on. The difficulty of distinguishing one option from all the others is compounded by the question of which choice best fits the choices you’ve already made for any or all of the parallel and/or integrated systems.

The traditional approach to IT solution acquisition (especially at scale) is broken

Historically, the vast majority of mature IT organizations in need of an innovative solution would run some semblance of a Request for Proposal (RFP). This RFP would go out to the usual suspects: usually 3-5 partners/vendors. Once the responses came in, there would be some Proof of Concept (POC) work. Sometimes the POC was done before the RFP was created, and sometimes only with the top RFP respondents. Unfortunately, there are problems with this traditional approach.

Problems with the traditional approach:

Time – the investment in time to research multiple products has a direct impact on the value equation for adoption. In case you haven’t heard me say it before, fast following is not the way to win the race; and let’s face it, business success is a race. It’s also true that the longer you wait to adopt a new solution, the worse your ROI becomes. In other words, you’re bleeding cash (opportunity) every day you waste during the adoption of a new solution.

Acquisition Team – the folks involved in making the acquisition are often not appropriately equipped to evaluate complex and rapidly changing solutions. Let’s face it: translating a new technology into your existing IT architecture is hard enough, but what about how it fits with corporate strategy? Having partners and/or tools to facilitate this process is a must for most organizations.

Validation of Capacity & Deliverability to Requirements – in today’s large-scale IT environments, you can make choices that saddle your company with a solution that costs much more than it needs to, or that has a service limitation that wasn’t appropriately considered. The CFO doesn’t want to hear about how cheap a solution is; they want to know how much it will cost every month. Business leadership wants to know that it accelerates the business and won’t ever stand in the way of an opportunity.
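To make the monthly-cost question concrete, here’s a minimal sketch of how you might normalize competing proposals to a single monthly figure. All numbers and vendor names are hypothetical placeholders, not real quotes:

```python
# Minimal sketch: normalize competing proposals to "cost per month."
# All figures and vendor names below are hypothetical placeholders.

def monthly_cost(upfront_capex, term_months, monthly_opex, est_monthly_overage=0.0):
    """Amortize one-time costs over the contract term, then add recurring costs."""
    return upfront_capex / term_months + monthly_opex + est_monthly_overage

proposals = {
    "Vendor A (big upfront, low run rate)": monthly_cost(600_000, 36, 8_000),
    "Vendor B (no upfront, usage-based)": monthly_cost(0, 36, 20_000, est_monthly_overage=4_000),
}

for name, cost in proposals.items():
    print(f"{name}: ${cost:,.0f}/month")
```

The point isn’t the arithmetic; it’s that every proposal, however it’s priced, should be reducible to an answer the CFO can compare.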

How can you mitigate your risk during the acquisition process?

The modern IT environment requires easy access to new solutions for a number of reasons, not the least of which is agility. However, as indicated above, the traditional approach is the enemy of agility. So, if you need agility, and you need to manage your costs and solution requirements, what can you do? You must have access, access, access, and then maybe a little more access.

Access to solutions in a diverse and in-proximity environment is a foundational must. Without proximity, you can’t adopt, change, increase, or decrease your selection choices as quickly, safely, or cost-effectively. In the modern data center, you should be able to attach to any one of a handful of different providers in any solution space, all in the same building. You should be able to build and destroy network connections to different providers in real time, securely, with little or no cost impact. You also need to know that any data you move or grant access to will be protected, and that you can quickly recover or update it when necessary. Just as importantly, you need to be able to plan your acquisition to fit the way you do business, with the appropriate balance of cost, service comparison, performance review, and contract language.

As I said earlier, most CFOs will ask several questions about your planned purchase, but it’s almost guaranteed that one of those questions will be, “What’s our cost per month?” If your answer is, “It depends,” then you likely have a non-starter.

The problems associated with the traditional IT acquisition strategy are a big part of why Rob Roy decided to build a powerful independent IT ecosystem, and why we brought 6fusion in to improve access. When you marry the scale and partner ecosystem of a SUPERNAP to the strengths of the 6fusion solution, you begin to recognize how important this relationship will become for many businesses.

Whether you need to buy at scale, hedge against potential future demand, or want clarity on how what you bought is being used, the 6fusion tool set will be a major benefit.

Take the time to save some time

Take a few hours to understand how the acquisition of new technology services can be improved by being part of a strong technology ecosystem like the SUPERNAP’s, while utilizing tools that streamline and strengthen your ability to make the best choices for your business. If you do, you won’t just save time; you’ll reduce your risk and better manage your costs, while keeping both the CFO and your executive officers happy.

 

Data Center Efficiency – We’re Done Here!

27 Jun

In 2002, VMware server virtualization began making inroads in the data center, helping customers reduce the number of physical servers they deployed. During the same period, there were little one-off efforts to improve the efficiency of data centers, but there wasn’t a group or common metric involved. Then in 2006, The Green Grid (TGG) was formed, and shortly thereafter it released a new data center metric called PUE (Power Usage Effectiveness). Subsequently, TGG released other useful metrics like WUE and CUE, along with the Data Center Maturity Model. The EU created the Data Center Code of Conduct, and ASHRAE began loosening the standards for humidity and temperature ranges in the data center. There has also been a boom in the use of outside air, which has had a direct impact on reducing the energy use of one of the biggest consumers (HVAC). All of the above and many more data center facility innovations occurred between 2002 and 2012. So surely now that it’s 2014, 12 years after VMware’s server product was introduced, it must be time to claim “We’re Done Here!”?

We’re NOT Done, Not Even Close

In an article published by DRT on July 15, 2013, they discussed the results of a PUE survey of over 300 data centers. The survey results indicated an average PUE of 2.9. Yet in 2011, the average PUE reported by Uptime was 1.89. Another Uptime-related survey indicates that our efforts at reducing PUE are hitting diminishing returns and may even be headed back up. Either way, the average PUE should be down to 1.5 or lower by now and continuing to drop, but it isn’t.
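For context, PUE is simply total facility energy divided by the energy delivered to IT equipment, so these survey averages translate directly into overhead. A quick sketch using the figures quoted in this post:

```python
# PUE = total facility energy / IT equipment energy.
# A PUE of 2.9 means 1.9 watts of overhead (cooling, power distribution,
# lighting) for every watt actually delivered to IT gear.

surveys = [
    ("DRT survey average (2013)", 2.9),
    ("Uptime average (2011)", 1.89),
    ("SUPERNAP (claimed)", 1.18),
]

for label, pue in surveys:
    overhead = pue - 1.0
    print(f"{label}: PUE {pue} -> {overhead:.2f} W of overhead per IT watt")
```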

 

[Chart: average PUE survey results. Image from Enterprise Tech Data Center Edition]

 

Regardless of which set of data you believe, the sad fact is that as an industry, we’ve made very little real progress over the last 12 years. Sure, there is a lot of positive noise from a few players, like the SUPERNAP (PUE 1.18), Google, Yahoo, Microsoft, HP, and several other Fortune 100 companies. What’s missing is real progress in the other 90% of our data centers and engineering labs (AKA data centers).

Without Improvement we will get Regulated or Embarrassed into Change

We can’t hide forever. Greenpeace has already targeted the big guys, so it likely won’t be long before they realize the potential that’s still locked in the other 95% of data center capacity around the world.  If it’s not pressure from Greenpeace, then it will likely be pressure from your government. In the UK they are already facing this issue of carbon emission taxation on Data Centers, how long before our countries follow suit.

The point is simple: running a data center effectively and efficiently takes dedication, persistence, a cross-functional organization, and a specific set of skills and vision.

To the above point, most companies (90%+) don’t run their data centers effectively or efficiently. It’s not as if there aren’t great tools, training, and resources; quite the contrary. We have tools available today that I could have only dreamed of in 2002. We have Building Management Systems (BMS), Data Center Infrastructure Management (DCIM) solutions, and power readings at the PDU. There are wired and wireless sensors for everything from server location to outside air quality and everything in between. We have the use of outside air, and we can raise chilled water temperatures, along with 100 other options that are all easily available to us on the internet, at conferences, or via great organizations like TGG, Data Center Pulse, Open Data Center, Open Compute, and ASHRAE. So, why? Why are we still running our data centers like 17-year-olds with their first car?

In order to bring some outside opinion into this topic, I asked a simple question of the Twitterverse:

“Have we made the progress on Data Center Efficiency that we should have over the last 12 yrs?”

Here are a few of the responses:

[Embedded tweet responses omitted.]

Why aren’t we doing a better job?

It’s not the fault of any one role or person in a company as much as it’s a generalized problem of assumptions around the meaning of “data center ownership.” The single biggest inhibitor to greater success in the data center space is the lack of strong organizational support. In the vast majority of companies, the data center manager is a “space” manager, not the owner of a critical function and resource. A “space” manager worries about whether the DC room access is secure, whether there’s enough power to the new racks, and whether or not there are hot spots. All of these “space” management functions are necessary, but they should be part of a larger responsibility of “ownership.” We have to face the facts: whether we’re tree huggers who care about the future or not, if we don’t address the gaping ozone hole that most data centers are, someone else eventually will.

Without the role of Data Center Owner, we’re unlikely to make real headway in the majority of data centers anytime soon. The issue is one of focus, risk, and reward. The current “Data Center Manager” isn’t really an owner; s/he’s more of a “Room Custodian” with a different set of skills. Until the data center is viewed and owned as a system (see the Data Center Stack), we will continue to focus on point solutions to specific technical and resource issues instead of looking at the larger picture. There needs to be a single throat to choke in the organization. Consider the situation where a company needs to build a new 100-million-dollar manufacturing plant. Is there any doubt the head of manufacturing would be the sole throat to choke? If the CFO or another exec wanted information on the performance of that expensive facility, do you think they’d call four or five different people? The simple answer is no, they wouldn’t; yet that’s exactly what we do with our data center resources today.

A Challenge

I challenge all data center operators to insist on an organizational design that supports the combination of roles and functions needed for successful data center operations and management. Yes, this means the proverbial Facilities vs. IT battle must be fought, and it means there will be training required for the individual whose role is elevated to “Data Center Owner.” The Data Center Owner would be responsible for all aspects of the data center, from real estate to generator selection, to the impact of changing technologies, carbon emissions, and long-term planning. This is no small job, and for most of us who have had a similar role, it’s one that is learned through osmosis over the course of a career. There isn’t a “data center owner” class you can take.

I also challenge all data center operators to hold their partners to a higher standard of reporting, efficiency, and sustainability. While managing a partner is much easier than building and operating your own facilities, it doesn’t change the fact that you still have to take responsibility for the security, availability, and efficiency of your IT environments.

A Positive Industry Trend

More and more CEOs and CIOs are seeing that owning data centers is often more of an albatross than an advantage. I see this as positive, because I believe a service provider is more likely to be running an efficient data center. Does that mean there isn’t a need for any internal data centers? Maybe, but it will often depend on the services or products the company offers. What’s more important is how getting rid of the data center headache might be an opportunity to position your company more effectively for the future. There are so many questions that have no good answers right now: How much public vs. private cloud will I be using? What will globalization of my business mean for data center requirements? How will increased regulation and reporting affect my data center capacity? Will modern equipment actually work in my legacy data center? I could go on and on with these questions, but the point is clear. There isn’t a crystal ball telling you what to build, how to build, and where to build; and guessing with your company’s hard-earned CapEx and corporate reputation seems like a bad strategy.

Yes, I’m Biased

There’s no way I can deny the fact that I’m biased. However, for those of you who know my work with organizations like TGG and Data Center Pulse, you know that I’ve been pushing for industry improvements for years while also making improvements in the environments I was responsible for. I will also say that I jumped from “internal IT” to the vendor side because I didn’t see the commitment to data center excellence that I see where I am today. Lastly, you don’t have to take my word for it; you’re likely living this problem today, and if you aren’t, you only have to ask a few peers to corroborate what I’m saying.

It’s Time

It’s time to do the right thing for your company by building a true data center ownership organization. It’s time to consider leveraging the appropriate partners to help you improve while also better positioning your company for success in the modern world of agile IT.

Related Blogs: 

http://datacenterpulse.org/blogs/mark.thiele/data_center_infrastructure_management_wheres_beef_0

http://www.switchscribe.com/data-centers-are-treated-with-less-care-than-a-batch-of-beef-stew/

http://datacenterpulse.org/blogs/mark.thiele/data_center_surprise_data_center_cost_ownership_and_budget_planning

http://www.switchscribe.com/the-biggest-impact-on-it-firefighting-business-agility-data-centers/

http://datacenterpulse.org/blogs/mark.thiele/single_owner_company_data_center_needed

Professionally Copy Edited by Kestine Thiele (@imkestinamarie)

 

Data Centers are treated with less care than a batch of Beef Stew?

29 Apr

Five Critical Steps in Achieving Greater Energy Efficiency in the Data Center


Achieving greater energy efficiency is a fairly common goal, and the subject is certainly well covered in the press and by industry thought leaders. However, I believe that in most enterprises the focus on energy efficiency isn’t system-oriented or holistic. In fact, I’m here to argue that most data centers pay less attention to the outcome of changes than a chef does when adding ingredients or spices to a beef stew.

1. Treating the Data Center as a System

The idea of treating the data center as a system isn’t new. In fact, Data Center Pulse published the “Data Center Stack” over four years ago, but the idea still hasn’t taken hold in most businesses. Using a systems approach seems harder than the alternative. The assumption is, “If I use the systems approach, I’ll have to communicate, investigate, evaluate, etc., before I make a change or determine the scope of an opportunity.” The aforementioned assumption is correct; however, just like effective change management and strong process, following a systems approach will likely lead to immediate and lasting benefits of efficiency and risk reduction (putting out fires before they start).

Using the systems approach will allow you to adopt strategies that have a holistic and lasting impact on the data center system. Without a systems focus, you are just as likely to introduce inefficiency as you are to make a positive change. A great example of a change that appears obvious on the surface is “raising the temperature.” While it’s very often true that running at a higher temperature will yield some efficiency gains, let’s look under the covers. Increasing the temperature because the servers can handle higher heat makes sense, as it means the HVAC works less, which in turn reduces your power use. On the other hand, depending on your server mix and the consistency of your environmentals, you might actually be replacing one problem with another. Yes, you’re saving energy on HVAC, but your ICT gear might be using more energy to compensate for the higher heat. You’re also introducing a potential risk: should your HVAC fail, you won’t have any stored cool air to keep the servers from overheating and shutting down. This HVAC scenario is but one example of how a seemingly obvious “positive” change might actually do more harm than good. Using the Stack to evaluate the full system impact of changes is much more likely to lead you to changes that actually reduce overall energy use without introducing additional risk.
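To make the trade-off concrete, here’s a back-of-the-envelope sketch. The coefficients are invented placeholders (real server fan power grows roughly with the cube of fan speed, which is why the server-side penalty can grow faster than the HVAC savings); in a real facility you’d use measured data:

```python
# Back-of-the-envelope sketch of the "raise the temperature" trade-off.
# All coefficients are invented placeholders; use measured data in practice.

IT_LOAD_KW = 500.0            # IT equipment load
HVAC_KW = 200.0               # HVAC draw at the original setpoint
HVAC_SAVINGS_PER_DEG = 0.04   # fraction of HVAC energy saved per degree C raised
FAN_PENALTY_COEFF = 0.005     # server fan penalty grows superlinearly with setpoint

def net_change_kw(degrees_raised):
    hvac_saved = HVAC_KW * HVAC_SAVINGS_PER_DEG * degrees_raised
    fans_added = IT_LOAD_KW * FAN_PENALTY_COEFF * degrees_raised ** 2
    return fans_added - hvac_saved  # negative = net savings

for dt in (1, 2, 4, 6):
    print(f"+{dt} C: net change {net_change_kw(dt):+.1f} kW")
```

With these made-up numbers, a small bump saves energy while a large one costs more than it saves; only measuring your own facility tells you where the crossover is.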

2. Embedding Rewards

We all have a specific focus in mind when we’re working on something. We could be looking to make it faster or bigger; maybe we want to reduce the cost of ownership or simplify how the customer uses a solution. But we don’t always look at how the solution might affect our use of power or energy. For power use to become part of everything we do, our leaders must ingrain it. We can’t act wastefully and talk sustainability. Like customer service, it needs to become part of who you are. Only then do efficiency and sustainability become ingrained in the activities of the entire team and everything they do.

There are a number of ways leadership can help introduce or reinforce ideas and/or new areas of focus. Demonstrating importance through action is the best way for a leader to communicate, regardless of whether the subject is ethics, customer service, or sustainability. While continuing to demonstrate it, the leader must also speak about it regularly. Lastly, team members will truly internalize a new objective, and the goals set against it, when it becomes part of their reward system.

A caveat on setting new goals for your team: don’t overemphasize the importance of efficiency or power savings over delivering a better product or service to the business. If your team believes that the one thing that will get them noticed is saving energy, they will naturally focus on that at the expense of other activities.

3. Keeping Your Eye on the Ball

Regardless of the hype or reality of any specific new focus area, there is no changing the fact that for the vast majority of us, “saving energy” isn’t in our job title. Our company isn’t a “saving energy for you” company; therefore, keeping perspective on the things that actually add new value to the business is critical. In other words, be careful what you ask for, and be cognizant of where and how your team members focus their efforts. Saving energy or writing efficient code is rarely the primary deliverable for a new application or service, so think requirements and innovation first, and power savings second or third.

4. Think Sustainability

Most of us think of sustainability as saving a few gallons of gas or recycling cans and bottles. When it comes to data centers, there is much, much more to the story. Sustainability is directly associated with conserving (reducing cost or being greener, take your pick) and continuity (as in Business Continuity). When you plan sustainably, you are much more likely to build or use solutions that will stand the test of time, while also helping to maintain or improve your business’s corporate image.

Non-Traditional Examples of Data Center Sustainability:

You should build or lease facilities that can handle higher power density per square foot. The fewer buildings you have to build or lease, the better, as it’s both green and financially sustainable for your business.

PUE (Power Usage Effectiveness), CUE (Carbon Usage Effectiveness), and WUE (Water Usage Effectiveness) are all great metrics for helping to drive the right behavior (a quick sketch after these examples shows how each is computed). The EU Data Center Code of Conduct and The Green Grid Data Center Maturity Model are also great tools to leverage.

Site location has a huge impact as well. Sustainability applies to your ability to maintain the flow of resources required to continue operations. If you build or buy in a place that needs water but might lose access to it, or where there isn’t a good pool of future employees to draw from, you’re running a risk.
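Here’s the sketch promised above, showing how the three TGG metrics fall out of annual facility figures. The input numbers are hypothetical, and in practice each input has precise measurement rules defined by The Green Grid:

```python
# The Green Grid metrics computed from annual facility figures.
# All input numbers are hypothetical illustrations.

total_facility_kwh = 10_000_000  # everything the site consumes in a year
it_equipment_kwh = 6_000_000     # energy delivered to IT equipment
site_co2_kg = 4_200_000          # annual CO2 emissions attributable to the site
site_water_liters = 15_000_000   # annual water consumption

pue = total_facility_kwh / it_equipment_kwh  # dimensionless; ideal is 1.0
cue = site_co2_kg / it_equipment_kwh         # kg CO2 per IT kWh
wue = site_water_liters / it_equipment_kwh   # liters per IT kWh

print(f"PUE: {pue:.2f}   CUE: {cue:.2f} kgCO2/kWh   WUE: {wue:.2f} L/kWh")
```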

5. Build Power Savings into Your Designs and Purchasing Processes

While I usually don’t suggest building your own facility, I do highly recommend that you work with partners who have demonstrated, with verifiable metrics, an excellent track record of energy management and efficient use. If you are going to build, there’s a long litany of considerations you must address to ensure the entire data center system is designed around efficient use of resources, especially power. Keep in mind that every resource you use has an energy and/or water factor associated with it. When you take that energy and water factor into account, you’ll understand the importance of efficiency and its associated impact on sustainability.

Purchasing should also be involved in your strategy to become more of an energy sipper than a guzzler. Purchasing doesn’t always have visibility into the underlying drivers for selecting one product or service over another. Helping them understand the full lifecycle and supply chain impact of each product or service choice will help them more effectively prioritize decisions against factors other than price.
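One way to give purchasing that visibility is to compare sticker price against lifecycle cost. A hypothetical sketch (the energy price, PUE multiplier, and wattages are invented placeholders):

```python
# Hypothetical sketch: sticker price vs. lifecycle cost for purchasing.
# Energy price, PUE multiplier, and wattages are invented placeholders.

HOURS_PER_YEAR = 8760
ENERGY_PRICE = 0.12   # $ per kWh
FACILITY_PUE = 1.8    # every IT watt costs PUE watts at the utility meter

def lifecycle_cost(price, avg_watts, years=4):
    energy_kwh = avg_watts / 1000 * HOURS_PER_YEAR * years * FACILITY_PUE
    return price + energy_kwh * ENERGY_PRICE

cheap_hot = lifecycle_cost(price=5_000, avg_watts=450)
pricier_efficient = lifecycle_cost(price=5_500, avg_watts=280)

print(f"Cheaper, hotter server:    ${cheap_hot:,.0f} over 4 years")
print(f"Pricier, efficient server: ${pricier_efficient:,.0f} over 4 years")
```

With these made-up numbers, the server that looks $500 more expensive on the quote is actually hundreds of dollars cheaper over its life; that’s the conversation purchasing needs to be equipped to have.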

It’s not just about “a” technology

It would be nice if we could just buy cloud or lease data center space and assume our work was done relative to gaining more efficiency and becoming a more responsible corporate citizen, but we can’t. It takes much more than a focus on a specific service or technology; it takes a cradle-to-grave, systems-oriented view of the entire IT/data center envelope.

Regardless of whether it’s Beef Stew or a Data Center

Whether making a batch of stew or operating a data center, the fact is, you can’t just add things to the recipe or change it without understanding the potential outcome. In the case of beef stew, you actually have considerably more leeway than you do with a data center, but it’s still a problem if you change ingredient quantities, add things at the wrong time, or leave something critical out. The end result is that the food won’t taste right. At least with stew, you’ve only invested $20 and a few hours, and you can do it over again if it’s not right. It’s not so easy with a data center.

Professionally Copy Edited by Kestine Thiele (@Imkestinamarie)

 

The Biggest Impact on IT Firefighting & Business Agility – Data Centers

08 Apr

Protection & Value

Is it reasonable to assume that if you’re buying a safe for all your valuables, you’d buy the one with the best combination of security and cost? This combination of security and cost would be driven by your budget and the value (intrinsic or sentimental) of your precious items. I would guess that the same principle of budget vs. value applies to protecting your IT environments.

So many places to look, so many holes to patch

The normal enterprise IT environment is filled with hundreds of applications. In most cases, each of these applications is supported by a unique design at the hardware and software level, if not also at the network layer. The fact that there is so much uniqueness in our IT environments means we expend inordinate amounts of time dealing with common problems in 100 unique ways. Maintaining these environments has become the bane of enterprise IT groups. By now, we’ve all heard the story of how keeping the lights on consumes 70-80% of the IT budget, leaving only a small amount for much-needed innovation.

Keeping the lights on has several meanings, including the mundane but critical “general maintenance and support” of each environment. However, keeping the lights on also means avoiding outages. Generically speaking, all of us in IT attempt to build and maintain environments with the highest possible availability (within budget and available resources). The problem is that we’re often spending too much time fighting the fires of maintenance and support, and not enough time solving the underlying issues that cause many of the fires, or in this case the outages (the same as a fire, only worse). So where should IT focus its attention to avoid outages and reduce the number of fires?

If you can’t focus on everything

Few IT organizations have the luxury of throwing as much money or as many bodies at a problem as they’d like. So, if you have to pick which efforts will provide the most “keeping the lights on” value for the dollar, you should pick something that can be fixed for everything, and fixed once. How is that possible? How can I fix something for everything; isn’t that the opposite of focusing on something important and avoiding getting lost in the fire? The simple answer is no.

Data Center as a Platform (DCaaP)

There are only a few services and solutions that affect everything in IT: one is the data center, and the other is people. These two areas are where the most value can be gained in virtually any IT organization when it comes to reducing risk and the threat of fires. That’s right, just two areas, both of which can be worked on with minimal impact to active production environments and, in most cases, without too much additional expense.

It’s well documented that humans are almost always the single biggest risk factor to the availability of systems. The more humans need to be involved, the more likely a mistake will be made and a failure will occur. We all talk about hardware failures and power failures, even viruses and software bugs, but if you want to reduce risk, you reduce the human touch factor. The simple answer is that you need a combination of three things: good leadership, excellent process/automation, and solid training.
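The arithmetic behind that claim is simple compounding. As a toy model (the 2% per-change error rate is an assumption for illustration, not a measured figure):

```python
# Toy model: if each manual change carries an independent chance of causing
# an incident, risk compounds quickly with the number of human touches.
# The 2% per-change error rate is an assumed illustration.

p_error = 0.02

for touches in (1, 5, 20, 50):
    p_incident = 1 - (1 - p_error) ** touches
    print(f"{touches:>2} manual touches -> {p_incident:.0%} chance of at least one incident")
```

Even a small per-change error rate turns a busy change calendar into a near certainty of incidents, which is why automation and process pay off so quickly.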

When it comes to owning and operating a data center as a system, it begins to get a little more complex. Most organizations fail to treat the data center as a system and are constantly dealing with components or services independent of the DCaaP.  While there are hundreds of discrete components and services that make up a functioning data center, it is no different than how you might work on a car. You don’t talk about replacing the tires on your car without considering whether they will fit on the rims, fit in the wheel well, or cause the handling to change. The same holds true with a data center as there is virtually nothing in a data center that can be changed without having some effect on the performance of the system.

Some of the better-known discrete components of a data center include power, HVAC, security, water, and environment (i.e., humidity, cleanliness, and temperature). Any one of these areas could be a cause of fires in IT, but together they combine for high overhead if they aren’t expertly managed. Often overlooked is networking/connectivity, which, while part of the data center, is also an underlying service bridging all applications. All of the aforementioned DC services, including connectivity, should be considered part of DCaaP. Imagine if, instead of buying a bunch of air conditioners, routers, UPS units, PDUs, racks, sensors, ducting, cable, ladder rack, etc., you could instead buy a package. This likely isn’t news to anyone, but that’s what a colocation provider is supposed to offer: Data Center-as-a-Platform. Not all colocation providers are created equal, so just moving from your data center to theirs won’t necessarily solve any problems, and in fact could cause new ones. The real opportunity is moving to a colocation provider that can improve your level of service by driving the risk of fires down to zero and increasing your ability to address new opportunities.

Imagine improving customer and employee satisfaction, while giving your employer better tools, simultaneously lowering costs and reducing your carbon footprint.

Remove the issue and focus on business enabling priorities

Generally speaking, I don’t believe in washing my hands of a problem; I prefer fixing it. However, building and operating data centers at the highest levels of efficiency, performance, and availability isn’t for the faint of heart. As mentioned earlier, the majority of businesses don’t have the organizational alignment that allows for building and running first-class data centers. It’s also true that building a data center means locking in a 15-year business plan and a CapEx investment that isn’t easily adjusted on the fly. In the modern IT space, that isn’t conducive to agility.

Reduce your overhead and fire risks while improving agility and lowering costs

Find a data center partner that makes it their life’s work to provide customers the equivalent of an Indy car for agility, a Tesla for efficiency and safety, and an armored car for security, in the form of the most efficient data centers, Tier IV Gold availability ratings, and connectivity options beyond compare. What else can you do in IT that will reduce your overhead and put out many of your fires while also improving agility and lowering costs? Another way of looking at this opportunity is that you’re helping to “future proof” your investments. When it comes to value, what better way to obtain it than by actually improving your operational capability and agility? Wouldn’t you rather focus on capability and agility first, and have efficiency go up while costs go down as a side effect?

 

Professionally copy edited by Kestine Thiele (@imkestinamarie)

 

 

Is Shadow IT George Washington or is it Donald Trump?

27 Mar

In U.S. history, it’s fairly well recorded that George Washington neither sought nor wanted the office of president. He reluctantly served a second term and refused a third. Washington’s nature is considerably different from that of the majority of us, and certainly far different from that of Donald Trump. Trump made it very clear that he was sticking to his guns, even after the effort to prove President Obama’s foreign birth was shown to be futile. In fact, The Donald misses very few opportunities to let the public know he’s the man. How does this relate to Shadow IT? Read on.

Hasn’t Shadow IT been talked about enough?

The discussion around the good, the bad, and the ugly of Shadow IT (SIT) has raged for several years. However, I’m hoping that taking a non-traditional look at one of the critical underlying issues traditional IT and SIT commonly face when dealing with each other will give the leadership of each group new food for thought.

If you’re not familiar with the risks and rewards associated with the competition between SIT and the IT organization, I urge you to find some good articles* to give you more background. On the other hand, if you’re familiar with and/or struggling with the issue right now, then please read on. I’m not going to say whether SIT is good or bad, but I will attempt to shed some light on the “human” part of the equation.

I’ve been on both sides of this equation and feel I understand fairly well why SIT (alternative IT) organizations are initiated. Rather than repeating the good-or-bad debate, this blog shines a light on how the relationship with IT, and the original goals, can go wrong.

The battle between Shadow IT and the IT organization

As with most situations that involve humans, the “human” part of the equation must be taken into account if you wish to have any hope of reaching a satisfactory resolution. If you aren’t considering the human equation when dealing with a human problem, you’re missing the most critical aspect of the issue. It would be like ignoring the fact that you’re driving a gasoline-powered car, putting water in the tank when it runs out of gas, and then yelling at it when it won’t start. What makes a mechanical device or a human tick is crucial to understanding how to deal with it. The easy problems to point at when SIT and IT are bickering are the “symptoms,” not the underlying dynamics that make the symptoms persist. One could argue that the underlying dynamic is the perceived or real failure of IT to deliver, which then creates a space for SIT to fill. In many cases, you would be correct about what “appears” to create the situation. However, I would argue that the spark isn’t the whole story; in many cases, it’s not IT’s ongoing delivery problems that keep SIT alive or make it thrive.

Why is Shadow IT often like Mr. Trump and less like George Washington?

When a cause is created, the creator has a tie to it that is tough to break (President Obama’s long-form birth certificate). In many cases, humans who have started a cause will continue to pursue its original aims long after the benefit or need ceases to exist (I’m sure Trump really believed that Mr. Obama wasn’t born in the US). When the need subsided, Trump couldn’t let the issue go. As humans, we associate ourselves with our causes; for many of us, the “cause” is the work function we’re responsible for, and this association is what makes us who we are, right or wrong.

As the leader of a SIT group, you are by nature (in the majority of cases) at odds with the status quo (IT). You’ve made them the enemy and, as such, must defeat them through any means possible. Yes, that’s right, I said “any means possible.” When you need to prove your value in the workplace, you generally have two avenues to pursue: work better and harder than everyone else, or make the other guy look bad. When it comes to dealing with traditional IT, you probably started your SIT group under the assumption that IT already looks bad. What happens if IT does well? What happens if the real need for SIT seems to be diminishing? When IT starts to do well, your first reaction is likely to be, “I’ll ignore them,” and if they continue to do well, your next action is likely to be the initiation of the passive-aggressive (PA) phase of the relationship. In the early days of the PA phase, you are anxious to point out perceived failures of delivery more aggressively; you might even try spreading doubt in the minds of your business leaders about the ability of IT leadership to succeed. In the case of Washington, he “needed” to do a job that the people demanded of him, but his goal was to solve the problem rather than create an empire. What do most leaders want to do? You guessed it: they want to build an empire. If they feel someone is threatening their empire, do they ask themselves, “I wonder if my opponent’s arguments are correct, or if their services are actually better now,” or do they close the drawbridge and man the battlements?

No easy answers

Unfortunately, there’s no easy button for solving the SIT/IT death match, but there are some things worth trying. Mind you, I’ve tried some of the very advice I’m offering, and if you don’t have the right leadership in place above SIT and IT, it likely won’t work, but it’s the best I’ve got:
  • Develop strong communication channels between the two groups. As the old saying goes, “Keep your friends close and your enemies closer.”
  • Ensure regular dialog between members of executive team above each group. If they’re aware of what each group is doing, they are less likely to make a rash decision about the future of either group.
  • Find a way to develop a partnership. There’s usually enough opportunity (read: hard work) to go around for everyone. Find projects that both teams can work on together, and/or give the other group a project they might have better skills, more time, or better funding to handle. In the end, leadership is about getting stuff done. If you’re getting stuff done, no one really cares how it happened or who did the work.
  • Look for opportunities to help the other in a time of need, but ensure everyone knows about the arrangement.
  • Make sure your counterpart is fully aware of the limitations you’re working with. You might be able to develop an ally from an enemy.

Happily ever after?

If you’re lucky, the two groups will learn to trust each other and stick up for each other during debates over issues and/or funding. If you’re really lucky, one of the two leaders will show true leadership by convincing their counterpart that a merger is the right approach. The only way to succeed is by keeping the human equation in the back of your mind during every debate, argument, or struggle for shared funding. The reason for your counterpart’s position isn’t important; what’s important is that you deal with it in a way that’s likely to show results. As evidenced in the news, you can throw rocks and yell all you want, but many leaders would rather lose everything than give up their failed cause. Take a long look in the mirror and be honest: am I George Washington, just trying to do the right thing, or am I Donald (a fool for all seasons) Trump?

 

A cherry-picked “Rogue IT” blog by @jeffsussna:

http://blog.ingineering.it/post/51501409916/why-it-needs-design-thinking

Blog copy edited by Kestine Thiele (ImKestinaMarie). I made some updates after Kestine finished editing, which likely ruined some of her perfectly good work. 

 

Data Center Project Euphoria vs Reality of Ownership

25 Feb

Building a Data Center equals Big Project Euphoria – It’s addictive

The excitement of building or acquiring something new is real, and it’s a thrill. You could be building a new pool in the backyard or a new data center for your company, but the story is the same. The thrill of working on something with an important and well-defined endpoint is palpable, and when you combine that with the narcotic of vendors and contractors worshiping you while you spend money, it’s downright intoxicating. Building or buying a data center is the “easy” part; “owning” it is what’s hard.

To “own” is more than to possess. Like “owning” a car, there’s a little more to it:

- Owning a car means more than buying it. Owning means you must also insure it, clean it, service it, repair it, evaluate it for safety or sustainability, and eventually recycle or retire it.

The Question

I was speaking on a data center panel for Bisnow in San Francisco on Feb 20, and a question from the audience was tossed my way. I don’t remember the exact wording (so I’m paraphrasing), but it was something like this:

“What advice would you give to someone looking to build their own data center?”

Great question with an answer that’s potentially a mile long, but here’s how I responded:

“95% plus of all companies have failed to create the appropriate organization to build, operate, protect, monitor, sustain, and lifecycle a complex system like a data center.”

Then I went on to say:

“I’ve worked with some leading technology companies, and without exception, even those companies that should have known better, couldn’t or wouldn’t accept the fact that data centers deserved a different ownership model.”

I continued:

“Most companies fool themselves into believing they understand and have planned for the ramifications of owning a data center because of project euphoria. The siren song of the big project is just too much for most IT folks to walk away from.”

Temporary high

During the project, everything seems great: cats and dogs (facilities and IT) working together, finance helping out, the CFO chatting with you in the cafeteria, the CEO mentioning the data center at town hall meetings or to investors. Can you feel it? I can feel it, and all I’m doing is writing it down. We haven’t even started talking about the draw of having all these big vendors/partners fawning over you, treating you like a king or queen; your wish is their command.

The problems start six to twelve months after the project’s completion. The glow has begun to fade, day-to-day responsibilities and priorities come creeping back, and what do you expect happens next? As the glow fades, the team’s focus on owning an expensive, complex, and critical facility begins to fade as well. Gone are the cross-functional meetings where everyone sang (past tense) Kumbaya; gone, too, are the pats on the back from the CFO. The CEO has gone back to forgetting your name, and what do you think happens next? You guessed it: focus shifts. Cost savings assumptions are missed, but not captured. Efficiency guarantees are faked or avoided. Operational performance might even start to lag. It’s no one’s fault; it’s basic human nature. When we fail to capture the human equation in projects, leadership, friendship, etc., we will eventually fail at whatever it is we’re doing.

Additional Color from the next panel of experts

John Sheputis, CEO and founder of Fortune Data Centers, was on the panel after mine, and after referencing my “euphoria” answer, he added some color to better illustrate the original point. John basically said, “The type of work and focus needed to run a data center effectively is very different from running a short-term project. A data center requires day-in and day-out focus on being perfect and making marginal improvements, while avoiding risk to production operations.”

If it’s not already clear

What I’m trying to say is that owning a data center is a huge responsibility, and the bottom line is that few organizations are designed, measured, or rewarded appropriately to also run the data center effectively. So think long and hard about why you really need an internal data center. If, after thinking it through, you still believe that owning is better than renting/leasing, then by all means build a data center. However, before you do, be sure to get corporate financial and organizational support lined up and guaranteed, so that you can continue to own the facility effectively for 15 years after its completion.

 

Cloud Management – It’s Not Just A Job

21 Jan

I was having a conversation with the CEO of a software company recently, and we got on the topic of cloud management. Of course, there are a number of ways this conversation could go, but we were largely focused on the adoption of specific cloud management platforms like Eucalyptus, OpenStack, and CloudStack. As the conversation drifted towards which product is suited for which business type, a light bulb went off in my little brain. During that momentary and fairly dim flash of light, I had the realization that maybe the whole world hasn’t even accepted that a cloud management platform is necessary or important. Blasphemy!

Buyer type vs. demand and market adoption

I don’t have the actual numbers, but I’m hypothesizing that fewer than 30% of businesses have accepted or internalized the idea that they will need a mature and supported cloud management platform (CMP). While 30% is a big percentage, it isn’t the majority. I would also be willing to bet that the majority of that 30% are companies that tend to be forward thinking and developer oriented. If my assumptions are correct, that leaves 70% of companies not knowing why, or whether, they need a CMP. These companies also have no plan, or at least no effective plan, for prioritizing, reviewing, and then selecting a CMP solution. The common characteristic among many of these companies will likely be the difficulty of justifying the investment in making a solution fit their needs when their growth and pace of change don’t demand it. In other words, ROI is difficult to capture.

When we talk about companies adopting one of the aforementioned CMPs, we commonly point to name brands: PayPal (OpenStack), Zynga (CloudStack), or Sony (Eucalyptus). Each of those three company names helps increase buyer confidence, because buyers can see that forward-thinking enterprises are adopting. However, what these names don’t tell you is to what extent the CMP is being used, and whether a need at 1/10th the scale would still provide ROI to the buyer. There’s a gap in industry messaging and buyer understanding among the three players discussed here. That gap relates to why a CMP isn’t just another HP OpenView or IBM Tivoli. In the past, the messaging around a product like Tivoli was, “We can solve all problems relative to monitoring and reporting on your infrastructure, and to some extent your applications.” While both of these legacy infrastructure tools were great, the vast majority of companies (I’d venture 70%+) couldn’t justify the expense of the product, the work effort to install it, and the ongoing support. So what did they do instead? They got 80% of the benefit for 15% of the cost and overhead by installing something like WhatsUp Gold or SolarWinds.

What’s needed?

There needs to be better education available on what the customer can “really” do at their specific scale, at this specific time in their history. Another way to address the issue would be to answer a simple question for the customer: “Where and when will I fail if I’m not using a CMP?” Even with proper education, I don’t think the 30% number will change that much for buyers of either OpenStack or CloudStack. However, the opportunity for a product like Eucalyptus to bite into the 70% is quite real. I believe the opportunity is real for the simple reason that most companies have VMware, and if they’re on the smaller side, they are likely looking to Amazon for some of their cloud needs.

I could add even more complexity to this discussion by talking about where solutions like ServiceMesh (CSC) and Enstratius (Dell) fit on top of this equation. Instead, I’ll just point to a past blog on the strategic rather than tactical nature of your cloud management choices.

I’m a believer

I strongly believe that a well-managed cloud environment is critical to success at many levels, including ROI, risk mitigation, security, ease of deployment, and vendor choice. However, I also believe there isn’t a one-size-fits-all cloud management solution, and today the big three to select from are OpenStack, CloudStack, and Eucalyptus. While OpenStack and CloudStack fit neatly into very similar territory (service providers, large enterprises, web-scale environments), Eucalyptus fits a little more neatly into the everyday buyer category. The trick for Eucalyptus at this point will be determining the best way to present a complex but easily consumable solution to a group who believes they have only simple needs.

 

Professionally copy edited by Kestine Thiele (Imkestinamarie)

 

Thiele’s Blogs from 2013 – Most read, re-tweeted, argued about & commented on!

08 Jan

Of my 40-plus blogs from 2013, the following posts from www.switchlv.com/switchscribe and datacenterpulse.org made the most noise when they were published. In fact, several of them received well over 4000 reads. Since many of these posts are still relevant, I thought I’d share them again for those who missed them the first time through.

Blogs from www.switchscribe.com (nine selected)

February:

Don’t buy, build or lease space in a datacenter with wood in the roof or ceiling!

There was considerable debate about this as many data centers in use today are not purpose built and therefore struggle with existing structural gaps.

March:

Not Your Daddy’s Data Center

My thoughts on the newfound importance of the data center, in combination with the characteristics that will make the data center successful. This was my most-read data-center-specific blog on Switchscribe, with over 5000 reads.

May:

Traditional Measures of Data Center Performance are inadequate

Many of us don’t apply good measures to begin with, but even worse is that the measures we do have are largely inadequate.

June:

The Server Closet Must Die!

One of the most hotly debated blogs of 2013

July:

Data Centers as Global Growth Enablers

I see the modern data center as infrastructure akin to the transcontinental railroad, manufacturing, communications, etc. Many still see it as a place to put servers.

August:

Top 12 Data Center Trends thru 2015

Self-explanatory!

 

September:

I can see clearly now the clouds are gone

My first and only blog with a song as a theme!

October:

OpenStack – The future of Infrastructure Management?

This is another of the more contested blogs. In fact, I’ve probably had more continuing and follow-up questions and discussions on this blog than any other. I plan to write a follow-up on the topic in the coming weeks, as I believe there is an opportunity to better define the space that a cloud management platform resides in and who the likely user will be.

December:

Public Cloud vs. Private which is Better?

It’s not really about which is better in the abstract, but rather which is better where & when. In retrospect I wish I had been more specific about how hybrid cloud will play an important role here.

Blogs from datacenterpulse.org (four selected)

January:

No Man is an Island and Neither is your Data Center

A discussion on the importance of location for critical infrastructure

June:

The Pain and Risks of Ignored IT Infrastructure

This blog received almost 4000 reads and was discussed at length on Twitter and LinkedIn. Apparently, I hit a raw nerve with many readers.

July:

Keeping IT Relevant isn’t About the Title of the CIO

This blog received almost 4500 reads. It’s really just my take on substance vs. naming for the role (current and future) of the CIO.

October:

Enterprise Legacy Environment Cloud Adoption vs. Netflix

Another of what I like to refer to as “common sense” oriented blogs on how and why certain organizations will have different cloud adoption strategies.

 

Public vs Private Cloud – Which is Better?

17 Dec

Cloud Adoption Trends – Is Private Right?

There’s an amazing amount of teeth gnashing around private vs. public cloud these days, much as there has been in years past. In this case, though, I’m not even going to entertain the discussion of whether private cloud is real; rather, I’m going to talk about how different IT organizations might approach a cloud decision considering their own unique variables.

There isn’t a one size fits all answer

Nope, cost isn’t the answer, technology architecture isn’t the answer, and security isn’t the answer either. In fact, no one answer is correct, but in some cases all of them are. The first priority for the business is running a successful enterprise. The first priority for IT is to provide solutions that help the enterprise achieve that success however the business chooses to measure it. In other words, the right answer is the one that best enables the success of business objectives. The right answer could be private, public, mainframe or all of the above, and that answer will be dynamic, just as modern business is.

Common Private vs. Public Cloud Decision Themes

There is a wide range of requirements that the industry, and those of us who write about cloud, have used to try to convince you that one option is always best. I’m here to say that there is no “always” when it comes to picking IT solutions, and cloud is no exception.

Public Cloud Themes:

- Massive scale requirements
- Cost
- Staffing
- Geo-distribution requirements
- Speed of access & delivery
- Options
- Ecosystem of partners
- Support structure
- Agility
- Elasticity at scale
- Compliance requirements
- Rapid business growth

Private Cloud Themes:

- Control
- Security
- Cost
- Support structure
- Compliance
- Steady, consistent business growth
- Agility
- Well-understood usage
- Ecosystem of partners
- Speed of access & delivery
- Compliance requirements
- Legacy applications & infrastructure

 

Weird, huh?

No, your eyes aren’t deceiving you; what you see above is in fact true. Many of the supposed “drivers” for selecting public vs. private apply, in the right circumstances, to both options. The magic is in where and how each of the above variables applies, and what importance it should be given.
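One way to make “where and how each variable applies” concrete is a simple weighted scorecard. This is only a sketch; the weights and 1-5 scores below are invented, and each company would substitute its own:

```python
# Illustrative weighted scorecard for the public-vs-private decision.
# Weights (importance to YOUR business) and 1-5 scores (how well each
# model serves that need for YOUR workloads) are invented placeholders.

criteria = {
    # name: (weight, public_score, private_score)
    "Elasticity at scale": (0.25, 5, 2),
    "Compliance": (0.20, 3, 4),
    "Cost predictability": (0.20, 2, 4),
    "Speed of delivery": (0.20, 5, 3),
    "Legacy app fit": (0.15, 1, 4),
}

def weighted_total(option_index):
    return sum(w * scores[option_index] for w, *scores in criteria.values())

print(f"Public cloud score:  {weighted_total(0):.2f}")
print(f"Private cloud score: {weighted_total(1):.2f}")
```

With these made-up numbers the two scores land close together, which is exactly the point of the scenarios that follow: the answer depends on the weights your business assigns.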

Each company must make its own value judgments

Cost is still often identified as a key benefit of cloud adoption, as are massive scale, geo-diversity, and elasticity at scale. The issue with any or all of the aforementioned cloud “qualities” is that they only matter under the right set of circumstances. There is incredible complexity in defining the best option for each company at the time they’re making the choice, and as such, the following three scenarios have been simplified. I’m sure that if I attempted to cover every scenario, I would need at least 100 different models.

Scenario 1: 20-plus-year-old mid-size enterprise (not internet driven)

- Legacy: 1000-plus legacy applications. Many won’t be moved to cloud (any cloud) for 5 years or more.
- Staffing: small IT team with limited developer skills. Likely to want packaged solutions that deliver rapid benefit, even at a cost premium.
- Well-understood usage: application usage characteristics are well understood (limited short-term scale needs). Limited elasticity requirements mean the “massive” scale of public cloud doesn’t provide additional value.
- Agility in moderation: agility is desired for competitive advantage. Rapid delivery is important, but not measured the same way as at an internet-driven business (days vs. months is OK).
- Steady, moderate growth: business growth of 7% or less CAGR. Steady growth implies fewer surprises for IT requirements at scale.
- No real geo-distribution needs: limited geographic diversity requirements for apps. A few locations, but primary app usage is at HQ.

 

The likely answer for Scenario 1 is private cloud, with public cloud used for a few applications and some development. The fact that the majority of workloads are well understood and don’t experience significant usage spikes means a slightly over-provisioned private cloud environment is likely more cost effective in the long run. The limited size and experience of the team also means they would most likely benefit from a packaged cloud solution (converged infrastructure with a CMP). As speed isn’t the primary requirement and there are only a few key office locations, the need for a widely distributed public-cloud-based application set is diminished as well. The focus for new applications should be on SaaS wherever possible.

Scenario 2: 7-year-old internet-facing business

- Legacy apps: limited set of legacy applications. Active projects are underway to retire all of them as opportunity provides.
- Staffing: good-sized IT team focused on enabling an internet-driven business model. Focused on solutions that can scale, and scale at a manageable cost.
- Elasticity, geo-diversity, support structure: primary applications are internet facing, and each can vary wildly in use depending on product launches and seasonal buying patterns. Elasticity at scale is critical, along with the ability to rapidly deliver updates.
- Agility: agility is desired for competitive advantage. Agility is measured in hours vs. days and applies to the entire company.
- Rapid growth: business growth of 15% or more CAGR. Growth can be volatile and difficult to plan for.
- Geo-distribution: widely distributed workforce with developers and contributors in offices all over the world, plus a customer base that is globally distributed and dynamic. Geo-diversity is critical for application performance and fault tolerance.

 

In Scenario 2, the primary usage characteristics for the company (global distribution, speed to market, many locations for customers and staff/engineering) suggest that the focus should be on utilizing public cloud for most solutions. If there are internally focused applications that are fairly steady in their use, then the addition of hybrid and/or private cloud could make sense.

Scenario 3: 20-plus-year-old large financial institution

- Legacy applications: thousands of custom-built applications, some delivering millions in revenue. Some projects to retire or move applications to stateless environments, with a heavy focus on building new apps for cloud; many existing apps are impossible to move to public cloud.
- Staffing: large IT team focused on enabling large-scale, cost-effective, performance-oriented infrastructure. Solutions that scale, potentially involving staff investment in open source (i.e., OpenStack/CloudStack/Red Hat/automation, etc.).
- Elasticity, geo-diversity, support structure: extreme elasticity applies to a few key apps for trading, Monte Carlo simulation, big data analytics, etc. In some cases this could be handled by public cloud/grid infrastructure; some applications will be better suited to hybrid/private cloud.
- Agility: agility is a driver, as at most businesses, but is not the sole reason for cloud operations. Agility is measured in days vs. months.
- Steady growth, heavy IT investment: business growth of approximately 12% CAGR. Growth is fairly well understood.
- Geo-distribution: widely distributed workforce with developers and contributors in offices all over the world, plus a customer base that is globally distributed and dynamic. Geo-diversity is critical for application performance and fault tolerance.
- Compliance: heavy regulatory and compliance-based risks. Requires contractual guarantees with providers, or internal solutions.

 

With Scenario 3, the environment is fairly complex and doesn’t fit a single solution. The fact that they have a strong IT team and a fairly large internal set of applications means private cloud is a real option for them. They are also likely in a position to develop unique private cloud environments instead of focusing on pre-packaged converged infrastructure. However, public cloud is potentially an ideal solution for some customer-focused applications and/or applications requiring elasticity at scale. An open item is compliance; using public cloud would depend on the provider’s credentials, along with usage trends for any given application.

It Depends

As you can see from the definitions of each company’s unique environment, and how those unique needs help define the priorities and strategy for technology adoption, there isn’t a one-size-fits-all option for cloud. My belief is that for the next 5-10 years, we’re likely to see the majority of companies with over 200 employees using a hybrid set of cloud-based solutions, including private, public, hybrid, and SaaS. Farther down the road, who knows; maybe we’ll get to the magical low-cost commodity cloud that suits all.

Use the scenarios

Using the scenarios provided as a model, you should be able to ascertain some of the critical decision factors in making a cloud choice for your organization. Having a firm grip on what your teams are capable of, in combination with what a specific solution requires, will help you better position IT as a partner instead of a roadblock.