Data Center Trends – Colocation & Sustainability; Can They Coexist?

28 Sep

The data center industry has made some big moves over the last two years to demonstrate a desire to create more energy-sustainable operations. Apple, Facebook, Google, eBay, Microsoft, Amazon and others have each made moves that will make a real dent in their companies' carbon emissions. The choices for going green are myriad, as demonstrated by the diversity of options selected by the aforementioned companies. However, it seems my industry (colocation) has been slower to pick up the sustainability baton.

Colocation Providers

Many colocation companies have been slow to reduce their PUE (Power Usage Effectiveness), and they've been even slower to make any serious moves into the sustainable/renewable energy market. While there are a few notable exceptions, I struggled to find more than a couple of handfuls of the approximately 1,086 colocation companies from around the globe that have made any serious effort to use more renewable energy. As of December 2014 there were 3,685 individual data centers in the colocation space. Assuming an average data center size of 5 MW, those 3,685 data centers draw over 18,000 MW of power continuously. The carbon emission estimate, assuming 70% or more of that 18,000 MW is generated from coal, would be 79,000,000 tons of CO2 every year.
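The back-of-envelope arithmetic behind that estimate can be sketched as follows. The data center count, average size, and coal share come from the post; the tons-per-MWh emission factor is my rough assumption, and the final figure is quite sensitive to it.

```python
# Back-of-envelope industry CO2 estimate. Inputs are the post's rough
# assumptions plus an assumed emission factor, not measured data.
NUM_DATA_CENTERS = 3685
AVG_SIZE_MW = 5              # assumed average facility size (from the post)
COAL_SHARE = 0.70            # assumed share of energy generated from coal
HOURS_PER_YEAR = 8760
TONS_CO2_PER_MWH_COAL = 1.0  # rough, assumed factor; varies widely by plant

total_mw = NUM_DATA_CENTERS * AVG_SIZE_MW  # ~18,425 MW of continuous draw
coal_mwh_per_year = total_mw * HOURS_PER_YEAR * COAL_SHARE
tons_co2_per_year = coal_mwh_per_year * TONS_CO2_PER_MWH_COAL

print(f"Total continuous draw: {total_mw:,} MW")
print(f"Estimated CO2: {tons_co2_per_year:,.0f} tons/year")
```

With these inputs the result lands on the order of 100 million tons per year; the exact figure depends almost entirely on the emission factor you pick for the coal-fired share.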

Efficiency vs. Sustainability

Efficiency is a great place to start, and there are still thousands of colocation centers with PUE values over 2.0 (depending on regional climate, a good PUE is 1.2–1.45). This means that, on average, colocation facilities are burning 30%-plus more energy than they should to support their hosted IT equipment. This inefficiency is bad enough, but when you combine it with the dirty energy mix that most of these 3,685 data centers run on, the story only gets worse. Consider that a data center running at a PUE of 2.0 on natural gas produces less carbon output than a 1.4 PUE data center running on coal-generated energy. Combining a high PUE with a dirty energy supply just exacerbates the situation.
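That gas-versus-coal comparison can be sanity-checked with rough grid emission factors. The factors below are my approximate assumptions (not figures from the post), but the conclusion holds across reasonable values:

```python
# Compare annual carbon output per MW of IT load for two facilities.
# Emission factors are rough assumptions (tons CO2 per MWh generated).
GAS_FACTOR = 0.45   # approximate, natural gas generation
COAL_FACTOR = 1.0   # approximate, coal generation

def annual_tons_co2(it_load_mw, pue, factor, hours=8760):
    """Total facility draw = IT load * PUE; emissions scale with draw."""
    return it_load_mw * pue * hours * factor

gas_dc = annual_tons_co2(1, 2.0, GAS_FACTOR)    # inefficient, cleaner fuel
coal_dc = annual_tons_co2(1, 1.4, COAL_FACTOR)  # efficient, dirtier fuel

print(f"PUE 2.0 on gas:  {gas_dc:,.0f} tons CO2/yr")
print(f"PUE 1.4 on coal: {coal_dc:,.0f} tons CO2/yr")
```

Even with the efficiency handicap, the gas-powered facility comes out well ahead on carbon, which is the post's point: energy source matters at least as much as PUE.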

Related blog: Sustainability Applies to Data Center Facility Design

Roadblocks, Real & Assumed, to Positive Change

While I can’t and won’t attempt to speak for every colocation provider, I can discuss some of the things I’ve witnessed during my time in the industry. Many colocation providers act as if they are merely recipients of customer workloads, unwilling and seemingly unable to make changes that customers aren’t demanding. One simple example of this unwillingness to change is the fact that most data centers are still being built with raised floor used as the cold-air plenum. Cold air falls and hot air rises, so why would you push cold air up? Another issue is the lack of scale that many providers have in any one location. This lack of scale means any individual location or small provider would struggle to force real change in how energy is sourced in their area. It’s also true that a large number of existing colocation facilities still don’t isolate their hot and cold spaces, which means they are mixing the air and causing unnecessary inefficiencies.

Switch SUPERNAP Makes a Bold Move on Renewable Energy

The SUPERNAP data centers have been known for their efficiency for some time (annualized PUE of 1.18). Historically, the data centers have also been more sustainable because of the cleaner energy mix provided off the grid in Nevada, especially in Las Vegas, where we’ve been based. The latest Switch SUPERNAP sustainability news, though, has made me even prouder to be a part of this organization. With efforts underway to build 100 MW of solar in Nevada starting in October 2015, our stated goal is to be G100 (100% on renewable energy). We became the first Nevada company and the first data center in the country to join President Obama’s climate pledge, and we’re also working with elected leaders in northern Nevada to use 100% recycled effluent in Reno while partnering for new water technologies in Las Vegas and elsewhere.

The Opportunity to Make Real Change is Here

Our industry needs to come to grips with the combined impact of so many data centers on carbon emissions and water use. In some geographies we still see 100% of energy supplied by coal, and in other areas companies are running water-inefficient facilities in regions with ongoing droughts. I see tremendous growth in the data center industry continuing for some time to come; Data Center Dynamics estimates a CAGR of 9.3%.

Related Blog on Growth: Data Center Trends – Have you Fallen behind Already?

Change is needed, and there’s no better time than now. I call on my fellow colocation industry folks to accept the challenge: reduce energy use through improved efficiencies and increase the adoption of cleaner and/or renewable energy.



Supporting Data:

Support for the 79,000,000-ton CO2 figure comes from the measures on this page, applied to the 18,000 MW drawn by over 3,600 data centers with an assumed average coal use of 70%.


Data Center: Who Knew Saving Cost & the Planet Could be so Easy

01 Sep

How can a data center with 1 MW of IT load hurt a company’s financials, damage its reputation, and seriously but unnecessarily pollute the planet?

Related Blog: Switch Becomes first data center provider to join White House American Business Act on Climate Pledge

The average data center Power Usage Effectiveness (PUE) rating across the industry is around 1.7*.  While the best data centers in the industry are running at under 1.2, the vast majority of enterprise (internal) facilities are running at a PUE between 1.7 and 2.5. At a PUE of 2.5, an operator is using more than 2X the power that a data center running at a PUE of 1.2 needs for the same workload; consider that math across multiple megawatts of power use.
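The "more than 2X" claim is straight arithmetic from the PUE definition (total facility power divided by IT power), and can be checked in a couple of lines:

```python
# PUE = total facility power / IT power, so total draw = IT load * PUE.
it_load_mw = 1.0
worst = it_load_mw * 2.5  # facility at PUE 2.5
best = it_load_mw * 1.2   # best-in-class facility at PUE 1.2

print(f"Power ratio for the same IT workload: {worst / best:.2f}x")
```

The ratio works out to roughly 2.08x, i.e. slightly more than double the power for an identical IT load.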

The High Cost of a High PUE

There are at least three major areas of cost and risk associated with running an unnecessarily high PUE:

  1. Wasted power: 1 MW for a year at $0.08 per kWh equals roughly $700K.

With a facility drawing 2 MW at a PUE of 2.0, you’re wasting $560K per annum versus a 1.2 PUE data center with the same IT load (80% of $700K = $560K).

  2. Wasted power and cooling equipment: the gear to support 1 MW is roughly $3 million (assuming generator and UPS). Costs potentially increase with a higher “TIER” rating.

If you’re running at a PUE of 2.0 on a 1 MW IT load environment, you’re spending roughly $1.6 million extra in capital in a TIER II-type design.

  3. Total Cost of Ownership (TCO): the extra power delivered and the extra power equipment come with overhead and replacement costs, i.e. ongoing support of UPS, transformers, PDUs, and generators, combined with any special costs associated with substation construction, etc.


All of the above costs can easily add up to more than $1.2 million a year in wasted energy and maintenance bills. CapEx losses depend on depreciation cycles but would average approximately $300K per year, for a total wasted spend of roughly $1.5 million per year for a data center with 1 MW of IT load. With each additional MW of IT load, you’ll lose another $1.5 million per year.
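The wasted-power dollar figure above can be reproduced with a short sketch. The $0.08/kWh rate and the PUE values come from the post; everything else is straight arithmetic:

```python
# Annual energy cost = IT load (kW) * PUE * hours * rate per kWh.
RATE_PER_KWH = 0.08  # the post's assumed electricity price
HOURS_PER_YEAR = 8760

def annual_power_cost(it_load_kw, pue):
    return it_load_kw * pue * HOURS_PER_YEAR * RATE_PER_KWH

# 1 MW (1,000 kW) of IT load at PUE 2.0 vs the same load at PUE 1.2
wasted = annual_power_cost(1000, 2.0) - annual_power_cost(1000, 1.2)
print(f"Annual wasted energy spend: ${wasted:,.0f}")
```

The difference comes out to about $560K per year, matching the figure in cost item 1.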

Related Sustainability Blog: To meet accelerating DC growth server cabinet density must increase

Cost number 4 gets its own section

Besides the more obvious costs listed above, there is the financial, PR, and environmental loss associated with your business’s carbon emissions. It’s difficult to put a specific value on bad press, but generally speaking, unless you’re in show business, bad press is bad. Assuming your data center gets its energy from coal-fired power plants, you would be generating roughly 110 million pounds of carbon dioxide for every megawatt-year. In the example above, where you have just 1 MW of IT load but are running at a PUE of 2.0 on coal, that equates to roughly 90 million pounds of unnecessary CO2 emissions every year.
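Taking the post's ~110-million-pounds-per-megawatt-year figure for coal as given, the 90-million-pound estimate follows directly from the excess draw:

```python
# Unnecessary emissions = excess facility draw * emissions per MW-year.
LBS_CO2_PER_MW_YEAR_COAL = 110_000_000  # the post's figure for coal generation

it_load_mw = 1.0
wasted_mw = it_load_mw * (2.0 - 1.2)  # extra draw at PUE 2.0 vs PUE 1.2
unnecessary_lbs = wasted_mw * LBS_CO2_PER_MW_YEAR_COAL

print(f"Unnecessary CO2: {unnecessary_lbs:,.0f} lbs/year")
```

That works out to 88 million pounds per year, i.e. the "roughly 90 million" in the text.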

What can you do?

The tasks associated with getting your PUE down to 1.2 from 2.0 or higher and moving to a new energy blend are difficult, potentially costly, and time consuming. Dozens of blogs have been written on the subject of PUE improvement and/or energy-saving tips. Your ability to succeed at changing either your PUE or your energy blend will depend on many variables, including: the size of your business; the country, state, county, or city you’re in; the age of the DC; and the capability of the facility. Weigh any investment in improving an existing facility against the real-world possibility that it has no more than 2-3 years of useful life left once you account for business and/or technology change.

Related Blog:  Five Reasons Why Server Cabinet Power Density is Going Up!

There are no cost-free options for reducing your PUE, lowering your costs, and improving your Corporate Sustainability Report (CSR). However, there are efficient, long-term options for giving your company an immediate and positive shot in the arm. The simple answer is: move your IT gear to a data center where the problems of PUE and clean energy have already been solved.

Related Blog: Consider Connectivity Compulsory (Another area of major savings opportunities at SUPERNAP)

There are other sustainable, low-PUE colocation facilities out there, but none in the world can provide the capabilities and performance that Switch SUPERNAP can. With industry-leading PUE numbers and a continued push to 100% green energy, there are few things you could do for your company that would reduce overhead and lower emissions with the same immediate, positive impact as moving into a SUPERNAP.

Supporting and/or Related Blogs:

Data Center: Have you Fallen Behind Already?

Data Center Efficiency, We’re Done Here

How much carbon dioxide is produced when different fuels are burned

Think Strategically about Data Centers

Uptime Institute Survey of Industry wide PUE

US Energy Information Administration

Managing Your PUE


IT Is More Relevant Than Ever – At Least It Can Be

03 Aug

I just wrapped up a #CIOitk (In The Know) panel with Bob Egan (@bobegan), Stuart Appley (@sappley) and Tim Crawford (@tcrawford) on the topic of whether IT is moving towards irrelevance. We covered impacts, conditions, and potential strategies in our wide-ranging discussion, but the common agreement was that the role of IT in the business is more necessary than ever. The real issue is whether the IT org you’re in has a future, not whether IT itself has a future. This opinion that IT remains as relevant as ever was supported by most of the folks following the conversation on Twitter. I highly suggest poking through the tweets of the cast and audience using the hashtag #cioitk, as there were some really bright folks contributing.

An organizational perspective

Where do we target change in the organization? The first point made by several of us on the panel was that the target for change isn’t just the office of the CIO. So, if it’s not all the CIO’s fault, who else should we be targeting? Where does the buck stop? That’s right: with the CEO and his/her C-suite partners. You hire for what you think needs to be done. If the CEO/CFO are hiring someone to keep the lights on or manage IT costs, then you’re going to get a manager, not an innovator. How would you hire someone if the requirement was to take your money and make something big happen with it? How would you hire a CEO for a new startup? These are the types of positioning questions we need to be asking when we’re scouring the land for our next CIO.

Related blog: (Misconceptions of the CIO role)

Cost & Risk Management vs. Speed & Innovation

There’s no doubt that cost and risk are areas of concern for a CIO, but truthfully they are for every senior leader. Would you put risk or cost management as your top criterion for hiring a new CMO? IT holds the keys to making your business go faster and to applying innovation across all functional areas. IT is by its very nature innovative. One could argue that many of the most successful modern companies are nothing more than giant IT organizations constantly innovating new products and services (Facebook, eBay, Microsoft, Intuit, Google, Amazon, etc.). Why would you take your most important weapon for differentiation (IT) and hobble it with a “make sure you don’t break anything or cost too much” mantra? Every company today should be looking for speed, agility, and innovation; IT can and should be helping in all those areas.

Related blog: (It’s about Role Change Not Role Elimination)

Shake it up!

Tear apart every job description you’ve ever created or purchased for the role of CIO, and build one as if you’re starting a company and want someone to lead it: a visionary who will constantly innovate its products and services in support of the rest of the company. If you really need someone to manage risk and cost, hire that person, but don’t assume they can or should also be the CIO.

Shout out to @amyhermes for her help with graphics, content and outreach for this panel! 

Related blog: (Innovation vs. Cost Center)


Data Center Trends – 5 Reasons Why Server Cabinet Power Density is Going UP!

27 Jul
Switch High Density Racks

Server Cabinets in the SUPERNAP with 28kW delivered

Ready or not (and most aren’t), power density in the rack is going up, and not incrementally over ten years, but dramatically over three to five years.  Can your internal data center(s) support that? Can your partners support it? My rough estimate tells me that if an average of 10kW per rack was required, fewer than 10% of data centers in operation today could handle it.

Why the Screaming?

There is a confluence of events driving infrastructure design towards more density, and I don’t see anything reversing that trend anytime soon.

Reason 1: Converged infrastructure (more gear in a smaller space)

  • Right now, converged infrastructure sales are going through the roof. Whether you’re buying from VCE, HP, Dell, NetApp, Nutanix or SimpliVity, the story is the same…more power Scotty!
  • UCS Chassis – 2.0 kW – 6 chassis per cabinet = 12kW
  • HP Matrix Chassis – 6.0 kW – 4 chassis per cabinet = 24kW
  • Neither of the above examples is configured at maximum potential power consumption.
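The per-rack figures in the bullets above are just per-chassis draw times chassis count (the wattages are the post's examples, not vendor maximums):

```python
# Per-rack power = per-chassis draw (kW) * chassis per rack.
racks = {
    "UCS":       {"kw_per_chassis": 2.0, "chassis_per_rack": 6},
    "HP Matrix": {"kw_per_chassis": 6.0, "chassis_per_rack": 4},
}

totals = {name: r["kw_per_chassis"] * r["chassis_per_rack"]
          for name, r in racks.items()}
for name, kw in totals.items():
    print(f"{name}: {kw:.0f} kW per rack")
```

Both configurations already land well above the 10kW-per-rack threshold that, per the estimate above, fewer than 10% of today's data centers can handle.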

Related blog “Future Proofing”;

Reason 2: Open Compute and/or HPC-Oriented Infrastructure designs

  • Modern infrastructure is taking on more of the characteristics of High Performance Compute designs. There are a number of reasons for this change towards HPC, but one of the main drivers is the increasing use of big data and reliance on big data as part of critical infrastructure/application environments. Simply put, there are real performance gains associated with putting more compute in smaller spaces right up next to your disk farms.

Reason 3: Cost & Space – Sustainability

  • I’ve always liked density because I’m an efficiency guy. There are some obvious benefits to having your gear packed into smaller spaces: better overall performance (efficiency) and less floor space, cabling, and racks (efficiency and cost). There’s also the long-term sustainability angle (blog). Estimates being made today suggest cloud and IoT will drive our global IT power consumption footprint from 3% to as much as 20% of total power generated. If we continue to populate the world with low-density data centers, you’ll soon be able to walk on top of them all the way across the country.

Reason 4: Containers

  • A key push behind the use of containers, after ease of deployment, etc., is efficient use of infrastructure. A big part of this efficiency comes from much higher average utilization of the CPU. This higher utilization will inevitably result in greater power use per rack unit.

Reason 5: Moore’s Law slowdown

  • It might seem weird to think that a slowdown in Moore’s law would equate to higher power density, but I believe it does for at least two reasons. Disk and networking continue to advance at Moore’s-law pace or faster. This advance means: 1) more CPUs/servers will be needed per square foot to support bigger and faster disks and fatter, speedier networks, and 2) we’re less likely to see the same decreases in power draw on the CPU, as a result of the reduced iterations of x86 chips.

Related blog “Not Your Daddy’s Data Center”:

What am I getting at?

I expect that over the next three to five years many data center owners will discover that their facilities won’t support what their IT staffs are demanding; and even worse, many of the providers you speak to won’t be able to support your needs either. Here at the SUPERNAP, we’re seeing the demand for density increase across a large percentage of our customers. We’re already running thousands of racks at over 15kW delivered, and many that are much denser, with the trend accelerating.

Professionally Copy Edited by @Imkestinamarie (Kestine Thiele)


Cloud Native Computing Foundation – My Short Take

21 Jul

If you haven’t already seen the Cloud Native Computing Foundation (CNCF) press release from today, you should definitely take a quick look.  I see this as a big step in the right direction for open-source and cloud in general.

What’s different about the CNCF Movement

Application containers and cloud-designed applications are two of the hottest areas in technology today, and for good reason. Application containers are likely to be to modern infrastructure what VMware is to legacy infrastructure, only in half the time. The move to develop standards and create a foundation around containers is actually already ahead of where we were at the same stage with virtual machines. Foundations like CNCF and standards bodies bode well for containers and developers to be in five years where it took virtualization ten. In fact, in some areas of the ecosystem I’m willing to bet that three years from now the management and utilization of application containers will be ahead of where virtualization was after ten years.

100 Containers for every CPU

The future looks bright for containers and cloud-designed applications, and frankly it’s just in time. In my recent blog, “Assume The Way You’re Doing IT Today Won’t Scale?”, I talk about the drivers that require IT teams to reconsider how soon they must be ready to live DevOps and design infrastructure and applications for agility. The tools in the supporting and rapidly growing ecosystem (Kubernetes, Docker, CoreOS, Red Hat, Google, Mesosphere, Intel, SUPERNAP, Joyent and more) will help position IT teams to leverage application containers for critical production environments within a few short years.

Those enterprise groups that want to continue to own all or part of their infrastructure will need to manage against the demands of running infrastructure at much higher utilization along with dramatically decreased deployment times. The vast majority of highly virtualized environments today are running at below 40% CPU/system utilization. In application container environments we’re likely to see utilization spike to more like 75-80%. Higher utilization likely means more demands on I/O, memory, and network. The ability to effectively manage more “calls” from each CPU to the infrastructure will be key to broad, enterprise-ready deployment of application containers.

Containers aren’t going away

I feel it’s pretty safe to say that application containers are not a passing fad. There are still many hurdles before they become production in the way a complex enterprise needs them to be, but the right “Foundation” and standards are being put in place to help ensure success.  It will be interesting to see the progression of this old/new space and to see how existing highly invested cloud providers adapt to make the most of the new capabilities to leverage resources more effectively and efficiently than ever.


Data Center Trends – Have you fallen behind already?

07 Jul

In my recent blog (Assume the way you’re doing IT today won’t scale), I talked at length about the two macro drivers that I see having a significant impact on the IT infrastructure we build, buy, or lease. These macro drivers for infrastructure & applications will likely manifest themselves through one or more of the following changes:

Change: More of your business will be outwardly facing

Today the average enterprise has roughly 5-10% of its applications facing externally. The other 90%-plus provide service to internal employees and some number of contractors and suppliers. The usage characteristics of the 90% are generally well understood and mostly fall within a demand variance of plus or minus 20% per month from an elasticity point of view. Many of these applications support global partners and employees; additionally, the core customer group is fixed and is usually housed close to the applications (under 50 ms of latency).

For the sake of simplicity let’s call the application sets:

  • “the 90%”
  • “the 10%”


The aforementioned application breakdown information isn’t new, and many IT folks already understand it to be true. So why did I think it was important to call out? Keep reading.

I expect that almost every organization will see a dramatic change in the distribution of applications between the 90% and the 10% categories. Here’s why:

  • Virtually every company will be finding a way to leverage IoT. The use of IoT will generate new business to business opportunities at a scale that most of us can’t even begin to understand today.
  • Cloud based applications will continue to drive more features and functions out to the customer. That’s right; you’ll have people using your applications from the myriad devices in their homes and environments that they interact with on a daily basis. More of your marketing, product development, and customer acquisition/relationship strategies will come directly from interaction with your customers. Many companies are already using tools to get closer to their customers but expect this trend to accelerate.


The above changes to the 90/10 distribution will likely have a profound effect on how each application is designed, provisioned, and supported. It’s possible that the total number of applications used by internal staff won’t change significantly. However, you should expect that the total number of applications in your portfolio will increase and that your distribution will start moving towards a 70/30 breakdown. This new trend means:

  • The use characteristics of 30% of your applications will be driven by the vagaries of a very large customer population, which means geo distribution and elasticity likely become more critical.
  • A larger portion of the 70% will likely need to take on the characteristics of cloud designs as the lines between internal employees, partners, suppliers, and customers blur even more than they already have.


Change: Increased use of IT for competitive advantage

The competitive advantage movement should not be underestimated. I fully expect that within 3 to 5 years, all but a few businesses and organizations will have come to realize that leveraging IT more effectively is their key to success, and for that matter, their very survival.

The increased use of IT for competitive advantage is the typical “can’t put the genie back in the bottle” scenario. As more companies learn to forego their legacy shackles and embrace big data, IoT, and cloud based infrastructure to increase innovation, drive customer connections, and leverage speed, everyone will want and, in fact, need to be part of the change.

What does this all mean?

We need to start thinking of our IT organizations more like a combination of a racing team and a massive logistics center. The racing team delivers speed, services, and innovation; the logistics center ensures you can identify, obtain, and put the best resources available to use in real time anywhere in the world. You can’t enable the type of IT team described above by sprucing up the company data center and/or hiring a few more sysadmins and network engineers. You definitely can’t enable the right IT strategy by attempting to extend the legacy applications and strategies of the past.

Related blog: Why we Value IT Incorrectly – Innovation vs. Cost

Think services, supply chain, ease of acquisition, and customer impact potential. Buying IT solutions for cost or because they fit with your legacy suppliers and technologies just isn’t going to cut it. Consider your data center(s) as an example. The data center strategy & associated partners will become a future proofing resource that allows you to adopt or distribute your IT workloads anywhere, with any design at any scale. Leveraging an ecosystem (blog) type supplier that provides broad access to key solutions and services while enabling a future proofed platform for growth is imperative.

Professionally copy edited by @imkestinamarie (Kestine Thiele)


The CIO Discussion – It’s about role change not role elimination

29 Jun

We’ve all heard, “IT the department of No,” or we’ve read Dilbert comics that joke about the problems with IT departments and specifically the CIO.  There are also the ever fashionable comments like, “the role of CIO will be moved to the head of Marketing or the CMO position.”  We can all buy cloud or SaaS applications with a credit card, so why do we still need a CIO to get in the way of progress?

Are we really that short-sighted?

Why would it be short-sighted to get rid of the CIO? Let’s take a short trip back to 1997 and look at the average department to see what was going on relative to IT. What do you see (besides a bunch of frustrated users with terrible applications)? In 1997, we were still very early in the days of the internet relative to it being a commonly used resource for companies beyond a “home” page. However, most departments saw some potential in having a website, and as such, they were paying external contractors $10K-plus to build them a page. It didn’t matter what the page would be used for; we just had to have one. There was also the rampant use of solutions like MS Access. There would be a key person in the marketing or sales department who would put MS Access on their PC and build a database. The web pages, Access DBs, and other one-off applications were proliferating like wildfire, and there was little if any governance on cost, security, data sharing, application integration, etc. The lack of governance meant that many of the above-mentioned solutions became one or all of the following: a support nightmare, a critical resource that now has to be connected to the ERP system, or a failure that just ended up being a waste of cash and time.

Fast forward 18 years to 2015. What’s different? The applications have gotten a little better, and most web work can be done very cheaply. The work is usually managed centrally for a company or large line of business; however, now we have SaaS applications that are being designed, managed, secured, and delivered from outside the corporate network. You have the ability to acquire cloud resources in minutes with a credit card, where you can then install an application and/or store data. Now tell me, what’s the real difference between 1997 and 2015? There really isn’t a difference, except that the person with the credit card is much more dangerous to the organization than they were in 1997, because they need less IT understanding to get started. In some cases, we can call these acquisitions shadow IT; in others, they are misguided if well-intentioned efforts to move more quickly than the CIO’s budget or his/her attitude will allow.

Why the CIO position is as relevant as ever

Without governance, the level of risk that enterprises carry when everyone is buying their own IT solutions is very high.  Each time an employee buys access to a SaaS application, they are committing the company and the company’s secrets. Each time another person buys a SaaS application that does the same thing but from another vendor, you have missed opportunity. You also have the risk of wasted licensing, little to no data integration, and one or more of these applications going from useful to critical without the necessary protections, planning, and support. The risk with buying IaaS and building applications outside the purview of the IT group is just as bad as with SaaS. What happens when Corporate data is shipped offsite without appropriate governance? Who knows the data is outside the corporate network, and who will manage its lifecycle, sovereignty, security, and recovery? What about a person with the experience to validate the qualifications of a supplier? These risks and many more are risks that need to be mitigated. Are we to assume that the CMO will have knowledge or interest? The only position really qualified to attempt to support the company as a whole in this new shoot from the hip world of IT is a strong CIO position.

However, a strong CIO may not be enough (related blog post: Why we Value IT Incorrectly )

Even with a strong CIO, there will be issues if there isn’t broad support from the C-suite. It’s also critical that the CIO be innovation-oriented and secure in accepting that not all IT-oriented solutions need to come directly from his/her team.

There is no comprehensive management suite available today that I know of that would allow a CIO to easily absorb new solutions acquired from non-IT groups. Personally, I don’t know what the right type of tool would be. It would seem that a great tool might be a beautiful control panel that would allow you to just drag and drop new applications and sort through the assorted data sharing, policy, and security issues. However, as an IT person who has historically shied away from the “one tool fixes all problems approach,” I have my concerns about the aforementioned. Maybe it’s more of a logical recording of what’s going on. I don’t know what an application would look like that supports this “logical recording” strategy; it’s just a thought.

In the end, it always boils down to “you can’t manage what you can’t measure.” I actually see the space of IT governance as a real disruption opportunity, because the risks are real and will only get worse before they get better.

Strength isn’t something you tell people you have (related blog: Are you one of the smart ones?)

A strong CIO won’t be strong because they tell everyone, “I’m in control, don’t worry. Any and all decisions that even remotely look like IT decisions will be made by me.” Instead, a strong CIO builds relationships, takes a leadership position through innovation, and creates an understanding (through action) that IT is part of the business, not a bolt-on. I’ve said it before and I’ll say it again: IT wasn’t created to reduce the cost of IT. Until you successfully change the mindset of your business from IT as a cost center to IT as a creative, business-linked innovation engine, you’re trying to take your canoe upriver without a paddle.

The CIO needs to set the table and serve the food

The CIO must work directly with all the other line-of-business owners to create a vision for how IT can integrate and innovate from within. Help them understand that without a strategic approach to IT solution adoption and governance, all you’re doing is herding cats through a field of toys (table). If you successfully capture the risks in the light of opportunity and demonstrate your ability to provide vision, not just risk and cost management, you might be pleasantly surprised by the response. Introduce the new IT as one that often provides the ideas and, when it doesn’t, supplies the grease that helps ensure non-IT-originated ideas are successful (food).


Professionally Copy Edited by @imkestinamarie (Kestine Thiele) 


BiModal IT – Crashed Right Out Of The Gate!

25 Jun

Crash out of the gate

The idea of BiModal IT is that it would create an IT organization better able to support and introduce solutions in both legacy environments and the more cloud- or agility-oriented part of IT. It’s an interesting, if poorly thought out, academic-versus-real-world attempt at a solution by Gartner. However, I’m saying (as are others) that BiModal IT failed before it ever got out of the gate.

Don’t ever do this to yourself!

If there’s one mistake you should never make as a leader, it’s forgetting the human part of the equation. Every change you make has an impact on humans, and how they respond to that change will determine its long-term success or failure, not your quick-witted one-liners at the monthly coffee talk.

Where and how does the “human equation” fit in to BiModal IT?

Picture yourself as a COBOL programmer sitting at the conference room table in 1999, and the discussion revolves around what language most applications will use going forward. At the end of the meeting your boss looks at you and the other COBOL programmers and says, “You folks are really important, and we need you to stay focused on supporting all those old apps that we’re trying to get rid of.” How does that make you feel? Do you sense that you’re an important part of the future of the organization and the company, or are you an afterthought? To make things worse, if you have half a brain you’re thinking to yourself, “What job will I get when the last COBOL application in my company dies? Won’t other companies be getting rid of their COBOL-based apps as well?”

It’s not easy, which is why it’s probably important

BiModal IT is the age-old, quick-fire executive answer of solving problems with layoffs and managing people like they’re chaff. This is the answer short-sighted executives use because they can’t be bothered to think long term, to apply loyalty, or to consider the “human equation.” The right path, as they say, is often the path less traveled. There are myriad ways to make your move to a modern, agile IT organization that don’t involve the wholesale slaughter of half or more of your existing team. Will there be struggles? Absolutely. Will there be some folks who won’t make it through the change? Definitely. Will getting to the other side by doing it the right way be worth it? Hell yes.

Additional support from some experts for the idea that BiModal IT is wrong:

Jeff Sussna @jeffsussna

it’s wrong because it assumes “core” and “edge” don’t need to interact with one another

Tim Crawford @tcrawford

@S_dF @mthiele10 @efeatherston @gartner @jeffsussna (3) It creates a two-class culture within an org (IT) that needs more cohesion. #ciochat

Tim Crawford @tcrawford

@S_dF @mthiele10 @efeatherston @gartner @jeffsussna (2) It limits the overall speed as both arms are needed today…and at speed. #ciochat

Jeff Sussna @jeffsussna

that assumption ignores the entire path of 21st-century digital business

The discussion could go on and on

Suffice it to say, attempting to create a modern, agile, innovative culture by creating silos and marginalizing a big part of your team is not the path to success. Take the time to consider how you can foster a pilot that includes a cross section of IT staff (hand-selected as potential change agents) who can be considered the genesis of a new “model” organization. Let this new group expand naturally to include more applications and cross-functional team members until everyone who can fit is included and those who can’t have been shepherded on to different, if not greener, pastures.


Assume The Way You’re Doing IT Today Won’t Scale

15 Jun

I gave a talk at Cloud Expo NYC on Wednesday, June 10th, on the subject of DevOps, with my focus on the organizational considerations required to make DevOps a reality. During my talk I covered many of the things any company must consider as it looks to become more agile, but one point stuck out to many in the audience: my comment that you should “assume that what you’re doing in IT today won’t scale.”

Scale – Isn’t that one of the reasons many of us are adopting cloud?

What’s different? Why is it all of a sudden a reality that the methods, organizations, and solutions we use in IT today won’t scale? There are two major factors that I believe are approaching our proverbial doorsteps with ever greater speed.

  • Cloud and its impact when considering Jevons Paradox
  • IoT and the million and one new businesses and services that will be generated by XX billions of new internet connected devices


How is the above new? It isn’t, or at least it shouldn’t be. I think the issue for many of us is that we have continued to consider our companies somehow immune to the need for tools and capabilities that mirror what large web-scale orgs like Google (GOOG), Facebook (FB), Microsoft (MSFT), etc. benefit from. Here’s the news flash: if what I’m suggesting in this blog is even close to accurate, we’ve been wrong, seriously wrong. In relatively short order (2-3 years), I expect that any company attempting to remain competitive will need to leverage many of the same tools, orgs, and strategies to manage scale, just as the big boys and girls do.

At SUPERNAP we’re seeing evidence of this change in demand almost every day. We’re literally building as fast as we can to create the scale and technology ecosystem that the future (here today) is demanding.

Cloud & IoT it’s here, and it’s coming for you! (Consider this yelling from the rooftops!)

Several times over the last five years I’ve written about the potential explosion of IT as a result of the democratization of access to IT and the lower relative cost of cloud. I’m not the only one; in the past few years many of the leading thinkers in the IT and cloud space have written similar prognostications on growth as a result of ever-increasing cloud adoption. New externally facing applications and services will blossom and expand at an ever-increasing pace, while internal innovation will require a whole new method for delivering quickly and at scale. As we use cloud to solve more problems and make IT more consumable, yep, you guessed it: more IT will be consumed. In a blog posted June 12th, Bernard Golden talks about the recent acceleration in server purchases. Until recently many pundits believed we would see an ongoing drop in server purchases as a result of more efficient use of resources via cloud infrastructure; instead, we’re beginning to see the effects of Jevons Paradox.
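Jevons Paradox can be made concrete with a toy calculation. The sketch below assumes a constant-elasticity demand curve and invented numbers (the elasticity of 1.5, the baseline demand, and the 50% cost cut are all mine for illustration, not figures from any market data); it simply shows that when efficiency makes each unit of compute cheaper and demand is elastic enough, total consumption rises rather than falls.

```python
# Toy illustration of Jevons Paradox: cheaper units, more total consumption.
# All numbers are invented for the sketch.

def total_consumption(unit_cost, elasticity=1.5,
                      baseline_demand=1_000_000, baseline_cost=1.0):
    """Constant-elasticity demand: demand grows as the unit cost falls."""
    return baseline_demand * (baseline_cost / unit_cost) ** elasticity

before = total_consumption(unit_cost=1.0)  # pre-cloud unit pricing
after = total_consumption(unit_cost=0.5)   # cloud halves the unit cost

# Each unit is half price, yet total consumption grows 2^1.5 ≈ 2.83x,
# so total spend on compute actually rises.
print(f"consumption grew {after / before:.2f}x")
```

With any elasticity above 1.0, the efficiency gain is swamped by new demand, which is exactly the pattern the server-purchase numbers hint at.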

IoT is a different problem, but will likely have similar impacts on global demand for IT services. Without going into in-depth use case scenarios, just put on your thinking cap for a minute and consider what 20-50 billion new devices being added to the net by 2020 means in the way of potential new services. Consider for a second the resources required just to support XX billion new devices. If you run the numbers on device types versus server requirements, even the most conservative math supports as many as 100 million or more new servers. That’s right, you read that right: 100 million plus by 2020. What happens when we (as we inevitably will) find new and unique ways to leverage the data and capabilities all those new devices, in their myriad combinations, will create?
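The back-of-the-envelope math is easy to check. The sketch below uses the 20-50 billion device projection from the text; the 200-devices-per-server ratio is my own assumption for illustration (real ratios vary enormously by workload), but even at that conservative figure the low end of the device range alone implies 100 million servers.

```python
# Back-of-the-envelope check on the "100 million plus servers" claim.
# devices_per_server is an assumed ratio, not a figure from the post.

devices_low, devices_high = 20e9, 50e9  # projected new IoT devices by 2020
devices_per_server = 200                # assumption: ~200 devices per server

servers_low = devices_low / devices_per_server
servers_high = devices_high / devices_per_server

# 20 billion devices at 200 per server is already 100 million servers;
# 50 billion pushes the figure to 250 million.
print(f"{servers_low:,.0f} to {servers_high:,.0f} new servers")
```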

Back to where I started

Scale, agility, organization, modeling, analytics, scale, and oh yeah, more agility are what we will likely want most in every company and industry, regardless of your current size or location. So if you think you’ve got time to contemplate your IT needs over a couple of years’ worth of coffee, I’m saying you’re sadly mistaken. You need to be actively investigating the options for getting to truly agile and scalable IT immediately, and putting aggressive plans in place for how you can, as gracefully as possible, make the move. There is still time, if you have the guts, heart, and patience to herd your executive team and your IT staff in the right direction. Maybe my next blog will attempt to outline a few path options.


Are you one of the smart ones – CIO 1.broken through disruption to CIO 2.0

20 May

I just finished reading a fairly good article on the risks CIOs in the UK are feeling as a result of the broader rush to adopt cloud-based services in their businesses. Interestingly enough, it’s not the traditional concern of being disintermediated; rather, it’s about cloud creating connectivity issues that they can’t effectively plan for.

Every Problem is really an Opportunity in Waiting

In the article, the author Bill Boyle says that “almost 48 percent of UK CIOs say that the lack of control over cloud services has made it difficult to predict bandwidth requirements and manage their organization’s network effectively – 76 percent of CIOs are concerned that their network will prevent them from meeting business objectives. Furthermore, these CIOs are afraid that these problems are already creating strain in their relationships with the CEO, CFO and CMO.”

There’s no doubt that a failure to manage infrastructure requirements effectively can and eventually will have severe consequences to the business, so on that point I agree with the author.

Why I think this is more of an opportunity than a problem              

Imagine no CIO; imagine no religion; we all just get along. Yeah, could happen. However, in the case of “no CIO” we would be living in chaos, and we certainly wouldn’t be getting along with each other. Here’s a short scenario of no CIO:

Each organization, in fact potentially each person in a company, is buying their own IT resources. There’s no framework for data integration, and no department-wide or cross-functional value derived from data. Each application, each PC, laptop, and phone is an open access point to the rest of the world. Consultants run rampant helping you buy more, while not sharing what they’re doing with other consultants, let alone the executive committee. Costs balloon across the entire company because no one is managing licensing, scale, vendor selection, or cross-functional projects. Of course there’s no disaster avoidance or recovery plan, and likely no strategic plan for technology adoption; there goes any hope of using IT as a differentiator. Need I go on? This scenario is already a nightmare of gigantic proportions.

The very fact that cloud is impacting the network is a perfect reason for the CIO to take the bull by the horns (the fiber by the connector) and make lemonade out of lemons. Turn that frown upside down, as it were. There are myriad ways to get in front of this issue, but get in front of it you must. This is your opportunity to demonstrate that having a good CIO is the difference between functioning and not, between innovating and dying on the vine. Use the bully pulpit of your position to create process around the adoption, management, and measurement of technology use. Demonstrate how IT can provide additional value on top of any canned solutions you buy by helping with negotiations, data integration, and project planning. The opportunities are endless. Whenever I took on a new leadership role in IT I always thought, I’m so glad the person before me made it so easy for me to be successful. This is your opportunity to be successful again, by effectively positioning yourself as the savior, not the roadkill.

Steps to take

  • Understand and capture the risks, and report on them to management. Be transparent and communicate regularly, ideally in face-to-face conversations. Keep in mind that this isn’t an opportunity to complain about the risks, but rather to discuss how you plan to mitigate them and derive value from the results.
  • Create a roadmap for providing leadership and governance, not control and restrictions.
  • Fire the IT staff that won’t change and hire people who will embrace owning a services-based organization, not a technology-based one. This means you have to change as well; appropriate training and reward systems are key.
  • Identify a few areas for skunk works: a handful of your resources working on projects meant to provide true innovation and consequent differentiation to your business.
  • Identify and implement a handful of tools that allow for improved visibility into new solutions being introduced by non-IT staff.
  • Think shepherd, not sheriff.
  • Find partners that can reduce your dependence on fixed assets like data centers and a hard-to-change connectivity map.
  • Think innovation, not cost center.

If you can do the majority of the above, you’ll have positioned yourself as the department everyone needs and looks to for advice, not the one everyone makes jokes about. This really is about leadership; it’s not about technology, and it’s not about schedules or controls. You might have to change the minds of the executive staff regarding where IT fits, but it will help if you’re upfront about the current situation, transparent about how you’ll make the change, and can demonstrate what the future will look like.
