Colocation Customer Service & Web Page Load Speeds – They're Related, Who Knew?

24 Nov


To offer an exceptional service is to consider not the amount of work or the cost, but rather the success of those whom you serve, for through their success comes your own. That sentiment is critical to offering any service, but it is especially important for complex, critical services that can literally affect your life or the future of your business. You could argue that the success of your data center service is usually not tied directly to your life, but there can be no doubt about its connection to the success of your business.

Considering the Customer First – A Data Center Perspective

Why is considering the customer first so important? It's my belief that attempting to satisfy your financial analysts or external influencers is counter to providing an excellent service. Pleasing your bank before you please the customer is like defining a project without understanding the deliverables. While this customer-first approach is likely to work in every business, it is especially important in the data center business, where a lack of focus can mean an enterprise going out of business overnight.

Many data center providers are getting out of the business of operating data centers because it's not a core competency. Building a data center business isn't an overnight-success type of play. The data center business is hard; it takes focus, day-in and day-out operational excellence, and a solid understanding of how the data center creates business opportunity or, conversely, can generate risk.

When you're in the market for new or replacement data center capacity, you should take a moment to think about your needs outside the box of "power & cooling". Consider the fact that you are likely to sign a contract of three years or longer and that your capabilities (as a company) will be tied to this new partner's ability to execute. Customer service is much like web site performance: a quarter-second longer page load doesn't seem like much, but the cumulative impact is more wasted work done on the back end and a reduced level of productivity on the front end. In Google's case they found that just a small added delay (200 ms) meant a dramatic reduction in search page use. The simple truth is that every minute you're not finished with a project is a minute of lost opportunity, however you measure it.
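
To make the cumulative impact concrete, here's a back-of-the-envelope sketch; the per-load delay and daily page-view count are illustrative numbers, not measurements:

```python
def cumulative_delay_hours(extra_seconds_per_load: float, loads_per_day: int, days: int = 365) -> float:
    """Total user time lost per year to a small per-page-load delay."""
    return extra_seconds_per_load * loads_per_day * days / 3600

# A quarter-second of extra wait on 10,000 page loads per day adds up to
# roughly 250 hours of lost time per year.
print(f"{cumulative_delay_hours(0.25, 10_000):,.0f} hours/year")
```

Small per-event waste compounds into weeks of lost productivity; the same logic applies to slow ticket responses from a colocation provider.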

Related blog: Speed Matters

Customer service is tied to dozens of individual activities in the data center from the initial on-boarding process to the quick and effective response to ticket requests. Equally important are the higher end customer capabilities associated with the provider being ready for your business needs when you’re ready to execute.

Related Blog: Consider Connectivity Compulsory

The Difference

It’s becoming more and more obvious that enterprises shouldn’t be building their own facilities. However, as you go about reviewing the dozens of options you have for accommodating your future data center requirements, keep in mind that the effort is much more than just selecting an appropriately located warehouse via a real estate broker. Consider the entire service ownership of a data center and the impact a poorly run facility with inadequate customer service might have on your ability to succeed.


There's obviously more than customer service required to provide a successful data center service, but if you can't get great customer service, it doesn't matter what other capabilities the provider offers. Once you've identified a supplier (or suppliers) that might work as a true partner, you can then measure them against the other requirements for successfully future-proofing your decision and doing your best to guarantee that the data center is an enabler of and participant in your growth, not an anchor.

Related Blogs:

Data Center Future Proofing

Data Center Growth Enabler


Your Future is Globally Distributed Cloud Native Applications – or is it?

10 Nov
Discussion Slide – Drivers for IT Strategy

This graphic is just a simple way to look at application distribution in a Netflix-type business vs. the average enterprise. This distribution will most often dictate where you prioritize your efforts, how staffing will be shared, and what some of the drivers for change might be.

There is a ton of discussion these days about whether there is an on-going need for reliability in IT hardware (infrastructure) up through the data center. It seems that many of us are trumpeting the notion that not only will all applications be in the cloud next week, but they will be designed to work effectively in a globally geo-distributed design.

The idea – Resiliency in software allows you to reduce complexity and cost in the hardware

An application appropriately designed as cloud native for geo-distribution (e.g., Netflix, Google Apps, etc.) can work very well in a globally distributed format. However, most of the applications housed in an enterprise data center don't fit the same profile. The goal of supporting global customers with applications designed for distribution and resiliency is a good one. However, being geo-distributed in and of itself shouldn't be the goal. What's important to your business is how an application delivers specific business and economic benefits.

There is no “One” application

Within the enterprise you have a matrix of application types (finance, sales, marketing, engineering, inventory, etc.) in combination with the number of customers per application and the location of those customers. Then you need to add to the matrix any elasticity requirements, the database type, development requirements vs. available talent, and so on and so on. The basic point here is that the vast majority of enterprises today are not single-application companies, and they likely won't be any time soon.

Each application type has a fit

The average mid-sized or larger business has from 100s to 1000s of applications. Some of these applications serve ten people, some serve the majority of the employees, and a few are externally facing, where they potentially serve millions. Some of the applications in each category will be more latency sensitive, and others will have a monolithic data structure. What I'm trying to say here is that no universal design, ownership strategy, or use characteristic exists, from a management and planning standpoint, for all of your company's applications. You cannot make assumptions about whether an application would be better in a cloud environment (any cloud environment) until you understand the application's characteristics. The matrix of characteristics that should drive decisions around what happens to an application should always start with the customer. If the customer benefits from the move to cloud, or at least isn't negatively impacted, then you have a place to start.
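
One way to operationalize that matrix is a simple scorecard. The trait names and weights below are purely illustrative, not a standard; the only rule carried over from the text is that the decision starts with customer impact:

```python
# A hypothetical cloud-fit scorecard; traits and weights are illustrative.
def cloud_fit_score(traits: dict) -> int:
    score = 0
    score += 2 if traits["external_users"] else 0     # customers benefit from reach
    score += 2 if traits["elastic_demand"] else 0     # cloud elasticity pays off
    score -= 2 if traits["latency_sensitive"] else 0  # distance hurts the customer
    score -= 2 if traits["monolithic_db"] else 0      # data won't distribute easily
    return score

payroll = {"external_users": False, "elastic_demand": False,
           "latency_sensitive": True, "monolithic_db": True}
storefront = {"external_users": True, "elastic_demand": True,
              "latency_sensitive": False, "monolithic_db": False}
print(cloud_fit_score(payroll), cloud_fit_score(storefront))  # -4 4
```

A real assessment would weigh far more characteristics (database type, elasticity range, available talent), but even a toy matrix makes the point: the same company will get opposite answers for different applications.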

Thinking software resiliency vs. hardware redundancy

Those applications that are designed for the cloud and geo-distribution should be designed to withstand basic infrastructure outages. If the application spans more than one data center, the assumption could be that any one data center could become unavailable and your customers wouldn't notice. While this "software resiliency" goal is achievable, it's not usually easy. In the case where you have applications that need to be in all locations at once and where resiliency was designed in from scratch, the investment you make in hardware redundancy could potentially be reduced. Also worth considering is your team's depth of expertise in dealing with a new "black box" application environment, whereas you might already have access to highly redundant infrastructure.
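
As a sketch of why spanning data centers is attractive on paper, composite availability across independent sites compounds quickly. The math below assumes truly independent failures and an application that can actually serve from any site, and that second assumption is the hard part:

```python
def chance_some_site_up(site_availability: float, sites: int) -> float:
    """Probability that at least one of N independent sites is serving."""
    return 1 - (1 - site_availability) ** sites

# Three independent 99.9% sites: an outage requires all three to fail at once.
print(chance_some_site_up(0.999, 3))  # effectively "nine nines"
```

The formula only holds if the software layer really can tolerate losing any one site; otherwise the weakest site's availability is what your customers experience.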

Hardware redundancy in the IT gear and the Data Center

I don't see the need for infrastructure redundancy going away anytime soon (10 years or more). There are any number of drivers making our IT infrastructure more and more critical to our daily lives and businesses, including big data as an integrated part of everyday applications and large-scale storage in general. Storage (data & information) is where the value is, not the application. If an application has major database requirements, then it's more likely that the application will be housed close to where the data resides. In the majority of cases, existing application and database designs don't allow for easy geo-distribution. The monolithic nature of data is exacerbated by metadata and/or the need to have an entire answer to a request in one place. In other words, you can't just take the entire data set, distribute it in fifths between five locations, and say we're good. In many cases, because of latency and the rate of data change (how much data is touched or manipulated by customers per minute), it's also not feasible to replicate your database to five different places in real time. So, if you can't ensure that your application will run regardless of which piece of infrastructure (server, network device, storage unit, fiber connection, or data center) goes offline, then you need to consider protecting the source to the best degree possible within your business's cost-of-downtime calculations.
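
A rough feasibility check for real-time replication illustrates the point; the churn rate, link size, and usable-bandwidth fraction below are hypothetical:

```python
def realtime_replication_feasible(change_mb_per_s: float, wan_mbps: float,
                                  sites: int, usable_fraction: float = 0.7) -> bool:
    """Each remote site needs the full change stream; assume only a fraction
    of the WAN link is realistically usable for replication traffic."""
    needed_mbps = change_mb_per_s * 8 * (sites - 1)
    return needed_mbps <= wan_mbps * usable_fraction

# 300 MB/s of database churn fanned out to four remote copies over a 10 Gb/s link:
print(realtime_replication_feasible(300, 10_000, 5))  # False: needs 9,600 Mb/s vs ~7,000 usable
```

And this ignores latency entirely; even with enough bandwidth, synchronous replication across continents adds tens of milliseconds to every write.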

Hardware vs. Software resiliency

As I've already said, I see it as highly likely that resiliency in the software layer will become more and more prevalent over time. We will get better at application design and data distribution, and this will in turn broaden the market opportunity. Hardware redundancy is much better understood and in many cases is more cost effective to implement and manage than software resiliency, meaning that, while less sexy, the obvious answer is still sometimes the best answer.

In the case of a data center, the security and protection enabled by highly redundant design combined with operational excellence mean that you can expect many years of uninterrupted service. In 15 years of operations at Switch SUPERNAP our customers have enjoyed 100% uptime. This uptime is available for all their applications, whether they are globally distributed (<5%) or more traditional in design.

Moral of the story

Don't overcomplicate your life unless the return on the investment is really obvious. There are many applications in your portfolio that might benefit from global distribution, but there are many, many more that won't benefit at all. In fact, redesigning them and introducing new risk is counterproductive at best. Listen to the pundits, but be sure to balance what they say against the realities of your environment. The answer to the question in the title is "it depends". There are compelling reasons why some applications should be designed for the cloud and globally distributed, but for the bulk of the applications in the average enterprise's portfolio there are good reasons to go slow. In closing, I'm not for or against any IT solution, whether it's a TIER IV data center or a cloud native application; I am for using the correct assumptions and effectively weighing risk/opportunity when selecting one.


Data Center Trends – Colocation & Sustainability; Can They Coexist?

28 Sep

The data center industry has made some big moves over the last two years to demonstrate a desire to create more energy-sustainable operations. Apple, Facebook, Google, ebay, Microsoft, Amazon and others have each made moves that will make a real dent in their companies' carbon emissions. The choices for going green are myriad and are demonstrated by the diversity of options selected by the aforementioned companies. However, it seems my industry (colocation) has been slower to pick up the sustainability baton.

Colocation Providers

Many colocation companies have been slow to reduce their PUE (Power Usage Effectiveness), and they've been even slower to make any serious moves into the sustainable/renewable energy market. While there are a few notable exceptions, I struggled to find more than a couple of handfuls of the approximately 1,086 colocation companies around the globe that had made any serious effort to use more renewable energy. As of December 2014 there were 3,685 individual data centers in the colocation space. Assuming an average data center size of 5 MW, these 3,685 data centers draw over 18,000 MW of power. The carbon emission estimate, assuming 70% or more of that 18,000 MW is generated from coal, would be *79,000,000 tons of CO2 every year.
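
The arithmetic behind that footnoted estimate can be reconstructed as follows. Note that the emissions factor here is back-solved to reproduce the published ~79-million-ton figure; EIA's factor for coal generation is closer to 1 metric ton per MWh, which would yield a substantially larger number:

```python
# Reconstructing the footnoted CO2 estimate under stated assumptions.
MW_TOTAL = 18_000           # ~3,685 colo data centers at ~5 MW each
HOURS_PER_YEAR = 8_760
COAL_SHARE = 0.7
TONS_PER_COAL_MWH = 0.716   # assumed/implied factor, not a measured value

coal_mwh = MW_TOTAL * HOURS_PER_YEAR * COAL_SHARE  # coal-generated MWh per year
tons_co2 = coal_mwh * TONS_PER_COAL_MWH
print(f"{tons_co2 / 1e6:.0f} million tons of CO2 per year")
```

Whatever factor you pick, the order of magnitude is the same: tens of millions of tons of CO2 per year attributable to colocation alone.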

Efficiency vs. Sustainability

Efficiency is a great place to start, and there are still thousands of colocation centers that have PUE values of over 2.0 (depending on regional climate issues, a good PUE should be 1.2 – 1.45). This means that, on average, colocation facilities are burning 30%-plus more energy than they should be to support their hosted IT equipment. This inefficiency is bad enough, but when you combine it with the dirty energy mix that most of these 3,685 data centers run on, the story only gets worse. Consider that a data center running at a PUE of 2.0 on natural gas produces less carbon output than a 1.4 PUE data center running on coal-generated energy. Combining a high PUE with a dirty energy supply just exacerbates the situation.
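
That gas-vs-coal comparison checks out with ballpark generation emissions factors (roughly 1.0 metric tons of CO2 per MWh for coal and 0.45 for natural gas; treat these as illustrative, not plant-specific):

```python
# Ballpark generation emissions factors (metric tons CO2 per MWh); illustrative only.
FACTORS = {"coal": 1.0, "natural_gas": 0.45}

def annual_tons_co2(it_load_mw: float, pue: float, fuel: str) -> float:
    """Annual emissions for a facility: IT load x PUE x hours x fuel factor."""
    total_mwh = it_load_mw * pue * 8_760
    return total_mwh * FACTORS[fuel]

inefficient_on_gas = annual_tons_co2(1, 2.0, "natural_gas")
efficient_on_coal = annual_tons_co2(1, 1.4, "coal")
print(inefficient_on_gas < efficient_on_coal)  # True: fuel mix can outweigh PUE
```

In other words, energy source and efficiency are two independent levers, and a dirty supply can erase the gains from even a very good PUE.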

Related blog: Sustainability Applies to Data Center Facility Design

Roadblocks, Real & Assumed, to Positive Change

While I can't and won't attempt to speak for every colocation provider, I can discuss some of the things I've witnessed during my time in the industry. Many colocation providers act as if they are merely recipients of customer workloads, unwilling and seemingly unable to make changes that customers aren't demanding. One simple example of this unwillingness to change is the fact that most data centers are still being built with raised floor, used as the cold-air plenum. Cold air falls and hot air rises, so why would you push cold air up? Another issue is the lack of scale that many of the providers have in any one location. This lack of scale means any individual location or small provider would struggle to force real change in how energy is sourced in their area. It's also true that a large number of existing colocation facilities still don't isolate their hot and cold spaces, which means they are mixing the air and causing unnecessary inefficiencies.

Switch SUPERNAP Makes a Bold Move on Renewable Energy

The SUPERNAP data centers have been known for their efficiency for some time (annualized PUE of 1.18). The data centers have historically been more sustainable because of the cleaner energy mix provided off the grid in Nevada, and especially in Las Vegas, where we've been based. The latest Switch SUPERNAP sustainability news, though, has made me even prouder to be a part of this organization. With efforts underway to build 100 MW of solar in Nevada starting in October 2015, our stated goal is to be G100 (100% on renewable energy). We became the first Nevada company and first data center in the country to join President Obama's climate pledge, and we're also working with elected leaders in northern Nevada to use 100% recycled effluent in Reno and partnering for new water technologies in Las Vegas and elsewhere.

The Opportunity to Make Real Change is Here

Our industry needs to come to grips with the combined impact of so many data centers on our carbon emissions and water use. In some geos we still see 100% of energy supplied by coal, and in other areas companies are running water-inefficient facilities in the middle of on-going droughts. I see tremendous growth in the data center industry continuing for some time to come; Data Center Dynamics estimates a CAGR of 9.3%.

Related Blog on Growth: Data Center Trends – Have you Fallen behind Already?

Change is needed and there’s no better time than now. I call on my fellow colocation industry folks to accept the challenge to reduce energy use through improved efficiencies and increase the adoption of cleaner and or renewable energy.



Supporting Data:

Support for the 79,000,000 tons of CO2 comes from the measures on this page, applied to the 18,000 MW drawn by over 3,600 data centers with an average coal share of 70%.


Data Center: Who Knew Saving Cost & the Planet Could be so Easy

01 Sep

How can a data center with just 1 MW of IT load hurt a company's financials, damage its reputation, and seriously but unnecessarily pollute the planet?

Related Blog: Switch Becomes first data center provider to join White House American Business Act on Climate Pledge

The average data center Power Usage Effectiveness (PUE) rating across the industry is around 1.7*. While the best data centers in the industry are running at under 1.2, the vast majority of enterprise (internal) facilities are running at a PUE of between 1.7 and 2.5. At a PUE of 2.5 the operator is using more than 2X the power needed for the same workload as a data center running at a PUE of 1.2; consider that math across multiple megawatts of power use.
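
The overhead math is worth spelling out, since PUE is simply total facility power divided by IT power:

```python
def facility_mw(it_load_mw: float, pue: float) -> float:
    """Total facility draw implied by a PUE (total power / IT power)."""
    return it_load_mw * pue

# For 1 MW of IT load, everything above 1 MW is cooling, UPS, and distribution loss.
overhead_bad = facility_mw(1, 2.5) - 1    # 1.5 MW of overhead
overhead_good = facility_mw(1, 1.2) - 1   # 0.2 MW of overhead
print(f"{overhead_bad / overhead_good:.1f}x the overhead for identical IT work")
```

The total-power ratio between the two facilities is only about 2X, but the ratio of pure waste is far larger, which is why PUE improvements pay off so quickly.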

High Cost of a high PUE

There are at least three major areas of cost and risk associated with running an unnecessarily high PUE:

  1. Wasted power: 1 MW of waste for a year at $0.08 per kWh equals roughly $700K per year

With a 2 MW facility at a PUE of 2.0 you’re wasting $560K per annum versus a 1.2 PUE data center with the same IT load (80% of $700K = $560K).

  2. Wasted power and cooling equipment: the gear to support 1 MW is roughly $3 million (assuming generator and UPS). Costs potentially increase if you have a higher "TIER" rating.

If you’re running at a PUE of 2.0 on a 1 MW IT load environment, you’re spending roughly $1.6 million extra in capital in a TIER II type design.

  3. Total Cost of Ownership (TCO): the extra power delivered and the extra power equipment come with overhead and replacement costs: on-going support of UPS systems, transformers, PDUs, and generators, combined with any special costs associated with substation construction, etc.


All of the above costs can easily add up to more than $1.2 million a year in wasted energy and maintenance bills. CapEx losses would depend on depreciation cycles but would average approximately $300K per year. Total wasted spend: $1.5 million per year for a data center with 1 MW of IT load. With each additional MW of IT load you'll lose another $1.5 million per year.
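
Pulling the running example together (the maintenance figure below is an assumption chosen to land in the ballpark described above; the energy math follows directly from the PUE gap):

```python
KWH_PER_MW_YEAR = 8_760_000   # 1 MW drawn continuously for a year
PRICE_PER_KWH = 0.08

wasted_mw = 1 * (2.0 - 1.2)                                # 0.8 MW of pure overhead
energy_cost = wasted_mw * KWH_PER_MW_YEAR * PRICE_PER_KWH  # ~$560K per year
capex_loss = 300_000    # depreciation on excess power/cooling gear (estimate above)
maintenance = 640_000   # assumed, to reach the ~$1.2M energy-and-maintenance figure
total = energy_cost + capex_loss + maintenance
print(f"${total / 1e6:.1f}M wasted per year per MW of IT load")
```

Swap in your own electricity rate and depreciation schedule; the structure of the calculation is what matters.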

Related Sustainability Blog: To meet accelerating DC growth server cabinet density must increase

Cost number 4 gets its own section

Besides the more obvious costs listed above, there is the financial, PR, and environmental loss associated with your business's carbon emissions. It's difficult to put a specific value on bad press, but generally speaking, unless you're in show business, bad press is bad. Assuming your data center gets its energy from coal-fired power plants, you would be generating roughly 19 million pounds of carbon dioxide for every megawatt-year. In the example above, where you have just 1 MW of IT load but are running at a PUE of 2.0 on coal, that equates to roughly 15 million pounds of unnecessary CO2 emissions every year.
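
Using EIA's approximate figure of 2.2 lb of CO2 per kWh for coal generation, the arithmetic looks like this:

```python
LB_CO2_PER_KWH_COAL = 2.2     # EIA's approximate factor for coal generation
KWH_PER_MW_YEAR = 8_760_000   # 1 MW drawn continuously for a year

lb_per_mw_year = LB_CO2_PER_KWH_COAL * KWH_PER_MW_YEAR  # ~19 million lb
excess_mw = 1 * (2.0 - 1.2)   # overhead at PUE 2.0 vs 1.2 on 1 MW of IT load
excess_lb = excess_mw * lb_per_mw_year
print(f"{lb_per_mw_year / 1e6:.0f}M lb per MW-year; {excess_lb / 1e6:.0f}M lb avoidable")
```

Those avoidable pounds come purely from the PUE gap; switching the energy source attacks the other, larger term in the product.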

What can you do?

The tasks associated with getting your PUE down to 1.2 from 2.0 or higher and moving to a new energy blend are difficult, potentially costly, and time consuming. Dozens of blogs have been written on the subject of PUE improvement and/or energy-saving tips. Your ability to succeed at changing either your PUE or your energy blend will depend on many variables, including: the size of your business; the country, state, county, or city you're in; the age of the DC; and the capability of the facility. Weigh your investment in improving an existing facility against the real-world possibility that it has no more than 2-3 years of useful life left once you account for business and/or technology change.

Related Blog:  Five Reasons Why Server Cabinet Power Density is Going Up!

There are no cost-free options for making the changes required to reduce your PUE, lower your costs, and improve your Corporate Sustainability Report (CSR). However, there are efficient, long-term options for giving your company an immediate and positive shot in the arm. The simple answer is: move your IT gear to a data center where they've already solved the problems of PUE and clean energy.

Related Blog: Consider Connectivity Compulsory (Another area of major savings opportunities at SUPERNAP)

There are other sustainable and low-PUE colocation facilities out there, but there aren't any in the world that can provide the capabilities and performance that Switch SUPERNAP can. With industry-leading PUE numbers and a continued push to 100% green energy, there are few things you could do for your company to reduce overhead and lower emissions with the same positive, immediate impact of moving into a SUPERNAP.

Supporting & or Related Blogs:

Data Center: Have you Fallen Behind Already?

Data Center Efficiency, We’re Done Here

How much carbon dioxide is produced when different fuels are burned

Think Strategically about Data Centers

Uptime Institute Survey of Industry wide PUE

US Energy Information Administration

Managing Your PUE


IT Is More Relevant Than Ever – At Least It Can Be

03 Aug

I just wrapped up a #CIOitk (In The Know) panel with Bob Egan (@bobegan), Stuart Appley (@sappley) and Tim Crawford (@tcrawford) on the topic of whether IT is moving towards irrelevance. We covered impacts, conditions, and potential strategies in our wide-ranging discussion, but the common agreement was that the role of IT in the business is more necessary than ever. The real issue is whether the IT org you're in has a future, not whether IT itself has a future. This opinion that IT remains as relevant as ever was supported by most of the folks following the conversation on Twitter. I highly suggest poking through the tweets of the cast and audience using the hashtag #cioitk, as there were some really bright folks contributing.

An organizational perspective

Where do we target change in the organization? The first point made by several of us on the panel was that the target for change isn't just the office of the CIO. So, if it's not all the CIO's fault, who else should we be targeting? Where does the buck stop? That's right: with the CEO and his/her C-suite partners. You hire for what you think needs to be done. If the CEO/CFO are hiring someone to keep the lights on or manage IT costs, then you're going to get a manager, not an innovator. How would you hire someone if the requirement was to take your money and make something big happen with it? How would you hire a CEO for a new startup? These are the types of positioning questions we need to be asking when we're scouring the land for our next CIO.

Related blog: (Misconceptions of the CIO role)

Cost & Risk Management vs Faster and Innovative

There's no doubt that cost and risk are areas of concern for a CIO, but truthfully they are for every senior leader. Would you put risk or cost management at the top of your criteria for hiring a new CMO? IT has the keys to making your business go faster and to applying innovation across all functional areas. IT is by its very nature innovative. One could argue that many of the most successful modern companies are nothing more than giant IT organizations that are constantly innovating new products and services (Facebook, ebay, Microsoft, Intuit, Google, Amazon, etc., etc.). Why would you take your most important weapon for differentiation (IT) and hobble it with a "make sure you don't break anything or cost too much" mantra? Every company today should be looking for speed, agility, and innovation; IT can and should be helping in all those areas.

Related blog: (It’s about Role Change Not Role Elimination)

Shake it up!

Tear apart every job description you've ever created or purchased for the role of CIO, and build one as if you're starting a company and want someone to lead it: a visionary who will constantly innovate its products and services in support of the rest of the company. If you really need someone to manage risk and cost, hire that person, but don't assume they can or should also be the CIO.

Shout out to @amyhermes for her help with graphics, content and outreach for this panel! 

Related blog: (Innovation vs. Cost Center)


Data Center Trends – 5 Reasons Why Server Cabinet Power Density is Going UP!

27 Jul

Server Cabinets in the SUPERNAP with 28kW delivered

Ready or not (and most aren't), power density in the rack is going up; not incrementally over ten years, but dramatically over three to five years. Can your internal data center(s) support that? Can your partners support it? My rough estimate tells me that if an average of 10kW per rack were required, fewer than 10% of data centers in operation today could handle it.

Why the Screaming?

There is a confluence of events occurring that is driving infrastructure design towards more density, and I don't see anything reversing that trend anytime soon.

Reason 1: Converged infrastructure (more gear in a smaller space)

  • Right now, converged infrastructure sales are going through the roof. Whether you’re buying from VCE, HP, Dell, NetApp, Nutanix or SimpliVity, the story is the same…more power Scotty!
  • UCS Chassis – 2.0 kW – 6 chassis per cabinet = 12kW
  • HP Matrix Chassis – 6.0 kW – 4 chassis per cabinet = 24kW
  • Neither of the above examples are configured at max potential power consumption.
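
The chassis math above is simple multiplication, but it's worth comparing against a typical legacy rack power budget (the 5 kW legacy figure is an assumption; many older facilities provision even less):

```python
def rack_kw(chassis_kw: float, chassis_per_cabinet: int) -> float:
    """Delivered power needed per cabinet for a converged-infrastructure build."""
    return chassis_kw * chassis_per_cabinet

ucs = rack_kw(2.0, 6)      # UCS example: 12 kW per cabinet
matrix = rack_kw(6.0, 4)   # HP Matrix example: 24 kW per cabinet
legacy_budget_kw = 5.0     # assumed typical legacy rack budget
print(f"{matrix / legacy_budget_kw:.1f}x a legacy rack budget")
```

And remember these configurations aren't even at maximum draw; fully populated, the gap to a legacy facility is wider still.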

Related blog: "Future Proofing"

Reason 2: Open Compute and or HPC Oriented Infrastructure designs

  • Modern infrastructure is taking on more of the characteristics of High Performance Compute designs. There are a number of reasons for this change towards HPC, but one of the main drivers is the increasing use of big data and reliance on big data as part of critical infrastructure/application environments. Simply put, there are real performance gains associated with putting more compute in smaller spaces right up next to your disk farms.

Reason 3: Cost & Space – Sustainability

  • I've always liked density because I'm an efficiency guy. There are some obvious benefits to having your gear packed into smaller spaces: better overall performance (efficiency) and less floor space, cabling, and racks (efficiency and cost). There's also the long-term sustainability angle (blog). There are estimates being made today that suggest cloud and IoT will drive our global IT power consumption footprint from 3% to as much as 20% of total power generated. If we continue to populate the world with low-density data centers, you'll soon be able to walk on top of them all the way across the country.

Reason 4: Containers

  • A key push behind the use of containers after ease of deployment, etc. is efficient use of infrastructure. A big part of this efficient use of infrastructure comes from a much higher average utilization of the CPU. This higher utilization will inevitably result in greater power use per rack unit.

Reason 5: Moore’s Law slow down

  • It might seem weird to think that a slowdown in Moore's law would equate to higher power density, but I believe it does, for at least two reasons. Disk and networking continue to advance at Moore's law rates or faster. This advance means: 1) more CPUs/servers will be needed per square foot to support bigger and faster disks and fatter, speedier networks; 2) we're less likely to see the same decreases in power draw on the CPU as a result of the reduced iterations of x86 chips.

Related blog “Not Your Daddy’s Data Center”:

What am I getting at?

I expect that over the next three to five years many data center owners will discover that their facilities won't support what their IT staffs are demanding; and even worse, many of the providers you speak to won't be able to support your needs either. Here at the SUPERNAP, we're seeing the demand for density increase across a large percentage of our customers. We're already doing thousands of racks at over 15kW delivered, and many that are much denser, with the trend accelerating.

Professionally Copy Edited by @Imkestinamarie (Kestine Thiele)


Cloud Native Computing Foundation – My Short Take

21 Jul

If you haven’t already seen the Cloud Native Computing Foundation (CNCF) press release from today, you should definitely take a quick look.  I see this as a big step in the right direction for open-source and cloud in general.

What’s different about the CNCF Movement

Application containers and cloud-designed applications are two of the hottest areas in technology today, and there's good reason. Application containers are likely to be to modern infrastructure what VMware is to legacy infrastructure, only in half the time. The move to develop standards and create a foundation around containers is actually already ahead of where we are with virtual machines. Foundations like CNCF and standards bodies bode well for containers and developers reaching in five years the point it took virtualization ten to hit. In fact, in some areas of the ecosystem I'm willing to bet that three years from now the management and utilization of application containers will be ahead of where virtualization was after ten years.

100 Containers for every CPU

The future looks bright for containers and cloud-designed applications, and frankly it's just in time. In my recent blog "Assume The Way You're Doing IT Today Won't Scale?" I talk about the drivers that require IT teams to reconsider their timing for living DevOps and designing infrastructure and applications with agility. The tools in the supporting and rapidly growing ecosystem (Kubernetes, Docker, CoreOS, Red Hat, Google, Mesosphere, Intel, SUPERNAP, Joyent and more) will help position IT teams to leverage application containers for critical production environments within a few short years.

Those enterprise groups that want to continue to own all or part of their infrastructure will need to manage against the demands of running infrastructure at much higher utilization, along with dramatically decreased deployment times. The vast majority of highly virtualized environments today run at below 40% CPU/system utilization. In application container environments we're likely to see utilization spike to more like 75 or 80%. Higher utilization likely means more demands on I/O, memory, and network. The ability to effectively manage more "calls" from each CPU to the infrastructure will be key to broad, enterprise-ready deployment of application containers.
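
A rough sketch of what that utilization jump means for fleet sizing (the workload units and per-server capacity are arbitrary; only the 40% vs ~78% utilization figures come from the discussion above):

```python
import math

def servers_needed(workload_units: float, units_per_server: float,
                   avg_utilization: float) -> int:
    """Servers required to absorb a sustained workload at a target utilization."""
    return math.ceil(workload_units / (units_per_server * avg_utilization))

work = 10_000  # arbitrary units of sustained compute demand
vm_fleet = servers_needed(work, 32, 0.40)         # virtualized at ~40% utilization
container_fleet = servers_needed(work, 32, 0.78)  # containerized at ~78% utilization
print(vm_fleet, container_fleet)  # far fewer servers, but each one busier and hotter
```

Roughly half the servers for the same work, with each remaining server drawing more power and generating more I/O, memory, and network traffic, which is exactly the density pressure described above.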

Containers aren’t going away

I feel it’s pretty safe to say that application containers are not a passing fad. There are still many hurdles before they become production in the way a complex enterprise needs them to be, but the right “Foundation” and standards are being put in place to help ensure success.  It will be interesting to see the progression of this old/new space and to see how existing highly invested cloud providers adapt to make the most of the new capabilities to leverage resources more effectively and efficiently than ever.


Data Center Trends – Have you fallen behind already?

07 Jul

In my recent blog (Assume the way you’re doing IT today won’t scale), I talked at length about the two macro drivers that I see having a significant impact on the IT infrastructure we build, buy, or lease. These macro drivers for infrastructure & applications will likely manifest themselves through one or more of the following changes:

Change: More of your business will be outwardly facing

Today the average enterprise has roughly 5-10% of its applications facing externally. The other 90-plus percent of applications provide service to internal employees and/or some number of contractors and suppliers. The usage characteristics of that 90% of applications are generally well understood and mostly fall within a demand variance of plus or minus 20% on a monthly basis from an elasticity point of view. Many of the applications support global partners and employees; additionally, the core customer group is fixed and is usually housed close to (under 50 ms of latency from) the applications.

For the sake of simplicity let’s call the application sets:

  • “the 90%”
  • “the 10%”


The aforementioned application breakdown information isn’t new, and many IT folks already understand it to be true. So why did I think it was important to call out? Keep reading.

I expect that almost every organization will see a dramatic change in the distribution of applications between the 90% and the 10% categories. Here’s why:

  • Virtually every company will be finding a way to leverage IoT. The use of IoT will generate new business-to-business opportunities at a scale that most of us can’t even begin to understand today.
  • Cloud-based applications will continue to drive more features and functions out to the customer. That’s right; you’ll have people using your applications from the myriad devices in their homes and environments that they interact with on a daily basis. More of your marketing, product development, and customer acquisition/relationship strategies will come directly from interaction with your customers. Many companies are already using tools to get closer to their customers, but expect this trend to accelerate.


The above changes to the 90/10 distribution will likely have a profound effect on how each of the applications is designed, provisioned, and supported. It’s possible that the total number of applications used by internal staff won’t change significantly. However, you should expect that the total number of applications in your portfolio will increase and that your distribution will start moving toward a 70/30 breakdown. This new trend means:

  • The use characteristics of 30% of your applications will be driven by the vagaries of a very large customer population, which means geo distribution and elasticity likely become more critical.
  • A larger portion of the 70% will likely need to take on the characteristics of cloud designs as the lines between internal employees, partners, suppliers, and customers blur even more than they already have.


Change: Increased use of IT for competitive advantage

The competitive advantage movement should not be underestimated. I fully expect that within 3 to 5 years, all but a few businesses and organizations will have come to realize that leveraging IT more effectively is their key to success, and for that matter, their very survival.

The increased use of IT for competitive advantage is the typical “can’t put the genie back in the bottle” scenario. As more companies learn to forego their legacy shackles and embrace big data, IoT, and cloud-based infrastructure to increase innovation, drive customer connections, and leverage speed, everyone will want and, in fact, need to be part of the change.

What does this all mean?

We need to start thinking of our IT organizations as a combination of a racing team and a massive logistics center. The racing team delivers speed, services, and innovation; the logistics center ensures you can identify, obtain, and put to use the best resources available, in real time, anywhere in the world. You can’t enable the type of IT team I described above by sprucing up the company data center and/or hiring a few more sysadmins and network engineers. You definitely can’t enable the right IT strategy by attempting to extend the legacy applications and strategies of the past.

Related blog: Why we Value IT Incorrectly – Innovation vs. Cost

Think services, supply chain, ease of acquisition, and customer impact potential. Buying IT solutions on cost alone, or because they fit with your legacy suppliers and technologies, just isn’t going to cut it. Consider your data center(s) as an example. The data center strategy and its associated partners will become a future-proofing resource that allows you to adopt or distribute your IT workloads anywhere, with any design, at any scale. Leveraging an ecosystem-type supplier (blog) that provides broad access to key solutions and services while enabling a future-proofed platform for growth is imperative.

Professionally copy edited by @imkestinamarie (Kestine Thiele)


The CIO Discussion – It’s about role change not role elimination

29 Jun

We’ve all heard “IT, the department of No,” or we’ve read Dilbert comics that joke about the problems with IT departments and specifically the CIO. There are also the ever-fashionable comments like, “the role of CIO will be moved to the head of Marketing or the CMO position.” We can all buy cloud or SaaS applications with a credit card, so why do we still need a CIO to get in the way of progress?

Are we really that short-sighted?

Why would it be short-sighted to get rid of the CIO? Let’s take a short trip back to 1997 and look at the average department to see what was going on relative to IT. What do you see (besides a bunch of frustrated users with terrible applications)? In 1997, we were still very early in the days of the internet as a commonly used resource for companies beyond having a “home” page. However, most departments saw some potential in having a website, and as such, they were paying external contractors $10K-plus to build them a page. It didn’t matter what the page would be used for; we just had to have one. There was also the rampant use of solutions like MS Access. There would be a key person in the marketing or sales department who would put MS Access on their PC and build a database. The web pages, Access DBs, and other one-off applications were proliferating like wildfire, and there was little if any governance on cost, security, data sharing, application integration, etc. The lack of governance meant that many of the above-mentioned solutions became one or more of the following: a support nightmare, a critical resource that now has to be connected to the ERP system, or a failure that just ended up being a waste of cash and time.

Fast forward 18 years to 2015. What’s different? The applications have gotten a little better, and most web work can be done very cheaply. The work is usually managed centrally for a company or large line of business; however, now we have SaaS applications that are being designed, managed, secured, and delivered from outside the corporate network. You have the ability to acquire cloud resources in minutes with a credit card, then install an application and/or store data. Now tell me, what’s the real difference between 1997 and 2015? There really isn’t a difference, except that the person with the credit card is much more dangerous to the organization than they were in 1997 because they need less IT understanding to get started. In some cases, we can call these acquisitions shadow IT; in others, they are misguided, if well-intentioned, efforts to move more quickly than the CIO’s budget or his/her attitude will allow.

Why the CIO position is as relevant as ever

Without governance, the level of risk that enterprises carry when everyone is buying their own IT solutions is very high. Each time an employee buys access to a SaaS application, they are committing the company, and the company’s secrets, to an outside vendor. Each time another person buys a SaaS application that does the same thing but from another vendor, you have a missed opportunity. You also have the risk of wasted licensing, little to no data integration, and one or more of these applications going from useful to critical without the necessary protections, planning, and support. The risk of buying IaaS and building applications outside the purview of the IT group is just as bad as with SaaS. What happens when corporate data is shipped offsite without appropriate governance? Who knows the data is outside the corporate network, and who will manage its lifecycle, sovereignty, security, and recovery? Who has the experience to validate the qualifications of a supplier? These risks and many more need to be mitigated. Are we to assume that the CMO will have the knowledge or interest? The only position really qualified to support the company as a whole in this new shoot-from-the-hip world of IT is a strong CIO.

However, a strong CIO may not be enough (related blog post: Why we Value IT Incorrectly )

Even with a strong CIO, there will be issues if there isn’t broad support from the C-suite. It’s also critical that the CIO be innovation-oriented and secure enough to accept that not all IT-oriented solutions need to come directly from his/her team.

There is no comprehensive management suite available today (that I know of) that would allow a CIO to easily absorb new solutions acquired by non-IT groups. Personally, I don’t know what the right type of tool would be. A great tool might be a beautiful control panel that lets you just drag and drop new applications and sort through the assorted data sharing, policy, and security issues. However, as an IT person who has historically shied away from the “one tool fixes all problems” approach, I have my concerns about the aforementioned. Maybe it’s more of a logical recording of what’s going on. I don’t know what an application that supports this “logical recording” strategy would look like; it’s just a thought.

In the end, it always boils down to “you can’t manage what you can’t measure.” I actually see the space of IT governance as a real disruption opportunity because the risks are real and will only get worse before they get better.

Strength isn’t something you tell people you have (related blog: Are you one of the smart ones?)

A strong CIO won’t be strong because they tell everyone, “I’m in control, don’t worry. Any and all decisions that even remotely look like an IT decision will be made by me.” Instead, a strong CIO is one who builds relationships, takes a leadership position through innovation, and creates an understanding (through action) that IT is part of the business, not a bolt-on. I’ve said it before and I’ll say it again: IT wasn’t created to reduce the cost of IT. Until you successfully change the mindset of your business from IT as a cost center to IT as a creative, business-linked innovation engine, you’re trying to take your canoe upriver without a paddle.

The CIO needs to set the table and serve the food

The CIO must work directly with all the other line-of-business owners to create a vision for how IT can integrate and innovate from within. Help them understand that without a strategic approach to IT solution adoption and governance, all you’re doing is herding cats through a field of toys (table). If you successfully frame the risks in the light of opportunity and demonstrate your ability to provide vision, not just risk and cost management, you might be pleasantly surprised by the response. Introduce the new IT as one that often provides the ideas, and when it’s not, supplies the grease that helps ensure non-IT-originated ideas are successful (food).


Professionally Copy Edited by @imkestinamarie (Kestine Thiele) 


BiModal IT – Crashed Right Out Of The Gate!

25 Jun


The idea of BiModal IT is that it would create an IT organization better able to support and introduce solutions in both the legacy environments and the more cloud- or agility-oriented part of IT. It’s an interesting, if poorly thought out, academic-versus-real-world attempt at a solution by Gartner. However, I’m saying (as are others) that BiModal IT failed before it ever got out of the gate.

Don’t ever do this to yourself!

If there’s one mistake you should never make as a leader, it’s forgetting the human part of the equation. Every change you make has an impact on humans, and how they respond to that change will determine the long-term success or failure of that change, not your quick-witted one-liners at the monthly coffee talk.

Where and how does the “human equation” fit into BiModal IT?

Picture yourself as a COBOL programmer sitting at the conference room table in 1999, and the discussion revolves around what language most applications will use going forward. At the end of the meeting, your boss looks at you and the other COBOL programmers and says, “You folks are really important, and we need you to stay focused on supporting all those old apps that we’re trying to get rid of.” How does that make you feel? Are you sensing that you’re an important part of the future of the organization and the company, or are you an afterthought? To make things worse, if you have half a brain you’re thinking to yourself, “What job will I get when the last COBOL application in my company dies? Won’t other companies be getting rid of their COBOL-based apps as well?”

It’s not easy, which is why it’s probably important

BiModal IT is the age-old, quick-fire executive answer of solving problems with layoffs and managing people like they’re chaff. This is the answer that short-sighted executives use because they can’t be bothered to think long term, to apply loyalty, or to consider the “human equation.” The right path, as they say, is often the path less traveled. There are myriad ways to manage your move to a modern, agile IT organization that don’t involve the wholesale slaughter of half or more of your existing team. Will there be struggles? Absolutely. Will there be some folks who won’t make it through the change? Definitely. Will getting to the other side by doing it the right way be worth it? Hell yes.

Additional support from some experts for the idea that Bimodal IT is wrong:

Jeff Sussna @jeffsussna

it’s wrong because it assumes “core” and “edge” don’t need to interact with one another

Tim Crawford @tcrawford

@S_dF @mthiele10 @efeatherston @gartner @jeffsussna (2) It limits the overall speed as both arms are needed today…and at speed. #ciochat

Tim Crawford @tcrawford

@S_dF @mthiele10 @efeatherston @gartner @jeffsussna (3) It creates a two-class culture within an org (IT) that needs more cohesion. #ciochat

Jeff Sussna @jeffsussna

that assumption ignores the entire path of 21st-century digital business

The discussion could go on and on

Suffice it to say, attempting to create a modern, agile, innovative culture by creating silos and marginalizing a big part of your team is not the path to success. Take the time to consider how you can foster a pilot that includes a cross-section of IT staff (hand-selected as potential change agents) who can be considered the genesis of a new “model” organization. Let this new group expand naturally to include more applications and cross-functional team members until everyone who can fit is included and those who can’t have been shepherded on to different, if not greener, pastures.