“It Might Get Damaged”

Over my years as a CIO I’ve heard lots of reasons not to deploy something.  One of the most perplexing to me is the “it might get damaged” argument (or its corollaries, “it might go down,” “it might get hacked,” and “it might get stolen”).  It is easy to give this kind of argument a flippant response.  You know, like “and a meteor might fall from the sky and strike us dead” (one of my personal favorites).  The reality, though, is that there is some interesting and nuanced conversation to be had about risk and risk tolerance when thinking about deploying IT solutions.

It Might Get Damaged

Any time you deploy a hardware solution to a member of your community, damage is a possibility.  It might be a small possibility, but it’s a possibility nonetheless.  A desktop computer could be sitting on the floor when the water line in the office breaks (happened to me).  A laptop computer could fly off the roof of the car when you pull away and forget it’s there (also happened to me).  Your servers could be in a rack in the basement when rain floods it (yup, this one too).  To some extent, these are known possibilities (OK, the flooded server room was a bit of a surprise) and are usually figured into your budget for replacements and repairs.  They are acceptable risks given the cost of the equipment.  I don’t know of anybody who doesn’t deploy a laptop because it might get damaged.

Sometimes, though, you have this brand new thing you’ve never rolled out.  It’s hard to know what might cause it to be damaged, and you aren’t sure how rugged it is.  This is worse than increased risk – this is unknown risk.  I try to remind my staff to look at the value of the equipment and the impact if it fails when deciding what to do.  If it’s a $1,000 piece of equipment and we have some kind of backup plan, then let’s not worry about it.  Deploy it and use the opportunity to gather data.  If it’s a $200K one-of-a-kind piece of equipment, then maybe we need to do some more research.

It Might Go Down

A CIO colleague of mine says about almost every software project, “we are going live, not perfect.”  If you wait until there is zero risk of unplanned down time on a system, you will never, ever, ever deploy it.  So the trick is to time your releases so that the risk of the system doing something bad matches your institution’s tolerance for risk.  If you’re not sure what your institution’s tolerance for risk is, that’s a challenge for another post.  Suffice it to say, you’ll need to figure it out.

It Might Get Hacked

Ultimately, this is a great deal like “it might go down.”  I know colleagues who spend millions of dollars every year on cybersecurity and still get hacked.  My entire IT budget isn’t that large, so if they can’t make their systems un-hackable, I know I can’t.  The important thing here is to understand the potential attack vectors, communicate them, and help the institution decide if the functionality is worth the risk outlined.

It Might Get Stolen

Many, many years ago I watched a colleague spend thousands of dollars and many person-hours to come up with a way to protect classroom projectors from being stolen.  I did the math, and I think they spent $2,500 per device to protect something that was only worth $1,800.  I guess if you have devices being stolen every week that might make sense (and probably warrants a conversation with your security folks), but if that spending is in response to one theft and a “never again” attitude, you might be in trouble.

It’s Often Not About Risk

At the end of all this, there is one other thing to remember.  Sometimes these conversations aren’t actually about risk or risk tolerance.  They’re about resistance to change and a desire not to get blamed for failure.  The former is a really hard nut to crack, and I tend to just treat it as if the conversation is really about risk.  If you can help your staff (or institution) understand that there has to be some level of acceptable risk, then sometimes the resistance to change will go away.  Trying to avoid blame requires some work as well.  I try very hard to make sure everyone knows when something is experimental (high risk with the expectation of failure at some level) and when it is production (lower risk and less tolerance for failure).  Setting those expectations up front helps people understand the point at which things go from “no worries, move on” to “this is a problem.”  The other thing I try to do is make sure everyone understands their roles and responsibilities.  I’ll never be upset with someone for something that isn’t their responsibility or is out of their control.

Helping your staff and colleagues work through these surface issues to find the right risk stance will make a huge difference as you look at IT deployments.