For too long, enterprises have tried to do “IT on the cheap”.
By that, I don’t mean that they don’t spend a lot of money on it. Nor do I think the amount you should spend is formulaic: “how much are others in my industry spending?” and “what’s the right percentage of revenue?” are two examples of completely irrelevant answers.
An enterprise with historically well-managed IT ends up costing less than you’d expect. An enterprise without that either starves real needs (which periodically rise up and create a crisis), or must “overspend” to rectify past choices.
It also doesn’t matter whether you choose to provision your IT in-house, through outsourcing partners, with the “somewhere in the cloud” as-a-service vendors, or any mix of these. It does matter whether you get the mix of packaged software (again, installed or delivered as-a-service) and custom-built elements to match your enterprise’s unique market propositions (and even government departments and not-for-profits have these).
But most of all, it matters that you’ve built in the things that manage your risks.
Business continuity, for instance, is not an optional extra. Nor is it a matter of “critical” systems (otherwise known as “those we’re willing to pay for”) versus “non-critical” systems: the modern enterprise’s business environment is deeply interwoven and requires its IT systems to be all there. The knowledge needed to run operations without them is long gone; the workplace has neither the space nor the people to “go back to the old ways”. And if what a system does is truly dispensable, then why are you spending money on it daily?
Security is also not an optional extra. Here we tend to go about it in a way that minimizes cost, and, like the “security theatre” we all see every time we visit the airport, which fails to deal with the real risks, that approach holds us hostage to fortune.
Like it or not, if it’s stored, it should be encrypted. If it’s moving on the network, it should be encrypted. If something is connected to our systems, it should be periodically reauthenticated and analysed to confirm that it is following a normal behaviour pattern.
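The “normal behaviour pattern” check, at its simplest, is a comparison of a connected device against its own history. A minimal sketch in Python (the function name, the request-rate metric and the three-sigma threshold are illustrative assumptions, not a prescribed method):

```python
from statistics import mean, stdev

def looks_anomalous(history, current, threshold=3.0):
    """Flag a reading that deviates more than `threshold` standard
    deviations from the device's own historical baseline."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu  # perfectly steady baseline: any change stands out
    return abs(current - mu) / sigma > threshold

# A device that normally makes ~100 requests/hour suddenly makes 900:
baseline = [98, 103, 101, 97, 105, 99, 102, 100]
print(looks_anomalous(baseline, 104))  # within normal variation
print(looks_anomalous(baseline, 900))  # flag for reauthentication / review
```

Real deployments would track many signals per device, not one, but the principle is the same: the baseline belongs to the device, and deviation from it is what triggers scrutiny.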
That costs processing power that has to be paid for. It’s instructive that when the US Government recently lost tens of gigabytes of secure data to a hacker working for a foreign government, they remained, to this day, unaware of what was stolen. That’s because, although they have the network traffic, the hacker encrypted it, and the encryption has yet to yield.
On the application side, the challenge is to cleanly separate what differentiates the enterprise from what it does that’s dead dog ordinary (despite our pretensions to “this is how we do it here”, most things are actually the same in any other enterprise of similar scope, size and complexity, or should be). Modifying packages, in other words, is the expensive way to solve the problem: changing work where necessary to use unmodified (but configured) packages, coupled with some custom code in components or separate applications (where it allows the package to be upgraded and maintained as cheaply as possible), is the right way to go.
There’s also no shortage of installations out there that think the technology they run on (the hardware platforms, the operating systems and the middleware) is a matter of “being with the right vendor” or “being in the mainstream”. What matters is to run as much as possible (and add to it yearly) on whatever platform-OS-middleware stack gives you the lowest cost to own and operate, regardless of which vendor’s ox gets gored by your changes. A cycle is a cycle; computing is computing; the rest is each of us managing our résumés to show we’re working with what we think others expect of us. (Part of own-and-operate costing, by the way, is paying attention to your local labour markets, and moving off a technology that’s fallen out of favour locally, no matter how good it is. This is where solution architecture needs to break the interdependencies in application stacks, as opposed to merely selecting packages.)
Then there’s information: how many of us have inconsistent and (often) multiple databases ostensibly carrying the same information, but not reconcilable? Do we live with this forever, or do we expend the time, energy and money to clean it up, one piece at a time — and keep it that way?
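Cleaning up “one piece at a time” starts with knowing exactly where two systems disagree. A hedged sketch in Python (the system names, record fields and `diff_records` helper are hypothetical, assuming each system can export records as dicts sharing a common key):

```python
def diff_records(system_a, system_b, key="customer_id"):
    """Compare two lists of record dicts keyed on `key`.

    Returns (mismatches, only_in_a, only_in_b): per-key field-level
    disagreements, plus keys present in only one system -- the raw
    material for a one-piece-at-a-time cleanup."""
    a = {r[key]: r for r in system_a}
    b = {r[key]: r for r in system_b}
    mismatches = {}
    for k in a.keys() & b.keys():          # keys both systems know about
        fields = (a[k].keys() | b[k].keys()) - {key}
        diffs = {f: (a[k].get(f), b[k].get(f))
                 for f in fields if a[k].get(f) != b[k].get(f)}
        if diffs:
            mismatches[k] = diffs
    return (mismatches,
            sorted(a.keys() - b.keys()),   # records only the first system has
            sorted(b.keys() - a.keys()))   # records only the second system has

# Hypothetical extracts from a billing system and a CRM:
billing = [{"customer_id": 1, "postcode": "M5V 2T6"}]
crm = [{"customer_id": 1, "postcode": "M5V2T6"},
       {"customer_id": 2, "postcode": "K1A 0A6"}]
mismatches, only_billing, only_crm = diff_records(billing, crm)
# mismatches -> {1: {"postcode": ("M5V 2T6", "M5V2T6")}}; only_crm -> [2]
```

The hard part, of course, is not the comparison but deciding which system is authoritative for each field, and then keeping it that way.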
There are also far too many data warehouse, business intelligence and similar investments that are filled with data no one really gets value from, and opportunities lost because a proposal to add yet more to an already heavy investment can’t make the cut. These are not “free from care”: they require continuing effort to track and assess the value received, and to be refocused as needed.
In an effort to “keep costs down”, how many organizations make people spend hours constantly deleting old emails (which may later be needed for legal depositions or regulatory retention), or fill (and then must delete from) shared drives with status documents instead of using modern technologies like wikis to manage the flow? All of this is pure wastefulness, but because it affects “everyone and no one” there’s no driver for change.
Unless that driver is you. From the CIO on down, IT professionals ought to know where the missing elements are, and where the waste lies. Who else is going to get your enterprise on the right track to effective IT at the lowest cost point, if not you?