It's worse than this makes it out to be. First, my credentials, because I don't have sources to cite: I've worked in IT for 12 years, almost the entire time in enterprise-level jobs. I've worked as a consultant, a manager, and an admin in various for-profit companies.
OK, that said, the state of IT is worse than Bigwig's article makes out, because he only considers outsourcing as it compares to internal IT departments. Internal IT departments are themselves very inefficient. For example, I once worked on a project which spent a year and millions of dollars to build a production environment that was ill-conceived to begin with. When it was finally working and doing what it was supposed to do (for more money and in more time than was actually necessary, but at least it worked), it was immediately taken out of production because a new VP decided to do things differently. This is more typical than not. It has been said that 90% of IT projects fail, and as far as I can tell, that is true.
So why do big IT projects fail? Generally, they are political, which means that cancelling them is also a political decision rather than a business decision. They tend to be thought of by the corporate sponsors in business terms without regard to technical considerations, and by the implementors in technical terms without regard to business considerations. There is a strong desire to shave short-term pennies by spending long-term dollars.
Another example: I once worked on a call and problem management system for technical support of software products. The support contracts were fiendishly complicated. Within a few seconds, the person on the phone had to be able to tell the caller whether he could or could not place a support call as that person, from that company, for that product, on that computer. This was in the mid-to-late '90s, so the enterprise-class machines we had then had somewhat less computing power than the laptop I'm typing this on. But the product we developed for internal use worked, worldwide, for many products, and fast enough. Because it worked so well, we were asked to expand it to the other software support lines within the company.

Well, one group had a problem management system from the era when buying a computer meant lifetime support for free; it ran on mainframes and cost some $40M every year to run. It was so heavily invested in, and had so many applications written against it, that it was deemed too important to get rid of. But their application couldn't do the entitlement piece of support (are you contractually entitled to get support?), so it was decided that their mainframe-based problem management system would be used with our UNIX-based client-server entitlement system (the problem management part of our product was left to wither and die). To do this, a client had to be developed, taking a year and a half, to talk to both systems. From a user-interface standpoint, the mainframe-based call management system was so bad (and the graphical client was basically a screen scraper for that app) that the support staff had to grow in order to field the same number of calls. The entire application also slowed down, because of the delay of going to two systems instead of one.
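For the curious, the entitlement decision described above boils down to a conjunction of checks against contract data. Here's a minimal sketch; all the names, fields, and the contract shape are my own illustration, not the actual system's schema:

```python
# Hypothetical sketch of an entitlement check: given a caller, company,
# product, and machine, decide whether a support call may be placed.
# Field names and the Contract shape are illustrative only.
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class Contract:
    company: str
    product: str
    machine_id: str                 # support was often tied to a specific box
    authorized_callers: frozenset   # named contacts allowed to place calls
    expires: date


def entitled(contracts, caller, company, product, machine_id, today):
    """True if any active contract covers this caller, product, and machine."""
    return any(
        c.company == company
        and c.product == product
        and c.machine_id == machine_id
        and caller in c.authorized_callers
        and today <= c.expires
        for c in contracts
    )
```

The hard part in real life wasn't this conjunction; it was that the contract data was fiendishly complicated and the answer had to come back in seconds on '90s hardware.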
With the amount of money it took to run that system for one year, we could have finished development of our system (it was being rewritten from scratch to work for the whole company and to eliminate accumulated cruft) and supported it for a decade with constant development. This is common, though: keep using the more expensive system in worse ways, because we already have it, rather than replace it with something better. Note: this was considered a great success for the IT group of which we had by then become a part.
To put this in non-IT terms, imagine you had a car which carried 2 people, got 8 miles to the gallon, and cost $40,000 per year in maintenance. Imagine further that you could replace it with a car costing $20,000 which carried 6 people comfortably and got 35 miles to the gallon. No-brainer, right? Not if you are in corporate IT, because maintenance comes from a different budget than acquisitions, and it is almost impossible to repurpose funds. (I feel for government department heads; I really do.)
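The arithmetic behind the analogy is stark. A back-of-the-envelope sketch, using the numbers above, an assumed five-year horizon, and (generously to the old car) ignoring fuel and the new car's upkeep:

```python
# Back-of-the-envelope total cost of ownership for the car analogy.
# The 5-year horizon is an assumption; fuel and new-car upkeep are ignored.
def total_cost(purchase_price, annual_maintenance, years):
    """Purchase price plus maintenance over the horizon."""
    return purchase_price + annual_maintenance * years

keep_old = total_cost(0, 40_000, 5)       # keep the two-seater: $200,000
replace = total_cost(20_000, 0, 5)        # buy the six-seater: $20,000
```

A tenfold saving, and a better car, yet the org chart sees a $20,000 acquisition hit against one budget and a $40,000/year status quo against another.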
So what works? Generally, the corporate-IT products I've seen succeed are those developed under a single management chain, usually by a small group of really good people using no formal development methodology, trying to solve real user problems as opposed to big-picture problems (like optimizing supply-chain management). These products can grow over time to encompass more problems for more groups. They tend to go astray when they get big enough and important enough to become politically useful to other management chains. The fight for control is fought over requirements, and it results in the biggest fish in the fight getting bigger, and the product getting more expensive and less useful over time.
It's not just me who sees it this way. Most of the IT people I've ever known (and I know a lot of them) see it similarly. (Two programmers at a company I know of were experts in a particular product. They were transferred to a group with a bigger-fish manager and put to work on something completely outside their expertise. The new manager didn't need their skills; he just wanted the head count to increase his importance.)
So why don't companies work this way in general? For two reasons. First, most people in charge of most companies have no clue how to tie IT projects to business needs. Second, even when they do, projects which are efficient, useful, and well-managed tend to become so important that power-seeking managers with pointy hair and no clue take them over to stroke their egos, then make bad decisions (if the takeover process was not itself fatal).
Not that I'm bitter, or anything. Time to go catch up on Dilbert.

Posted by Jeff at March 26, 2003 12:33 AM