
February 11, 2007

Why Software Tends to be Bad

Designing and building good software is difficult. If you doubt this, answer the following:

  • Do you have/have you used more software with bad interfaces (confusing, hidden features, too much exposed functionality, weird tab orders) or good interfaces (clean, consistent, exposing what is important and hiding power-user options)?
  • Do you have/have you used more software with or without noticeable bugs or design flaws? How about with or without occasional major bugs or design flaws that cause lost data, including application crashes?
  • How often do you just give up on using software because you can't figure it out, or it's too cumbersome, or it behaves badly?
  • How often do you need technical support to get beyond the most rudimentary features of your software?

The problem is that software is terribly prone to failure. Engineers express the degree to which a mechanical system is prone to failure in terms of the number of moving parts and the total number of parts, moving or otherwise. A moving part (think of the wheel bearings in your car) wears down over time, and thus can fail. Properly designed, a non-moving part will not fail under its design loads, but the more parts there are, the more chance that there will be some flaw in design, manufacture or assembly. The software analogue of a part, in modern languages, is the semicolon: every programmatic statement in languages like C/C++, Java, and Perl terminates with one. The software analogue of a moving part is code that can change its behavior as circumstances (such as data) change, or code exercised only under uncommon circumstances (and thus possibly never reached by test cases). The number of parts, and particularly of "moving parts", in software is far, far larger than in any mechanical system, and thus software is inherently more prone to failure than mechanical systems are.
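
To make the idea of a software "moving part" concrete, here is a hedged sketch (the class and method names are hypothetical, invented purely for illustration): a branch whose behavior depends on the data, and which ordinary test data may never reach.

    // Hypothetical example: a rarely-taken branch is a "moving part".
    // If the uncommon case is mishandled, everyday use will never reveal it.
    public class ShippingCalculator {

        // Splits the shipping cost evenly across the items in an order.
        public static double perItemCost(double totalCost, int itemCount) {
            if (itemCount == 0) {
                // Uncommon circumstance: an empty order. If this branch were
                // wrong (say, returning totalCost instead of 0), typical test
                // cases would never exercise it, and it would fail only when
                // the odd empty order finally showed up in production.
                return 0.0;
            }
            return totalCost / itemCount;
        }
    }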

This complexity can be mitigated in many ways. These include creation and reuse of standard components for standard tasks or entities (and the consequent multiple cycles of refinement), use of standardized ways of doing common tasks, encapsulation and abstraction, proper requirements gathering and test case development and execution, rigorous unit testing that reaches every branch, automated code analysis, and good logical design practice. Three very powerful tools have arisen from various combinations of these mitigation techniques: best practices for application design, standardized design patterns, and object-oriented coding practices.
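
As one small illustration of "rigorous unit testing that reaches every branch", here is a minimal sketch using JUnit 4 against the hypothetical ShippingCalculator above: one test per branch, including the uncommon empty-order path that everyday use would never exercise.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Branch-covering tests for the hypothetical ShippingCalculator:
    // the common path and the uncommon empty-order path each get a case.
    public class ShippingCalculatorTest {

        @Test
        public void commonCase_splitsCostAcrossItems() {
            assertEquals(2.5, ShippingCalculator.perItemCost(10.0, 4), 0.0001);
        }

        @Test
        public void uncommonCase_emptyOrderCostsNothing() {
            assertEquals(0.0, ShippingCalculator.perItemCost(10.0, 0), 0.0001);
        }
    }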

Sadly, in the real world, these practices are more honored in the breach than in the observance. In software developed by companies, this arises in part from the fact that most computer people are either magicians, who don't understand these tools, or bureaucrats, who don't want to pay for using them. Proper coding is expensive, and it's often difficult to convince people that it's easier and cheaper to design and code correctly once than to redesign and recode several times.

In academic and open source software, which attracts vastly more artists, the underlying code is often wonderful, while the interfaces and reports are miserable and the software is incredibly difficult to use. I have actually heard university-based programmers say that if you can't understand their software's interface, the problem is yours, rather than the interface's. If the measure of software's utility is how widely it's used within its problem domain, academic software tends to be the least useful code written.

Of particular concern to businesses, heavyweight development methodologies are expensive, because they assume that people will make mistakes, and mitigate this tendency by making people do sufficient verification work before coding to (theoretically) ensure that mistakes are caught. Agile development is much cheaper, but only works if your people are in the top few per cent of the industry (which makes them more expensive to employ, of course), your development is done in-house, and you have good or at least well-understood business processes already in place.

Building software to do more than you need today is more expensive than building software for only your current needs, while building services and libraries saves you money in the long run because you only write them once. The practical upshot is that most managers want to use agile development methods even though their staff is incapable of doing so or their business processes are immature; and most managers want to ignore reusable code because their budget and schedule are based on this project, not the next one.

But academically-developed software, and much open source software, is built by programmers for programmers, with very little attention to usability. In some cases, such as code libraries or faceless servers, this works very well. In others, such as finished desktop applications, it often works very, very badly. Software that is perfect but unusable is no better (except as a source of code to strip-mine) than software that is imperfect but usable. In many ways, perfect base code with a lousy interface is worse than bad base code with a usable interface.

It is possible to build good software. But it's not common.

Posted by jeff at February 11, 2007 6:08 PM

Comments

First question: why are so many user interfaces so bad? Answer: basically, hubris. Designers lack observational skills or believe that what they can come up with is better than what users are accustomed to doing. They may well be right but, ultimately, the usability of software is scored by the user and not the designer.

Second question: why does software have so many bugs? Lots of reasons, but one of them is that the dominant software company (Microsoft) has grown from a basement to an industry giant at least in part by exploiting the 90-90 rule (the first 90 per cent of the code takes the first 90 per cent of the schedule; the remaining 10 per cent takes the other 90 per cent). They've successfully avoided the costly 10% by never implementing it. Every released Microsoft product is actually a beta (without beta support). That ripples through the software industry. Nowadays people have low expectations.

Posted by: Dave Schuler at February 15, 2007 9:57 AM

Does hubris play a part in bad interface design? Sure. But how do you account for corporate software where the interface is, to all intents and purposes, designed by the users, with the programming team just implementing it? More important than hubris, I think, is that very few people hire artists or GUI experts to design screens: they usually use either the users themselves or generic programmers to do that. Indeed, one of the Mac's strengths for many years (less so now) was the graphic design expertise Apple put into creating its GUI.

As to MS and the 90-90 rule, expectations certainly come into play here. But again, that applies more to commercial software produced by a company for users outside that company than to open source or to internally-developed corporate software, and yet both of those categories are as bug-prone as any other software.

I think part of it is that our programming languages are not terribly well suited either to GUI design (with the exception of Apple's Interface Builder and freeze-dried objects) or to expressing business rules and modeling behavior. Object-oriented languages do a great job with model objects, but a terrible job of extending that paradigm to the database (Apple's recent forays into integrating entity-relationship modeling and code generation into their development environment are an exception, though more work is needed there, too) or of representing complex business logic. At the design level, object-oriented languages are only good at nouns; they do a terrible job of representing verbs. What is needed is a constraint-, workflow- and rule-oriented language to write the controllers.
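
(For what it's worth, one rough approximation of that idea within an ordinary object-oriented language is to hold each business rule as data, a description plus a condition, rather than scattering the rules through controller methods. The sketch below is purely illustrative: the Order class and the rules themselves are invented, not taken from any real system.)

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    // Illustrative sketch only: business rules held as data rather than
    // buried in controller code, so they can be listed, audited, and changed
    // in one place. The Order class and the rules are hypothetical.
    public class OrderRules {

        static class Order {
            final double total;
            final int itemCount;
            Order(double total, int itemCount) {
                this.total = total;
                this.itemCount = itemCount;
            }
        }

        static class Rule {
            final String description;
            final Predicate<Order> violatedWhen;
            Rule(String description, Predicate<Order> violatedWhen) {
                this.description = description;
                this.violatedWhen = violatedWhen;
            }
        }

        static final List<Rule> RULES = List.of(
            new Rule("an order must contain at least one item", o -> o.itemCount < 1),
            new Rule("an order total must not be negative", o -> o.total < 0)
        );

        // Returns the description of every rule the order violates.
        static List<String> violations(Order order) {
            List<String> failed = new ArrayList<>();
            for (Rule rule : RULES) {
                if (rule.violatedWhen.test(order)) {
                    failed.add(rule.description);
                }
            }
            return failed;
        }

        public static void main(String[] args) {
            // An empty order with a negative total violates both rules.
            System.out.println(violations(new Order(-5.0, 0)));
        }
    }

Something like this is still a workaround inside an object-oriented language, of course, rather than the constraint- and workflow-oriented language described above.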

Anyway, the point is, there are a huge number of advances that have to come to make software design and implementation easier, if software is ever going to become generally reliable and easy to use, and I think those advances are less in human nature than in our tools (theoretical and practical).

Posted by: Jeff Medcalf at February 15, 2007 6:34 PM

Part of the problem in user interface design is the CUA/SAA model (IBM's Common User Access, defined under Systems Application Architecture) that we've been saddled with for the last 25 years or more. That CUA/SAA may be intuitive (in the sense of learned) without being intuitive (in the sense of suited to the task) doesn't seem to occur to anyone.

My own experience with development tools, generally, is that they make competent programmers more productive and don't do much for the non-technical whatever the advance publicity may be. Unfortunately, the number of non-programmers doing software development is large and rising. Training in a tool doth not a programmer make.

I once heard a good characterization of programming languages: a good programming language should make common tasks trivial and difficult tasks possible. The trend over the last 20 years or so IMO has been in the opposite direction.

My first exposure to object-oriented languages was 35 years ago with Simula. When OOLs raised their heads again about 20 years ago, my reaction was that they were, basically, the opposite of what was really needed, since they placed a premium on what was already scarce and, because it was possible to conceal so much, actually made what most programmers spend most of their time doing more difficult. I see little reason to change that observation.

Part of the problem, too, is that, at least in commercial software, there's a sort of reverse Darwinism: the really fit never survive. 20 years ago there was a REALLY object-oriented language called Actor. Ada has never really gotten outside of government work. C++ is what its name says: a retro-fit. I've never used C# because I've considered learning it a waste of my time. Java is cool but misused: too many COBOL programmers are coding in Java (or C++), and I think that's a recipe for disaster.

Posted by: Dave Schuler at February 16, 2007 1:13 PM