When testing becomes obsolete

Whenever a client asks me how long it would take to write a piece of software, the answer is invariably met with something along the lines of “I thought it would be much faster than that to create it.” I then feel compelled to explain that testing takes a lot of time.

This is not a new trend, but skipping the tests has become more and more frequent: too long to run, too expensive… While I can understand that people want things now, it always strikes me as odd that they don’t also want things perfect. The key is to find the balance, and that balance has been gradually shifting towards the now (and the cheap) rather than the good.

Last month, Apple decided to postpone the release of Leopard because it was not ready. Quite frankly, I felt relieved. But that has become the exception, not the rule… When I was in Spain last week, we had to test a critical piece of software for the first time on the show floor! Needless to say, it was hell. We ended up going back to the old, and above all tested, methods. I cannot begin to imagine how much money was wasted, just because someone in the chain didn’t want to invest a little more upfront.

Maybe it’s just me… I feel ashamed when someone demos one of my products and it crashes or isn’t polished enough, especially when it’s been paid for. On the Mac platform, expectations are very high, which means that most software either works or is quickly forgotten. That forces us to make everything as perfect as possible before releasing anything. But we can’t test everything in a short period of time (do you hear me, Mr. cmd-shift-alt-k + F3?). Does this mean end users have to do the beta testing?

The funny (?) part is that this behavior only applies to computers. When you buy a car, a piece of furniture, or some food, you expect perfection. Just imagine your brand new car coming out of the plant without headlights or a steering wheel, and the salesman handing over the keys, saying: “Don’t worry, once we finish hammering out our production problems, you’ll get your car upgraded… (dramatic pause) for free!”

Why is everyone ready to overlook major usability flaws in a computer program, but not in any other appliance? Because we know computers are glitch-prone anyway? Because a program isn’t that expensive, compared to a house or a car?

Personally, I think the reasons are economic. We developers compete against each other quite frequently, and one way to win the customer is to be cheaper. If it’s cheaper, something has to go, and that something is usually thorough testing. But why does the client accept this? Maybe because he willingly chose the cheap over the good, and, whether on his own or after having it explained by the developer, he understands that you can’t have everything for nothing.

But why does it continue? After all, if you buy something that doesn’t work, you don’t make the same mistake twice, right? Except that once the client has paid cheap for something “usable”, it seems “natural” to keep paying cheap. You get used to mediocrity. “You call that hard to use? You should’ve seen version 0.1… THAT was clearly buggy.” True, but you still paid for it.
