I must admit that I rarely use the Standish Chaos figures on what percentage of software projects fail with respect to their original estimates of cost, time and functionality. This may seem odd. Surely they support the business case for using consultants who specialize in improving software development? Well, yes, but they never quite correlate with my experience and, perhaps as a result, the latest percentage never quite lodges in my memory.

My own disquiet has always been based on the feeling that the statistics derived from any such survey may be more correlated with the type of questions asked than with the reality on the ground. For example, when a project finally reaches the point where the customers admit that they really didn't know what they wanted when the project started, is that a project failure? It probably counts as a "yes", but I would argue that it was a controllable fault of the software developers or their process. Maybe that's where the Standish publishers are going with it.

Anyway, I was pleased to see my concerns validated in an article by J. Laurenz Eveleens and Chris Verhoef in the January/February 2010 issue of IEEE Software. With significant statistical analysis using reputable sources, the authors identify the following four problems with the Standish Group figures in their Chaos reports:
- "They're misleading because they're solely based on estimation accuracy of cost time and functionality."
- The Standish estimation accuracy definitions are not sound.
- If the Standish definitions are used to drive projects, they may cause large cost and time overestimations.
- The Standish figures cannot be extrapolated across organizations, because large forecasting biases exist in any given organization, which make extrapolation, in the authors' words, "meaningless."