Enhancing the Efficiency and Effectiveness of Application Development

Along with some of our regular readers, I was delighted and slightly surprised to see that an article with the above title (see reference information and link at the bottom of this post) made it to #1 on McKinsey’s list of the most popular articles among readers of its website in the third quarter of 2013.

Of course, there is nothing new in the article (at least we don’t think so at DCG), but the authors draw a bold conclusion from their research into the best productivity metric to use for application development:

“Although all output-based metrics have their pros and cons and can be challenging to implement, we believe the best solution to this problem is to combine use cases (UCs)—a method for gathering requirements for application-development projects—with use-case points (UCPs), an output metric that captures the amount of software functionality delivered.”

Now, here at DCG, we are interested in helping our clients improve their metrics (or even begin their own metrics practice), so we take a fairly broad, pragmatic view of where our clients are today on the maturity curve and where they should realistically aim to be in the near future. Hence,

  • We greatly prefer any well-defined software size metric over no software size metric.
  • We greatly prefer UCPs over lines of code (LOCs). 
  • We marginally prefer UCPs over size metrics that have been designed internally by one organization. 
  • We prefer function points over UCPs.

The main challenge with UCPs is that there is no standard way to count them, so the results depend heavily on how each organization implements counting and on the discipline it applies to counting consistently. Consistency of sizing is the foundation of any software metrics program: if you use a sizing metric that does not come with a history and a methodology that reinforce consistency, it can and will undermine your results and your decision making. This becomes a real problem when the organization starts using its productivity metrics to compare different parts of the organization against each other, and most organizations also want to benchmark themselves against other organizations. None of this can be done reliably with UCPs.
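To make the consistency issue concrete, here is a minimal sketch (ours, not McKinsey's) of the widely used Karner formula for UCPs, using the commonly published default weights. The classification of each actor and use case as simple, average, or complex is a judgment call the counter has to make, and two counters who draw those lines differently will size the same system quite differently.

```python
# Minimal sketch of the Karner use-case-point (UCP) calculation.
# Weights are the commonly published defaults; the simple/average/complex
# classifications below are hypothetical and illustrate counter judgment.

ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tcf=1.0, ef=1.0):
    """actors / use_cases: lists of 'simple' | 'average' | 'complex'.
    tcf: technical complexity factor, ef: environmental factor."""
    uaw = sum(ACTOR_WEIGHTS[a] for a in actors)       # unadjusted actor weight
    uucw = sum(USE_CASE_WEIGHTS[u] for u in use_cases) # unadjusted use-case weight
    return (uaw + uucw) * tcf * ef

# The same system, counted by two teams with different classification habits:
team_a = use_case_points(["simple", "average"], ["average", "average", "complex"])
team_b = use_case_points(["average", "complex"], ["complex", "complex", "complex"])
print(team_a, team_b)  # 38.0 vs. 50.0 -- roughly a 30% swing from judgment alone
```

A 30% swing in "size" translates directly into a 30% swing in any productivity or cost-per-unit figure built on top of it, which is why comparisons across teams or against external benchmarks are hard to defend without a counting standard.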

[Chart: McKinsey sizing comparison]

To give the authors their due, the well-researched chart that they include in a sidebar in their article (reproduced above) highlights this problem in the "credibility" line. In producing their recommendation, they seem not to have given this line of their analysis as much weight as the others. Indeed, I would argue that the "minimal overhead in calculating" assessment is based on out-of-date information as far as function points are concerned, but, in any case, the effort saved is surely wasted if the results are not credible.

Source: McKinsey & Company, “Enhancing the Efficiency and Effectiveness of Application Development,” Michael Huskins, James Kaplan, Krish Krishnakanthan, August 2013.

Written by Michael D. Harris at 05:00

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner

Subscribe to Our Newsletter
Join over 30,000 other subscribers. Subscribe to our newsletter today!