Metrics to measure Application Maintenance and Management

In response to some questions about how software development organizations really measure and manage their application development, the David Consulting Group conducted a survey in October 2010 to get some input.  The responses have been analyzed in the AM/AD Survey Report.  Frankly, we found that more organizations rely on what I would describe as commercial approaches to management than on scientific approaches based on measurement.  A couple of examples:

  • The most frequently selected response suggests that organizations fix Application Maintenance resource levels by deciding the budget and then working out how much they can do with that budget.  The implication is that this budget decision is only peripherally influenced by the size of the problem at hand.
  • The most frequently selected priority for Application Development projects was time to market, and the most preferred way to manage productivity was through aggressive schedules.  Any good software estimation tool with Monte Carlo analysis (e.g. SEER for Software from Galorath) will reveal that reducing schedule time below an optimum point has the opposite effect: it decreases productivity.  It also disproportionately increases budget and risk!
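To see why schedule compression backfires, here is a minimal Monte Carlo sketch in the spirit of the tools mentioned above. It is purely illustrative, not SEER's proprietary model: it assumes a Putnam-style size/schedule tradeoff (effort grows as the schedule shrinks, here effort ~ k * size**1.2 / months**2) and a lognormal distribution for the uncertain project size; the constants are hypothetical.

```python
import random
import statistics

def simulate(n_trials=10_000, compression=0.8, seed=1):
    """Monte Carlo sketch of the schedule-compression effect.

    Illustrative only (not SEER's actual model): assumes a Putnam-style
    size/schedule tradeoff and a lognormal size distribution.
    """
    random.seed(seed)
    k, nominal_months = 0.02, 12.0              # hypothetical calibration
    nominal, squeezed = [], []
    for _ in range(n_trials):
        size_fp = random.lognormvariate(6.2, 0.3)   # roughly 500 FP, uncertain

        def effort(months):
            # squeezing the schedule inflates effort (and therefore budget)
            return k * size_fp ** 1.2 / months ** 2

        # productivity = delivered size per unit of effort
        nominal.append(size_fp / effort(nominal_months))
        squeezed.append(size_fp / effort(nominal_months * compression))
    return statistics.mean(nominal), statistics.mean(squeezed)

nominal_prod, squeezed_prod = simulate()
# squeezing the schedule below the optimum lowers productivity in every trial
```

Under these assumptions, a 20% schedule compression cuts productivity to 64% of its nominal value in every trial, while the Monte Carlo spread over the uncertain size shows how much budget risk the compression adds.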

There is clearly a lot of room for improvement out there.

Written by Michael D. Harris at 11:23

Automated Code Reviews

During a review of CAST Software's latest release of its Application Intelligence Platform (AIP), they revealed the capability to automatically create fix lists for code based on an automated code review.  Line items on the fix lists do not get removed until the source code shows that the defect is no longer present.

So what?  Implementing CAST AIP in the development process facilitates an automated code review every time the work-in-progress code is checked in.  There is plenty of evidence, including papers from DCG, that code review is a software development best practice that can be readily cost-justified.  However, many software development groups still do not do it, or they do not do it properly.  Why not?  Two main reasons: either management do not believe the evidence and see the cost only as an overhead that can be easily removed, or the developers are under such schedule pressure that they scrimp on the code reviews (and salve their consciences by assuring themselves that their colleagues are good buddies who probably don't make mistakes).

Where does CAST AIP come in?  One of CAST's strengths is the analysis of source code to identify bad coding practice.  It has been possible for some time to implement CAST to do, say, a code review of an application every night.  One of the resulting reports allows a team lead or architect to review the code defects and move them to a fix list (actually called an "Action List" in CAST) for the developers.  This has been a powerful tool for some clients, but it has also generated the feedback that the team lead or architect has too much to do besides worrying about this manual intervention (sound familiar?).  Clients have said that some defects just obviously need to be fixed; can't they be automatically added to the Action List?  Well, yes, with AIP Version 7.0, they can.  Given the nightly reporting, those automated actions will keep appearing on the fix list until the defects are removed from the code.
So we have automated code review with guaranteed implementation follow-up.  This solves one of the key weaknesses of some of the best development team leads and architects that I know!
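The self-maintaining behavior described above can be sketched in a few lines. This is my own illustration of the idea, not CAST's actual API or data model: items stay on the list only while the latest scan still reports the defect, and newly detected defects are added automatically.

```python
def sync_action_list(action_list, scan_results):
    """Hypothetical sketch of an auto-maintained fix list
    (an illustration of the idea, not CAST's actual API).
    """
    detected = {(d["file"], d["rule"]) for d in scan_results}
    # keep only items whose defect is still present in the code
    still_open = [a for a in action_list if (a["file"], a["rule"]) in detected]
    tracked = {(a["file"], a["rule"]) for a in still_open}
    # auto-add anything the scan found that is not already tracked
    new_items = [{"file": f, "rule": r} for f, r in sorted(detected - tracked)]
    return still_open + new_items

# night 1: two defects detected, so the list is built automatically
night1 = sync_action_list([], [
    {"file": "billing.java", "rule": "sql-injection"},
    {"file": "ui.java", "rule": "empty-catch-block"},
])
# night 2: the SQL injection was fixed, so only the other item survives
night2 = sync_action_list(night1, [
    {"file": "ui.java", "rule": "empty-catch-block"},
])
```

The key design point is that the fix list is derived from the code itself on every run, so nobody can close an item without actually removing the defect.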

Written by Michael D. Harris at 17:09

Watts Humphrey

I just heard the sad news that Watts Humphrey has died.  It is rare in life to be able to say that you wouldn't, maybe even couldn't, be doing what you are doing today if one person had not done their thing years earlier.  That is certainly true for me, and many of us at DCG, where Watts is concerned.  May he rest in peace.

Written by Michael D. Harris at 10:42

SEI Report: Correlation between Metrics use and CMMI Appraisals

The SEI have just published a new report by Goldenson and McCurley.  The report shows an interesting correlation between what high maturity organizations themselves report about their use of measurement and what appraisal results say about those organizations.

Written by Michael D. Harris at 18:20

Estimate Early; Estimate Faster with SEER Estimate by Comparison

The SEER Estimate by Comparison capability is something that I talk about a lot to companies who are really struggling because their SWAG approach to estimating projects is neither repeatable nor coherent.  Often, the projects they are failing miserably to estimate professionally are mission-critical, either for deliverables to customers or for internal investment (budget) planning.  At some point, these estimating amateurs may need to base their estimates on a repeatable sizing approach (of which Function Points is just one option), but they need to change their ways quickly, so I often talk to them about doing estimates by comparison using SEER.

Even if you don't see yourself buying a tool right now, I recommend investigating this approach so you can attend in real time or review later.  The technique is powerful even if the tool is not right for you.  DCG can also provide the tool as a service, so you can pilot the application or just use it a few times a year for these really important estimates.  For more information on this service, you can contact David Herron.

The Estimate by Comparison application has traditionally been used to empower users to develop an understanding of software size, the single most significant driver of development cost, effort, and schedule for software projects.  However, SEER Estimate by Comparison has evolved and can be used with all of the SEER solutions to provide insight into effective definition of scope through a series of project analogies and/or comparisons to a user's repository of past projects, thus helping users to develop a reliable estimate of a project's scope even when information is scarce.  SEER Estimate by Comparison adds further capability when used in a more contemporary manner: a wide range of subjective, qualitative alternatives can be evaluated in the context of the project as a whole in a robust, repeatable and ultimately measurable manner.
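The core idea of estimation by comparison can be sketched very simply. This is a minimal illustration of analogy-based estimating in general, not SEER's algorithm, and the attribute names and repository data are hypothetical: rank past projects by similarity to the new one and average the recorded effort of the closest analogies.

```python
def estimate_by_comparison(new_project, history, k=2):
    """Illustrative analogy-based estimate (a sketch, not SEER's algorithm).

    Ranks past projects by similarity on a few assumed attributes and
    averages the recorded effort of the k closest analogies.
    """
    def distance(past):
        return sum(abs(new_project[a] - past[a])
                   for a in ("size_fp", "team", "complexity"))
    closest = sorted(history, key=distance)[:k]
    return sum(p["effort_pm"] for p in closest) / len(closest)

# hypothetical repository of completed projects (effort in person-months)
history = [
    {"size_fp": 100, "team": 5, "complexity": 2, "effort_pm": 10},
    {"size_fp": 120, "team": 5, "complexity": 2, "effort_pm": 12},
    {"size_fp": 500, "team": 20, "complexity": 4, "effort_pm": 80},
]
estimate = estimate_by_comparison(
    {"size_fp": 110, "team": 5, "complexity": 2}, history)
# the two small analogies dominate, giving an estimate of 11 person-months
```

Even this toy version shows why the approach is repeatable where a SWAG is not: two estimators with the same repository and the same project attributes will get the same answer.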

Written by Michael D. Harris at 13:15

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
