The Software Development Productivity Benchmarking Guide

DCG Software Value

If you missed the news, DCG Software Value, LEDAmc, and TI Métricas recently announced the release of “The Software Development Productivity Benchmarking Guide.” 

It contains actionable benchmarking guidance that enables organizations to track their progress, both internally and against industry standards, in order to create high-quality software and improve resource and budget management.

The guide is available to all international software metrics organizations, including IFPUG, ISBSG, NESMA, and COSMIC, as well as to any independent company that is interested in implementing or improving its benchmarking practice.

Don't miss out on this resource; it's free and available for download here.

Written by Default at 05:00

Successful Software Deployment Strategies

On October 25, DCG Software Value will co-host the webinar “Successful Deployment Strategies – From Software Sizing to Productivity Measurement” with CAST. Presenters Philippe Guerin of CAST and Mike Harris of DCG Software Value will examine how effective quality benchmarking and productivity measurement translate into successful transformation initiatives that cost less and de-risk your IT organization.

Philippe Guerin & Mike Harris

As a senior consultant with a broad expertise in all phases of the SDLC, Philippe Guerin has significant experience leading both technology and organizational transformation initiatives in complex global environments, especially around productivity measurement and improvement programs.

Mike Harris, CEO of DCG, has more than 30 years of broad management experience in the IT field, including periods in R&D, development, production, business, and academia. He is an internationally recognized author and speaker on a range of topics related to the Value Visualization of IT and is considered a thought leader in the international software development industry.

Successful Deployment Strategies

Attendees will walk away from this webinar with broadened knowledge around successful deployment processes, including how portfolio visibility can help manage risk, complexity, and architectural quality.

Learn how to introduce proactive measurements to assess structural quality, risk, and vendor/ADM (application development and maintenance) team output before a transformation, monitor key performance indicators during it, and continue to optimize applications afterward by establishing performance improvement and cost reduction goals.

Philippe and Mike will also address how to:

  • Monitor, track, and compare ADM teams’ utilization, delivery efficiency, throughput, and quality of outputs
  • Detect portfolio outliers, compare against competitors, identify improvement opportunities, and track the evolution of size, risk, complexity, and quality
  • Increase management's visibility of risk, quality, and throughput through enhanced Service Level Agreements 

Register Now 

IT leaders across all industries are invited to attend this 30-minute webinar exploring best practices in software sizing and measurement. Register now to join Philippe and Mike on October 25 at 11:00am EST.


How Would I Know How Badly We Are Losing Out Through Sub-Optimal Software Development?

Scope of this Report

Every company wants to maximize its profits while meeting its customers' expectations. The primary purpose of software delivery is to provide a product to the customer that will validate a business idea and ultimately provide value to the end-user. There must be feedback between the customer and the business, and this iterative process must be performed quickly, cheaply, and reliably.1 The real question is: how does an organization know whether its software delivery is performing at an optimal level?

This report considers the following topics:

  • What is sub-optimal software development?
  • How would you know if your performance is sub-optimal?
  • How do we measure for optimal development?

What is Sub-Optimal Software Development?

The purest definition of sub-optimal is “being below an optimal level or standard”. However, in the information technology (IT) organization, the development life cycle is characterized by multiple facets, each having its own ‘optimal level or standard’. Sub-optimal software development delivers less value than possible. Unfortunately, sub-optimal, like beauty, is in the eye of the beholder and can therefore differ considerably with context.

A sub-optimal development life cycle is generally characterized by one or more of the following: cost overruns, poor time to market, excessive and/or critical defects, or low productivity. To any particular organization, one or more of these factors would signal sub-optimal software development.

How Would You Know if Your Performance is Sub-Optimal?

A sub-optimal software delivery process can manifest itself in a variety of ways. The most evident, from an external perspective, is customer satisfaction, whether based on an emotional response to the success or failure of the delivered software or on an objective assessment of testing results or value for money. Mediocre quality or time to delivery will surely cause a response on the part of the consumer or client, regardless of the reasons for the mediocrity. From an internal perspective, however, at least three scenarios apply.

First, there is the organization that doesn’t recognize that its software delivery process is sub-optimal. An example would be a company experiencing a solid bottom line and reasonable customer satisfaction, or a company leading the current technology wave that doesn’t foresee an immediate decline. In either case, while such companies may not be ‘sub-optimal’ in the usual sense, they may not be the best that they can be. Since the first step toward improvement is recognizing that there is an improvement to be made, a level of awareness must be reached before this type of organization can progress. For these two companies, the “awareness” may simply be the need to gain or keep a competitive advantage.

The second dynamic shows itself, not so subtly, when management doesn’t really want to address the software delivery process at all; they simply want the software delivered when they want it delivered.

How many times have we seen senior management request a software solution with a fixed delivery date already attached, with no interest in the project manager's response unless it is what they want to hear? In this type of management environment, the IT organization doesn’t invest much in its delivery process because it doesn’t recognize the power of good governance as a vehicle for managing the project and the customer’s expectations.

The third perspective involves an organization that wants to improve its software delivery capability but is unwilling or unable to make the resource investment required.

Experience shows that the contributing life cycle factors can be numerous: an unrealistic schedule, ambiguous user requirements, the availability of appropriate resources, excessive defects and/or testing, and so on. In this scenario, however, the visible issues may seem overwhelming, or the root causes too obscure, for the organization to arrive at a viable solution.

Regardless of how sub-optimal performance manifests itself, the best way to determine whether it exists, and how pervasive it is, is to execute a benchmark. According to Financial Executive, benchmarking is one of the most widely used and most successful management tools among global senior executives. The purpose of a benchmark is to improve decision making and resource management in order to have a quantifiable impact on a company’s bottom line.

“Benchmarking is the process through which a company measures its products, services, and practices against those of companies recognized as leaders in its industry. Benchmarking enables managers to determine what the best practice is, to prioritize opportunities for improvement, to enhance performance relative to customer expectations, and to leapfrog the traditional cycle of change. It also helps managers to understand the most accurate and efficient means of performing an activity, to learn how lower costs are actually achieved, and to take action to improve a company's cost competitiveness.”2

According to C.J. McNair and Kathleen H.J. Leibfried in their book, “Benchmarking: A Tool for Continuous Improvement”, some potential "triggers" for the benchmarking process include:

  • quality programs
  • cost reduction/budget process
  • operations improvement efforts
  • management change
  • new operations/new ventures
  • rethinking existing strategies
  • competitive assaults/crises

Any of the triggers above could certainly have been influenced by sub-optimal development.

In IT benchmarking, there are several options, any of which would be appropriate in addressing sub-optimal performance depending on the causal analysis desired. Horizontal benchmarking (across multiple teams), vertical benchmarking (across certain processes or categories), infrastructure benchmarking (data centers, networks, end-user support) or strategy benchmarking (information technology strategy or business-technology alignment) are some of the types used. The most common benchmark is the ADM software development benchmark.

The key factors addressed in an ADM benchmark are cost, time to market, quality, and productivity. While benchmarking will identify good practices and successes, it is most beneficial at highlighting sub-optimal activities: inefficiencies and problems in methodology, staffing, planning, productivity, cost, or capability across project sizes, types, and technologies. Improvement actions are also proposed. In many cases, a benchmark can identify the probability of successful delivery against time, budget, and quality targets, and propose alternative scenarios with a higher likelihood of success. Other measures typically provided are internal rate of return / return on investment (ROI) and estimated technical debt, reflected in the ratio of spending on development versus maintenance. Either can be a key indicator of sub-optimal performance.
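To make the four key factors concrete, here is a minimal sketch of the kind of per-project arithmetic an ADM benchmark aggregates. The field names, sample figures, and the industry baseline are illustrative assumptions, not DCG's actual model:

```python
from dataclasses import dataclass

@dataclass
class ProjectMetrics:
    """Per-project inputs typically collected for an ADM-style benchmark."""
    function_points: float   # delivered size (e.g., IFPUG function points)
    effort_pm: float         # effort in person-months
    cost: float              # total project cost
    defects: int             # defects found post-delivery

    @property
    def productivity(self) -> float:
        """Function points delivered per person-month."""
        return self.function_points / self.effort_pm

    @property
    def cost_per_fp(self) -> float:
        """Cost per delivered function point."""
        return self.cost / self.function_points

    @property
    def defect_density(self) -> float:
        """Defects per delivered function point (a quality signal)."""
        return self.defects / self.function_points

def productivity_ratio(p: ProjectMetrics, baseline_fp_per_pm: float) -> float:
    """Productivity relative to an industry baseline (1.0 = at par).
    Ratios well below 1.0 are one signal of sub-optimal delivery."""
    return p.productivity / baseline_fp_per_pm

# Hypothetical project: 400 FP delivered with 50 person-months of effort.
project = ProjectMetrics(function_points=400, effort_pm=50,
                         cost=600_000, defects=32)
print(f"Productivity:   {project.productivity:.1f} FP/PM")
print(f"Cost per FP:    ${project.cost_per_fp:,.0f}")
print(f"Defect density: {project.defect_density:.3f} defects/FP")
print(f"vs. baseline:   {productivity_ratio(project, 10.0):.2f}x")
```

A real benchmark normalizes these figures by project size, type, and technology before comparing against peer data; this sketch only shows the raw ratios.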

There are other benefits of benchmarking. Benchmarking:

  • provides an independent, measurable verification, by an objective third party, of a team’s capability to perform against time and cost parameters;
  • signals management's willingness to pursue a philosophy that embraces change in a proactive rather than reactive manner;
  • establishes meaningful goals and performance measures that reflect an external/customer focus, foster innovative thinking, and focus on high-payoff opportunities;
  • creates organizational awareness of competitive disadvantage; and
  • promotes teamwork that is based on competitive need and driven by concrete data analysis, not intuition or gut feeling.

In summary, benchmarking would:

  • show you how you stack up against others and how you are performing internally;
  • act as a catalyst for change by setting realistic improvement targets; and
  • provide a path for the company toward optimal software practices.

How Do We Measure for Optimal Development?

Ultimately, the benchmarking exercise enables executive management to improve software development performance by using data-driven decisions to prioritize improvements. While the benchmark provides an evaluation of existing methods and outcomes against industry-standard best practices, it also produces a gap analysis with recommendations for maximum return on investment (ROI).

The next step toward software process optimization is a structured road map: the development of strategic goals based on the benchmark results, plus team analytics that are mapped to and support those goals. From the road map exercise, scorecards and dashboards are developed to provide feedback to management.

The scorecard combines an organization’s overall business strategy with the strategic goals set by the road map. These factors, and the targets associated with them, reconcile to the desired state, and alerts can be set up to identify situations in which targets will not be met, enabling proactive action. Generally, the scorecard is used to focus on long-term solutions.
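The target-versus-actual alerting described above can be sketched in a few lines. The metric names, targets, and tolerance below are hypothetical, and the sketch assumes higher values are better for every metric:

```python
def scorecard_alerts(actuals: dict, targets: dict, tolerance: float = 0.10) -> list:
    """Compare tracked metrics against their targets and return alert
    messages for any that missed, are within `tolerance` of the target
    (i.e., trending toward a miss), or have no data collected."""
    alerts = []
    for metric, target in targets.items():
        actual = actuals.get(metric)
        if actual is None:
            alerts.append(f"{metric}: no data collected")
        elif actual < target:
            alerts.append(f"{metric}: MISSED target ({actual} < {target})")
        elif actual < target * (1 + tolerance):
            alerts.append(f"{metric}: at risk (within {tolerance:.0%} of target)")
    return alerts

# Hypothetical scorecard targets and current actuals.
targets = {"productivity_fp_per_pm": 10.0, "on_time_delivery_pct": 90.0}
actuals = {"productivity_fp_per_pm": 8.0, "on_time_delivery_pct": 95.0}
for alert in scorecard_alerts(actuals, targets):
    print(alert)
```

A production scorecard would also handle metrics where lower is better (cost, defect density) and track the trend over successive periods rather than a single snapshot.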

A dashboard, formally, identifies and manages metrics within an interactive visual interface, enabling continual interaction with and analysis of the data. It suits a shorter-cycle, snapshot approach, providing charts, graphs, maps, and gauges, each with its own metrics, to support quicker decision making. A dashboard may be the visualization of the scorecard, or there may be hybrids of both.


Benchmarking and metrics modeling are the primary tools for recognizing and addressing sub-optimal development delivery, enabling a company to become or stay competitive. Under the road map approach, the measurement and presentation of data for management use are key to recognizing and supporting optimal development processes.


  • 1 Dave Farley on the Rationale for Continuous Delivery, QCon London, March 8, 2015.
  • 2 Reference for Business.
  • C.J. McNair and Kathleen H.J. Leibfried, “Benchmarking: A Tool for Continuous Improvement,” 1993.
  • DCG Software Value (formerly David Consulting Group), “Measuring Success – Benchmarking as a Tool for Optimizing Software Development Performance,” 2015.
  • Lyndsay Wise, “A Closer Look at Scorecards and Dashboards,” April 27, 2010.
  • David Herron and Sheila P. Dennis, “Why Can’t We Estimate Better?,” 2013.

Benchmarking Survey

Thanks from DCG!

We're collecting information on the use of benchmarking in organizations and we need your help! Please take two minutes and answer six quick questions in our survey:

  • Does your company use benchmarks currently?
  • How often does your company get new benchmarks?
  • What benchmark vendor does your company use (or used previously)?
  • Which department in your company deals with the baseline vendor?
  • Are you familiar with any benchmark vendors? If so, which ones?
  • What are your personal feelings about the use of benchmarks?

If you're interested in the results, we're happy to share! Just email Mike Harris for more information.

Take the survey here.


Join DCG for a Benchmarking Training Class


We're excited to announce that we are offering public training classes with international benchmarking expert Bram Meyerson, founder and CEO of QuantiMetrics, a DCG partner organization.

We'll be holding one class on May 14 in Seattle (perfect for those attending the CMMI Institute Conference, which takes place May 12-13 in Seattle!) and a second on May 18 outside of Philadelphia.

The classes will explore historic or retrospective benchmarking practices. Bram will also discuss how benchmarking can be used to mitigate risks inherent in optimistic plans or proposals from in-house teams or external suppliers.

Register by March 31st for the Early Bird Discount!

More details about the class - and registration - are available here. If you have any questions, just let us know!

Register Now.


"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
