How can I establish a software vendor management system?

Scope of Report

This month’s report will focus on two key areas of vendor management. The first is vendor price evaluation, which involves projecting the expected price for delivering on the requirements. The second is vendor governance, the process of monitoring and measuring vendor output through the use of service level measures.

Vendor Price Evaluation

Vendor price evaluation seeks to enable pricing based on an industry-standard unit of measure for functionality, putting the buyer and seller on a level playing field for pricing, bid evaluation, and negotiation.

Organizations leverage third party vendors for the development of many of their software initiatives. As such, they are continuously evaluating competing bids and looking for the best value proposition.

Being able to properly size and estimate the work effort is critical to evaluating the incoming vendor bids. Furthermore, an internally developed estimate provides a stronger position for negotiating terms and conditions. The effective delivery of an outsourced project is in part dependent on an open and transparent relationship with the vendor. A collaborative estimating effort provides for greater transparency, an understanding of potential risks, and a collective accountability for the outcomes.

To better control the process, an economic metric is recommended to provide the ability to perform true value analysis. This metric is based on historical vendor spending over a diverse sampling of outsourced projects, thus creating an experiential cost-per-unit “baseline”. Knowing the cost-per-unit price gives you leverage in negotiation: instead of using hours billed as a fixed-price measure, you know the functional value of the deliverables, which allows billing on a per-unit-delivered basis.

To achieve this, we recommend the use of function points as a measure of the functional size of the project. Function Points (FPs) provide an accurate, consistent measure of the functionality delivered to the end user, independent of technology, with the ability to execute the measurement at any stage of the project, beginning at completion of requirements. Abundant function point-based industry benchmark data is available for comparison.

By comparing historical cost-per-FP to industry benchmark data, organizations can quickly determine whether they have been over- or under-spending. Under-spending may not seem like a problem, but under-bidding by vendors is an established tactic to win business at a price that may not be sustainable. If forced to sustain an unrealistically low price, vendors may respond by populating project teams with progressively cheaper (and weaker) staff, to the point where quality drops and/or delivery dates are missed. At that point, having great lawyers to enforce the original contract doesn’t help much.
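
As a rough illustration (all project figures and the benchmark rate below are invented), the following sketch derives an experiential cost-per-FP baseline from historical vendor projects and compares it against an industry benchmark to flag over- or under-spending:

```python
# Hypothetical historical outsourced projects: (vendor cost in $, delivered function points)
historical_projects = [
    (450_000, 520),
    (300_000, 410),
    (720_000, 760),
]

# Assumed industry benchmark rate in $ per FP (illustrative only)
INDUSTRY_COST_PER_FP = 950

# Experiential baseline: total spend divided by total functionality delivered
total_cost = sum(cost for cost, _ in historical_projects)
total_fp = sum(fp for _, fp in historical_projects)
baseline_cost_per_fp = total_cost / total_fp

# Compare the baseline to the benchmark to flag over- or under-spending
variance_pct = (baseline_cost_per_fp - INDUSTRY_COST_PER_FP) / INDUSTRY_COST_PER_FP * 100
print(f"Baseline: ${baseline_cost_per_fp:,.0f}/FP ({variance_pct:+.1f}% vs. benchmark)")
if variance_pct > 10:
    print("Historically over-spending relative to the benchmark.")
elif variance_pct < -10:
    print("Historically under-spending; check whether vendor bids are sustainable.")
```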

Implementing this approach provides an organizational methodology for bid evaluation and a metric for determination of future financial performance.

Vendor Governance

The key to a successful vendor governance program is an effective set of Service Level Agreements (SLAs) backed up with historical or industry benchmarked data and agreement with the vendor on the SLAs.

The measures, data collection, and reporting will depend on the SLAs and/or the specific contract requirements with the software vendor. Contracts may be based strictly on cost-per-FP or they may be based on the achievement of productivity or quality measures. A combination can also be used with cost-per-FP as the billable component and productivity and quality levels used for incentives and/or penalties.
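
As a sketch of the combined model described above (the rate, thresholds, and incentive size are assumptions, not contract terms), billing could be computed on delivered FPs with an adjustment driven by the productivity and quality SLAs:

```python
def vendor_invoice(delivered_fp, effort_hours, delivered_defects,
                   rate_per_fp=900.0,           # agreed billable rate (assumed)
                   target_hours_per_fp=8.0,     # productivity SLA (assumed)
                   target_defects_per_fp=0.02,  # quality SLA (assumed)
                   adjustment_pct=0.05):        # incentive/penalty size (assumed)
    """Cost-per-FP billing with a productivity/quality incentive or penalty."""
    base = delivered_fp * rate_per_fp
    hours_per_fp = effort_hours / delivered_fp
    defects_per_fp = delivered_defects / delivered_fp

    if hours_per_fp <= target_hours_per_fp and defects_per_fp <= target_defects_per_fp:
        adjustment = base * adjustment_pct       # incentive: both SLAs met
    else:
        adjustment = -base * adjustment_pct      # penalty: at least one SLA missed
    return base + adjustment

print(f"Invoice: ${vendor_invoice(500, 3_800, 8):,.0f}")
```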

There are a number of key metrics that must be recorded, from which other measures can be derived. The key metrics commonly used are: Size, Duration, Effort, Staff, Defects, Cost, and Computer Resources. For each key metric, a decision must be made as to the most appropriate unit of measurement. See the appendix for a list of key metrics and associated descriptions.

The service level measures must be defined in line with business needs and, as each business is different, the SLAs will be different for each business. The SLAs may be the typical quality, cost, productivity SLAs or more focused operational requirements like performance, maintainability, reliability or security needs within the business. All SLAs should be based on either benchmarked or historical data and agreed with the vendor. Most SLAs are a derivative of the base metrics and thus quantifiable.
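
To make the derivation concrete, a minimal sketch follows; the field names and example values are illustrative rather than a prescribed schema:

```python
# Base metrics recorded for one delivery (illustrative values)
base = {
    "size_fp": 420,         # functional size in function points
    "effort_hours": 3_150,  # total project effort
    "duration_months": 6,   # calendar duration
    "defects": 9,           # defects found in acceptance/warranty
    "cost": 380_000,        # total vendor cost in $
    "peak_staff_fte": 5,    # peak full-time equivalents
}

# Derived measures commonly used as SLAs
derived = {
    "hours_per_fp": base["effort_hours"] / base["size_fp"],     # productivity
    "defects_per_fp": base["defects"] / base["size_fp"],        # quality (defect density)
    "cost_per_fp": base["cost"] / base["size_fp"],              # economic rate
    "fp_per_month": base["size_fp"] / base["duration_months"],  # delivery rate
}

for name, value in derived.items():
    print(f"{name}: {value:,.2f}")
```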

One output measure to consider adding is business value; fundamentally, the reason we are developing any change should be to add business value. Typically, business value isn’t an SLA but it can add real focus on why the work is being undertaken and so we are now recommending it. The business value metric can be particularly helpful in the client-vendor relationship because it helps to align the business priorities of the client and the vendor (or to highlight any differences!).

The key is to define the measures and the data components of the measures prior to the start of the contract to avoid disputes during the contract period.

Reporting

Measurement reports for vendor management are typically provided during the due diligence phase of vendor selection and during the execution of the contract. During due diligence, the reports should provide the vendor with the client expectations regarding the chosen measures (e.g. Cost-per-FP, hours-per-FP, etc.). During the life of the contract, reports should be produced to show compliance to contract measures and to aid in identifying process improvement opportunities for all parties.

The typical reporting for vendor management consists of balanced scorecards for senior level management, project reports for project managers, and maintenance reports for support areas.

Balanced scorecard

These reports provide a complete picture of all measures required for managing the contract. These are typically summary reports that include data from multiple projects. The Balanced Scorecard Institute states that, “the balanced scorecard was originated by Robert Kaplan and David Norton as a performance measurement framework that added strategic non-financial performance measures to traditional financial metrics to give managers and executives a more 'balanced' view of organizational performance”.

In the case of software vendor management, the scorecard should have multiple measures that show contract results. For example, even though productivity may be a key ‘payment’ metric, quality should also be included to ensure that in efforts to improve productivity, quality does not suffer. The report should also include a short analysis that explains the results reported to ensure appropriate interpretation of the data.

Project reporting

These reports focus on individual projects and are provided to project teams. The reports should contain measures that support the contract and provide insight into the project itself. Analysis should always be provided to assist teams with assessing their project and identifying process improvement opportunities to better meet the contract requirements.

Maintenance reporting

These reports are at an application level and would be provided to support staff. This data would provide insight into the maintenance/support work being conducted. Again, this would be in support of specific contract measures, but it can also be used to identify process improvement opportunities and/or identify which applications may be candidates for redesign or redevelopment.

Data Definition and Collection

Data definition and collection processes need to be developed to support the reporting. As stated in the book IT Measurement – Practical Advice from the Experts, this step should “focus on data definition, data collection points, data collection responsibilities, and data collection vehicles”. Who will collect the data? When will it be collected? How will it be collected? Where will it be stored? These are important questions for driving the implementation of the contract measurements, but they all depend on the most difficult step: data definition.

Data definition involves looking at all of the data elements required to support a measure and ensuring that both the client and the vendor have the same understanding of the definition. Since most software vendor contracts utilize productivity (FP/effort), this report will focus on defining the data elements of FPs and effort by way of example.

Function Point Data Definition

Function point guidelines should be developed for all parties to follow. This should include which industry standard will be used (e.g., International Function Point Users Group Counting Practices Manual 4.x) as well as any company-specific guidelines. Company-specific guidelines should not change any industry standard rules, but should provide guidance on how to handle specific, potentially ambiguous situations. For example, how will purchased packages be counted: will all functions be counted, or only the ‘customized’ functions? Another consideration is how changes to requirements throughout the lifecycle will be counted. For example, some organizations count functions once per project unless a changed requirement is introduced late in the life cycle (e.g., during system testing), in which case a function may be counted more than once. Guidelines need to be established up front for as many situations as possible, but they may need to be updated throughout the life of the contract as new situations arise.

Effort Data Definition

Effort can be one of the more contentious data elements to define in software vendor management systems. It is important to determine which life cycle activities are included in each aspect of the software vendor contract. For instance, if productivity is an SLA or a payment incentive, vendors will want to exclude certain activities that clients may want to include. One example is setting up a test environment for a project: a vendor may want to exclude this from the productivity calculations, while a client may think it should be included. A rule of thumb is that if an activity is required specifically for the project, the effort should be included; if the activity sets up something for all projects to use, it should be excluded. So, in the test environment example, if the vendor is setting up scenarios or simulators to test specific project functionality, the effort should be included in the project productivity calculation; if the vendor is installing servers to host test data and tools, the effort should be excluded. There are more effort categories to examine than can be covered in this report. A separate decision is whether or not to include “overtime” hours; the recording of overtime in time management systems tends to vary widely, even within organizations, because many software development employees are not paid for overtime. The important thing is for vendors and clients to work together to define and document the guidelines.
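
A hedged sketch of that rule of thumb: classify each effort record as project-specific or shared infrastructure and include only the former in the productivity calculation (activity names and hours are invented for illustration):

```python
# Effort records: (activity, hours, project_specific?)
effort_records = [
    ("Build test scenarios/simulators for project functions", 120, True),   # include
    ("Execute system test",                                   300, True),   # include
    ("Install shared test servers and tooling",                80, False),  # exclude
    ("Overtime on defect fixes",                               40, True),   # per agreed guideline
]

included_hours = sum(h for _, h, project_specific in effort_records if project_specific)
excluded_hours = sum(h for _, h, project_specific in effort_records if not project_specific)

delivered_fp = 150  # illustrative project size
print(f"Included: {included_hours} h, excluded: {excluded_hours} h")
print(f"Productivity: {included_hours / delivered_fp:.2f} hours per FP")
```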

Code Quality Analytics

In addition to the standard SLAs and beyond functional testing, code analytics and an application analytics dashboard can provide an IT organization with key insights into code quality, reliability and stability of the code being delivered by the vendor.

Code analytics tools, such as those provided by CAST Software, analyze the delivered code to detect structural defects, security vulnerabilities and technical debt. The metrics generated by these tools can be used as SLAs.

There is value in understanding what is being developed throughout the lifecycle. In this way security, performance and reliability issues can be understood and addressed earlier while still in development.

In a waterfall development environment, code analytics can be executed at defined intervals throughout the lifecycle and after deployment to production. In an Agile framework, code analytics can be run as part of each code build, at least once per sprint, and code quality issues can be resolved in real time.
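
As a sketch of how analytics results might gate a build in an Agile pipeline, the snippet below applies assumed thresholds to a generic set of findings; the field names and limits are hypothetical and not tied to any particular tool’s output format:

```python
# Hypothetical findings exported from a code-analytics run for one build
findings = {
    "critical_structural_defects": 2,
    "security_vulnerabilities": 0,
    "technical_debt_hours_added": 35,
}

# Assumed quality-gate thresholds agreed with the vendor
thresholds = {
    "critical_structural_defects": 0,
    "security_vulnerabilities": 0,
    "technical_debt_hours_added": 40,
}

violations = {k: v for k, v in findings.items() if v > thresholds[k]}
if violations:
    print("Quality gate FAILED:", violations)   # fail the build / raise with the vendor
else:
    print("Quality gate passed for this build.")
```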

Having this information early in the lifecycle enables fact-based vendor management. Code analytics, along with traditional measurements, provides the buyer with the information needed to manage vendor relationships and ensure value from IT vendor spend.

Conclusion:

A robust vendor management system includes:

  • Pricing evaluation using industry standard measures to promote meaningful negotiations,

  • Service level metrics backed up with historical or industry-benchmarked data and

  • Code analytics to ensure quality, reliability and stability are built into the systems being developed.

With these components in place, an organization can efficiently manage vendor risk, monitor and evaluate vendor performance, and ensure value is derived from every vendor relationship.

Sources:

  • Balanced Scorecard Institute website: www.balancedscorecard.org/resources/about-the-balanced-scorecard

  • “IT Measurement: Practical Advice from the Experts.” International Function Point Users Group. Addison-Wesley (Pearson Education, Inc.), 2002. Chapter 6: Measurement Program Implementation Approaches.

  • CAST Software website, Application Analytics Software: http://www.castsoftware.com/

GLOSSARY

Project size can be described in several ways, with source lines of code (SLOC) and function points being the most common.

Function Points

The industry standard approach to functional size is Function Points (FPs). It is a technology-agnostic approach and can be performed at any point in the lifecycle.

FP analysis provides real value as a sizing tool. Even in software developed using the latest innovations in technology, the five components of function point analysis still exist, so function point counting remains a valuable tool for measuring software size. Because an FP count can be done from a requirements document or user stories, and the expected variance between two certified function point analysts is between 5% and 10%, an accurate and consistent measure of project size can be derived. And because FP analysis is based on the user’s view and is independent of technology, it works just as well as technology evolves.

SLOC

Source lines of code (SLOC) is a physical view of size, but it can only be derived at the end of a project.

It has some inherent problems: inefficient coding produces more lines of code, and determining the SLOC size of a project before it is coded is itself an estimate.

However, SLOC can be used retrospectively to review a project’s performance, and you should consider ESLOC (effective source lines of code) to remove the expert/novice factor of “more lines of code” highlighted above.

Code analysis tools like CAST can provide excellent diagnostics and even FP counts based on the code.

Story Points

Projects in an Agile framework typically use Story Points to describe their relative size. Story Points work well within a team but are ineffective for comparing relative size at an organizational level.

For example, a team can double its velocity simply by doubling the number of story points it assigns to each story. Story Points can also vary from one team to another, as they are only relevant to the team, and sometimes only to the sprint, in question.

Time (Duration)

Simply the time measure for completing the project and/or supporting the application. This is calendar time, not effort.

Effort

Effort is the amount of time required to complete a project and/or support an application. Hours are typically used as the metric because they are standard across organizations; work days or months may have different definitions from one organization to the next.

Effort is one of the more challenging pieces of data to collect and the granularity at which you can analyze your measures is determined by how you record and capture the effort.

In agile teams, effort is relatively fixed while the work performed is flexible, so if you want to analyze testing performance, for example, you need to know the split of testing work, and so on.

Quality

Quality is a key measure in a vendor management situation because the quality of the code coming into testing and production determines how well the project performs. We are all aware of the “throw it over the wall” mentality when deadlines start to hit, and the resulting cost is defects delivered to production.

A common request is how many defects are expected for a project of a particular size.

The truth is that the answer is not straightforward, as organizations have different views of what a defect is and how to grade defects. Set the criteria with the vendor first, within the contract framework, and then record consistently going forward. A view of historical performance is extremely useful here as well.

Defects should be measured during user acceptance testing and after go-live during the warranty period, and used to predict future volumes and to identify releases where further investigation or discussion is warranted.

Staff – FTEs

This is the people metric. It is usually measured in FTEs (full-time equivalents) so that we have a comparable measure: a project might have had 20 different people contribute with a peak staff of 8 FTEs, or 10 people with the same effort and staffing profile; it is the FTE figure that is consistent and comparable.

Resource type can also be relevant here, so attributes such as onshore/offshore, contractor/permanent/consultant, or designer/manager/tester may need to be included.

Cost

This may be actual cost or a blended rate per hour. Where multiple currencies are involved, assumptions about the appropriate exchange rates may need to be fixed.

Computer Resources

Computer resources cover the parameters of the technology environment, such as platform and programming language. This final metric captures the “what” and the “how”, allowing comparison against similar project types by language and technical infrastructure.


Function Points in the Philippines?

I know what you are thinking – and I was thinking the same thing. Is there really a company using function points in the Philippines? Yes, there is. In fact, I recently traveled to the Philippines to train a lean development team on the use of function points as a measure for their metrics program.

I’d never been to the Philippines before, so it was an interesting experience to be in a new place, but it was also interesting to see how function points are being used around the world.

This particular team is well above the curve as far as their processes and documentation are concerned. They were looking for a standardized process to size their change requests in order to measure productivity and quality and to help forecast future projects. Their intent is to start sizing those small change requests and establish some benchmarks for their applications. Once those benchmarks are in place and they have gathered some valuable data, the team intends to start using the data to help size their larger projects, which will allow them to better manage staffing, cost, and quality.

This, of course, is something we encourage and espouse as a best practice for development teams. So, it was great to see that this mentality has really taken hold in this organization, and that they truly understand the benefits of sizing.

I also learned while I was there that several larger technology companies are starting to use the Philippines as a location for their infrastructure and development teams. So you may not think of the Philippines when it comes to the IT domain, but the country is starting to make strides towards closing the gap on the rest of the IT world. I know for sure that one lean development team has closed the gap, and function points have allowed them to standardize their process even further.

I look forward to seeing how the use of function points continues to develop in the country – and beyond. Is your organization using function points? If not, it’s time to catch up!

 

David Lambert
Managing Consultant


The Mathematical Value of Function Points

"Anything you need to quantify can be measured in some way this is superior to not measuring it
all." – Gilb’s Law(1).

To assess the value of function points (of any variety), it is important to step back and address two questions. The first is “What are function points (in a macro sense)?” and the second is “Why do we measure?”

Function points are a measure of the functional size of software. What, then, are IFPUG Function Points? IFPUG Function Points (there are several non-IFPUG variants) are a measure of the functionality delivered by a project or application. The measure is generated by counting the features and functions of the project or application based on a set of rules; in this case, the rules are documented in the IFPUG Counting Practices Manual. Using the published rules, the IFPUG Function Point measure is a consistent and repeatable proxy for size, and consistency and repeatability increase the usefulness of estimating and measurement. An analogy for the function point size of a project is the square footage of a house when building or renovating: knowing the number of square feet provides one view of the house, but not other attributes, such as the number of bedrooms. A project function point count measures the functional size of a project, while an application count measures the functional size of an application.

The question of why we measure is more esoteric. The stated reasons for measuring often include:

  • To measure performance,
  • To ensure our processes are efficient,
  • To provide input for managing,
  • To estimate,
  • To pass a CMMI appraisal,
  • To control specific behaviour, and
  • To predict the future.

Douglas Hubbard (2) summarizes the myriad reasons for measuring into three basic categories:

1. Measure to satisfy a curiosity.
2. Measure to collect data that has external economic value (selling of data).
3. Measure in order to make a decision.

The final reason, to make a decision, is the crux of why measurement has value in most organizations. The decision is the driver to the value of counting function points. The requirements for making a decision are uncertainty (lack of complete knowledge), risk (a consequence of making the wrong decision) and a decision maker (someone to make the decision).  

The attribute of uncertainty reflects the fact that there is more than one possible outcome for a decision. The measurement of uncertainty can be represented as a set of probabilities assigned to the possible outcomes. For example, there are two possibilities for the weather tomorrow: precipitation or no precipitation. The measurement of uncertainty might be expressed as a 60% chance of rain (from which we can infer a 40% chance of no rain). Risk is defined as the uncertainty that a loss or some other “bad thing” will occur. In this case, the risk might be that we intend to go on a picnic if it does not rain and must spend $30 the day before on food that will perish if we cannot go.

Measurement of risk is the quantification of the set of possibilities, combining the probability of occurrence with the quantified impact of an outcome. We would express the risk as a 60% chance of rain tomorrow with a potential loss of $30 for the picnic lunch that won't be eaten. In simplest terms, we measure so we can reduce the risk of a negative outcome. In our picnic example, a measure would have value if it allows us to reduce the chance that we spend $30 for a picnic on a rainy day.
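
The picnic example reduces to a short calculation: the measured risk is the probability of the bad outcome multiplied by its quantified impact.

```python
p_rain = 0.60          # probability of the bad outcome
loss_if_rain = 30.00   # quantified impact in dollars (the perishable food)

expected_loss = p_rain * loss_if_rain
print(f"Expected loss: ${expected_loss:.2f}")  # $18.00 of the $30 is at risk
```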

A simple framework, hybridized from Hubbard’s How to Measure Anything, for determining the value of counting function points to support decision making is:

  • Define the decision.
  • Determine what you already know (it may be sufficient).
  • Determine if knowing functional size will reduce uncertainty.
  • Compute the value of knowing functional size (or other additional information).
  • Count the function points if they have economic value.
  • Make the decision!

The Process and an Example:

1. Define the decision.

Function points provide useful information when making some types of decisions. Knowing the size of the software delivered or maintained would address the following questions:

  • How much effort will be required to deliver a set of functionality?
  • Given a potential staffing level, is a date or budget possible?
  • Given a required level of support, is staffing sufficient?

Summarizing the myriad uses of function points into four primary areas is useful for understanding where knowing size reduces uncertainty.

a) Estimation: Size is a partial predictor of effort or duration, and estimating projects is an important use of software size. Mathematically, effort is a function of size, behavior, and technical complexity. All parametric estimation tools, home-grown or commercial, require project size as one of the primary inputs. A simple parametric model that relates effort to size and complexity is an example of how knowing size reduces uncertainty (see the sketch after this list).
b) Denominator: Size is a descriptor generally used to add interpretive information to other attributes or to normalize them. When used to normalize other measures or attributes, size is usually the denominator; effort per function point is an example. Using size as a denominator helps organizations make performance comparisons between projects of differing sizes. For example, if two projects each discovered ten defects after implementation, which had better quality? The size of the delivered functionality would have to be factored into the discussion.
c) Reporting: Collect the measures needed to paint a picture of project performance, progress, or success. Measurement data can be leveraged for organizational report cards and performance comparisons, with functional metrics used as a denominator to synchronize many disparate measures and allow comparison and reporting.
d) Control: Understanding performance allows project managers, team leaders, and project team members to understand where they are in an overall project or piece of work and, therefore, to take action to change the trajectory of the work. This knowledge allows the organization to control the flow of work in order to influence the delivery of functionality and value in a predictable and controlled manner.
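
As referenced in item (a), a minimal parametric sketch follows; the coefficients are purely illustrative and would normally be calibrated from historical or benchmark data:

```python
def estimate_effort_hours(size_fp: float, complexity_factor: float = 1.0,
                          a: float = 15.0, b: float = 1.05) -> float:
    """Toy parametric model: effort = a * size^b, adjusted for complexity.

    The coefficients a and b are assumptions for illustration; real models
    are calibrated against historical project data or benchmark datasets.
    """
    return a * (size_fp ** b) * complexity_factor

print(f"Estimated effort for 400 FP: {estimate_effort_hours(400):,.0f} hours")
```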

2. Determine what you already know (it may be enough).

Based on the decision needs, the organization may already have sufficient information to reduce uncertainty and make the decision. For example, if a table update is made every month and takes 10 hours to build and test, then no additional information is needed to predict how much effort next month’s change will require. However, when asked to predict a release for a fixed but un-sized backlog, more data should be collected.

3. Determine if knowing functional size will reduce uncertainty.

Not all software development decisions will be improved by counting function points (at least in their purest form). Function point counting for work that is technical in nature (hardware and platform related), non-functional in nature (changing the color of a screen), or an effort to correct defects rarely provides significant economic value.

4. Compute the value of knowing the functional size (or other additional information).

One approach to determining whether measurement will provide economic value is to calculate the expected opportunity loss. As a simple example, assume a high-profile $10M project estimated to have a 50% chance of being on budget (or below) and a 50% chance of being 20% over budget.

In table form:

  • On or under budget: 50% probability, $0 opportunity loss
  • 20% over budget ($2M): 50% probability, $2M opportunity loss

The expected opportunity loss is $1M (50% * $2M, very similar to the concept of Weighted Shortest Job First used in SAFe®). In this simple example, perfect information would let us avoid the $2M over-budget scenario entirely, so the expected value of perfect information equals the expected opportunity loss of $1M. If counting function points and modeling the estimate improves the probability of meeting the budget to 75%, then the expected opportunity loss falls to $500K (a 50% reduction).
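
A short sketch of the arithmetic above, using the figures from the illustrative example (the cost of the count is an added assumption):

```python
over_budget_loss = 2_000_000        # loss if the project runs 20% over a $10M budget

# Before measurement: 50% chance of being over budget
eol_before = 0.50 * over_budget_loss            # expected opportunity loss = $1.0M

# After counting FPs and modeling the estimate: probability of meeting budget rises to 75%
eol_after = 0.25 * over_budget_loss             # expected opportunity loss = $0.5M

value_of_measurement = eol_before - eol_after   # reduction in expected loss = $0.5M
cost_of_counting = 25_000                       # assumed cost of the count and the estimate

print(f"Value of the measurement, net of its cost: ${value_of_measurement - cost_of_counting:,.0f}")
```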

5. Count the function points if there is economic value.

Assuming that the cost of the function point count and the estimate is less than the reduction in expected opportunity loss, there is value in counting function points. The same basic thought process applies when deciding whether to collect any measure.

6. Make the decision!

Using the reduced uncertainty, make the decision. For example, if the function point count, and the estimate based on that count, reduce our uncertainty about meeting the budget by 50%, we would be more apt to decide to do the project and to worry less about the potential ramifications to our career.

Conclusion

While the scenario used to illustrate the process is simple, the basic process can be used to evaluate the value of any measurement activity. The reduction in expected opportunity loss, less the amount spent on measurement, is the value of the function point count. Modeling techniques such as Monte Carlo analysis and calibrated estimates, along with the use of historical data, are useful for addressing more robust scenarios. Counting function points reduces uncertainty so that we can make better decisions. If this simple statement is true, we can measure the economic value of counting function points.

Sources

1. DeMarco, Tom and Lister, Tim. Peopleware: Productive Projects and Teams (3rd Edition). 2013.
2. Hubbard, Douglas. How to Measure Anything. (Third Edition). 2014. Wiley. 



Measurement Roadmap

A complaint we hear a lot in this industry is that IT struggles to prove its value to the business. One of the easiest ways to combat this is to align IT directly with the goals of the business - but how do you do that? The key is measurement.

No shock coming from us, right? Measurement. But, without metrics you can't accurately report on your actions, and you simply can't provide comprehensive insight into how IT supports the business.

Our Measurement Roadmap outlines metrics that match up to the technical and organizational goals of your business, facilitating effective, real-time monitoring and decision support.


The bottom line is that IT can't prove its worth without a mechanism to measure its worth. Find the right measures for your organization and satisfy all stakeholders.


How Would I Know How Badly We Are Losing Out Through Sub-Optimal Software Development?

Scope of this Report

Every company wants to maximize its profits while meeting its customer expectations. The primary purpose of software delivery is to provide a product to the customer that will validate a business idea, and ultimately provide value to the end-user. There must be feedback between the customer and the business, and this iterative process must be performed quickly, cheaply and reliably.1 The real question is how does an organization know whether its software delivery is performing at optimal levels?

This report considers the following topics:

  • What is sub-optimal software development?
  • How would you know if your performance is sub-optimal?
  • How do we measure for optimal development?

What is Sub-Optimal Software Development?

The purest definition of sub-optimal is “being below an optimal level or standard”. However, in the information technology (IT) organization, the development life cycle is characterized by multiple facets, each having its own ‘optimal level or standard’. Sub-optimal software development delivers less value than possible. Unfortunately sub-optimal, like beauty, is in the eye of the beholder and therefore can be very different based on context.

A sub-optimal development life cycle is generally characterized by one or more of the following: cost overruns, poor time to market, excessive and/or critical defects, or low productivity. To any particular organization, one or more of these factors would signal sub-optimal software development.

How Would You Know if Your Performance is Sub-Optimal?

A sub-optimal software delivery process can manifest itself in a variety of ways. The most evident, from an external perspective, is customer satisfaction, whether based on an emotional response to the success or failure of the delivered software or on an objective assessment based on testing or value for money. Mediocre quality or time to delivery will surely cause a response on the part of the consumer or client, regardless of the reasons for the mediocrity. From an internal perspective, however, there are at least three scenarios that apply.

First, we have the organization that doesn’t recognize that its software delivery process is sub-optimal. An example would be a company experiencing a solid bottom line and reasonable customer satisfaction, or a company leading the current technology wave that doesn’t see an immediate decline in its near future. In either case, while they may not be ‘sub-optimal’ in the usual sense, they may not be the best that they can be. Since the first step towards improvement is to recognize that there is an improvement to be made, a level of awareness must be reached before this type of organization can progress. In these two companies, the “awareness” may simply be the need to gain or keep a competitive advantage.

This next dynamic shows itself, not so subtly, when management doesn’t really want to address the software delivery process at all; they simply want the software delivered when they want it delivered.

How many times have we seen a situation where senior management has requested a software solution with a fixed delivery date already attached? And they really aren’t interested in any response from the project manager unless they are told what they want to hear. In this type of management environment, the IT organization doesn’t invest much time in its delivery process because it doesn’t realize the power of good governance as a vehicle for properly managing the project and/or the customer’s expectations.

The third perspective involves an organization that wants to improve its software delivery capability but is unwilling or unable to make the investment required for the necessary improvements.

Experience shows that lifecycle contributing factors can be numerous: an unrealistic schedule, ambiguous user requirements, the availability of appropriate resources, excessive defects and/or testing, etc. However, in this scenario, the visible issues may seem overwhelming or the root causes too obscure to be able to come up with a viable solution.

Regardless of how the sub-optimal performance manifests itself, the best way to determine whether it exists, and how pervasive it is, is to execute a benchmark. According to Financial Executive, benchmarking is one of the most widely used and most successful management tools among global senior executives. The purpose of a benchmark is to improve decision making and resource management in order to quantifiably impact a company’s bottom line.

“Benchmarking is the process through which a company measures its products, services, and practices against those of companies recognized as leaders in its industry. Benchmarking enables managers to determine what the best practice is, to prioritize opportunities for improvement, to enhance performance relative to customer expectations, and to leapfrog the traditional cycle of change. It also helps managers to understand the most accurate and efficient means of performing an activity, to learn how lower costs are actually achieved, and to take action to improve a company's cost competitiveness.”2

According to C.J. McNair and Kathleen H.J. Leibfried in their book, “Benchmarking: A Tool for Continuous Improvement”, some potential "triggers" for the benchmarking process include:

  • quality programs
  • cost reduction/budget process
  • operations improvement efforts
  • management change
  • new operations/new ventures
  • rethinking existing strategies
  • competitive assaults/crises

Any of the triggers above could certainly have been influenced by sub-optimal development.

In IT benchmarking, there are several options, any of which would be appropriate for addressing sub-optimal performance depending on the causal analysis desired. Horizontal benchmarking (across multiple teams), vertical benchmarking (across certain processes or categories), infrastructure benchmarking (data centers, networks, end-user support), and strategy benchmarking (information technology strategy or business-technology alignment) are some of the types used. The most common is the ADM (application development and maintenance) software development benchmark.

The key factors addressed in an ADM benchmark are cost, time to market, quality, and productivity. While benchmarking will identify good practices and successes, it is most beneficial at highlighting sub-optimal activities: inefficiencies and problems in methodology, staffing, planning, productivity, cost, or capability across project sizes, types, and technologies. In addition, improvement actions are proposed. In many cases, a benchmark can also identify the probability of successful delivery against time, budget, and quality targets, and propose alternative scenarios with a higher likelihood of success. Other measures typically provided are internal rate of return / return on investment (ROI) and estimated technical debt, i.e., the ratio of spending on maintenance relative to development. Either can be a key indicator of sub-optimal performance.

There are other benefits of benchmarking (listed below). Benchmarking …
... provides an independent, measurable verification of a team’s capability to perform against time and cost parameters by an objective 3rd party;

... signals management's willingness to pursue a philosophy that embraces change in a proactive rather than reactive manner;

... establishes meaningful goals and performance measures that reflect an external/customer focus, foster innovative thinking, and focus on high-payoff opportunities;

... creates organizational awareness of competitive disadvantage; and

... promotes teamwork that is based on competitive need and is driven by concrete data analysis, not intuition or gut feeling.

In summary, benchmarking would:

  • show you how you stack up against others and how you are performing internally;
  • act as a catalyst for change by setting realistic improvement targets; and
  • provide a path for the company toward optimal software practices.

How Do We Measure for Optimal Development?

Ultimately, the purpose of the benchmarking exercise is to enable executive management to improve software development performance, using data-driven decisions to prioritize improvements. While the benchmark provides an evaluation of existing methods and outcomes against industry-standard best practices, it also produces a gap analysis with recommendations for maximum return on investment (ROI).

The next step in software process optimization is to use a structured road map, which includes the development of strategic goals based on the benchmark results and team analytics that are mapped to and support those goals. From the road map exercise, scorecards and dashboards are developed to provide feedback to management.

The scorecard combines the organization’s overall business strategy with the strategic goals set by the road map. These factors, and the targets associated with them, reconcile to the desired state, and alerts can be set up to identify situations where targets will not be met, enabling proactive action. Generally, the scorecard is used to focus on long-term solutions.
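
A minimal sketch of target-versus-actual alerting for such a scorecard; the measure names, targets, and “lower is better” directions are assumptions for the example:

```python
# Scorecard targets: (target value, True if lower values are better)
targets = {
    "cost_per_fp":    (950.0, True),
    "hours_per_fp":   (8.0,   True),
    "defects_per_fp": (0.02,  True),
    "fp_per_month":   (60.0,  False),
}

# Actuals reported for the current period (illustrative)
actuals = {"cost_per_fp": 1_020.0, "hours_per_fp": 7.4,
           "defects_per_fp": 0.03, "fp_per_month": 55.0}

for measure, (target, lower_is_better) in targets.items():
    actual = actuals[measure]
    missed = actual > target if lower_is_better else actual < target
    status = "ALERT" if missed else "on track"
    print(f"{measure}: actual {actual} vs target {target} -> {status}")
```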

The formal definition of dashboards includes the identification and management of metrics within an interactive visual interface to enable continual interaction with and analysis of data. A dashboard suits a shorter-cycle, snapshot approach: it provides varying types of visualizations (charts, graphs, maps, and gauges, each with its own metrics) to enable quicker decision making. A dashboard may be the visualization of the scorecard, or there may be a hybrid of both.

Conclusion

Benchmarking and metrics modeling are the primary tools in recognizing and addressing sub-optimal development delivery to enable a company to become or stay competitive. By using the road map approach, the measurement and presentation of data for management use is key to recognizing and supporting optimal development processes.

Sources:

  • 1Dave Farley on the Rationale for Continuous Delivery; QCON London 2015, March 8, 2015.
  • 2Reference for Business, http://www.referenceforbusiness.com/management/A-Bud/Benchmarking.html.
  • “Benchmarking: A Tool for Continuous Improvement,” C.J. McNair and Kathleen H.J. Leibfried, 1993.
  • “Measuring Success - Benchmarking as a Tool for Optimizing Software Development Performance”, DCG Software Value (formerly David Consulting Group), 2015.
  • “A Closer Look at Scorecards and Dashboards”, Lyndsay Wise, April 27, 2010.
  • “Why Can’t We Estimate Better?” David Herron & Sheila P. Dennis, 2013.

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner

Subscribe to Our Newsletter
Join over 30,000 other subscribers. Subscribe to our newsletter today!